AI-powered search engines: The race is on
Ready, set, go! Well, except not everyone was at the starting line.
Microsoft took off like a bullet with its surprise press event, announcing that it would integrate the large language model (LLM) technology behind ChatGPT, created by OpenAI, into its Bing search engine and Edge browser. Google, which had previously dubbed the threat posed by ChatGPT a ‘code red’, had to react fast. The next day, it announced its conversational AI, named Bard. And this is only the first lap of the race.
What sets ChatGPT apart from previous language models?
OpenAI’s decision to make ChatGPT available to the public for free was a bold move, considering the significant costs of maintaining the model while handling millions of questions from curious users – costs that OpenAI CEO Sam Altman himself called ‘eye-watering’. However, this decision proved to be not only brave but also astute. By making ChatGPT accessible to everyone, OpenAI ignited a spark of curiosity and interest that captured worldwide attention.
In reality, ChatGPT was not the first model to be made available to the public for free. Meta’s Galactica attempted the same feat, suffered an infamous backlash, and was taken offline after only a few days. So how did ChatGPT succeed where Galactica didn’t? The answer lies mainly in how it was advertised, and to whom. Galactica was presented to the academic community as an AI capable of effortlessly writing scientific papers – a field with a demanding and critical audience that is easy to disappoint. ChatGPT – and its limitations – was instead promoted publicly as an open and accessible tool for everyone to experiment with and enjoy. This approach, combined with ChatGPT’s performance, set it apart from previous models and made it a game changer in the world of AI.
The decision to make ChatGPT available to the general public was not without its limitations. Unlike open-sourced models, ChatGPT is only available in a limited way: interested researchers cannot look ‘under the hood’ of the model or adapt it to their specific needs. This contrasts with OpenAI’s commendably transparent approach to its exceptional Whisper model. From a business standpoint, OpenAI’s decision to offer ChatGPT to the public for free, despite the high costs involved, proved to be the right one, as it attracted attention and funds.
How other companies reacted
ChatGPT has stirred up unprecedented attention and competition in the world of AI. The public’s tremendous interest in ChatGPT has prompted significant players in the industry to react swiftly. This type of competition is undoubtedly exciting, but it can also result in losses. Nevertheless, the strategies employed are quite intriguing.
Microsoft did not hesitate to allocate US$10 billion toward the integration of ChatGPT into all of its leading products, including Skype, Teams, and Word. It did so rapidly and openly in response to ChatGPT’s overwhelming popularity, setting a precedent for others to follow.
Google hastily announced it would integrate its Bard model into Google Search, but stumbled at the first step when Bard made a factual error in its launch demo. The misstep cost the company dearly: a 10% share-price drop wiped US$170 billion off Google’s market value.
Despite the setback caused by the public release and open-sourcing of the Galactica model, Meta persists in its approach of open-sourcing its state-of-the-art AI models. With the recent open-sourcing of its largest language model, LLaMa, Meta seems to be attempting to decrease user reliance on OpenAI’s GPT API by providing access to new models. LLaMa is not the first large-scale model to be open-sourced – BLOOM and OPT have been available for some time – but their application has been limited by their high hardware requirements. LLaMa is about ten times smaller than these models and can run on a single GPU while achieving results similar to GPT-3, potentially helping more researchers access and study large language models.
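Why model size matters so much for accessibility can be shown with a back-of-envelope calculation. The sketch below is our own illustration, not from the source: it assumes 16-bit weights (2 bytes per parameter) and counts only the memory needed to hold the weights, ignoring activations and other runtime overhead.

```python
def weights_gib(n_params: float, bytes_per_param: int = 2) -> float:
    """Approximate GPU memory (GiB) needed just to store model weights.

    Assumes 16-bit precision (2 bytes per parameter) by default; real
    deployments also need memory for activations and other buffers.
    """
    return n_params * bytes_per_param / 1024**3

# BLOOM has ~176 billion parameters; the smallest LLaMa variant has ~7 billion.
bloom_gib = weights_gib(176e9)
llama_gib = weights_gib(7e9)

print(f"BLOOM-176B weights: ~{bloom_gib:.0f} GiB")  # far beyond any single GPU
print(f"LLaMa-7B weights:  ~{llama_gib:.0f} GiB")   # fits on one high-end GPU
```

Under these assumptions, BLOOM-sized models need hundreds of gibibytes for their weights alone, while a 7-billion-parameter model needs roughly 13 GiB – small enough for a single research-grade GPU.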
China’s tech giants haven’t been wasting time: Baidu is planning to integrate its chatbot Ernie (short for Enhanced Representation through kNowledge IntEgration) into its search engine in March and eventually into all Baidu operations.
Regulators’ response
Regulators in both the USA and China are taking notice: OpenAI CEO Sam Altman has been meeting with US lawmakers, who reportedly pressed him on bias, the speed of changes in AI, and AI’s potential uses.
In China, ChatGPT is not officially available, but users have been able to access it through workarounds. However, regulators have told key Chinese tech companies not to integrate ChatGPT into their services, over risks that the software ‘could provide a helping hand to the US government in its spread of disinformation and its manipulation of global narratives for its own geopolitical interests’. Chinese tech companies will also have to report to regulators before they roll out their own ChatGPT-like services.
Generative AI’s impact on our future
February’s developments in generative AI once again brought forth the question: will AI take over our jobs? Beyond the obvious economic concerns, work brings meaning and fulfilment to humans, which is why fears of a new technology making human labour redundant run wild with every new tech hype.
Our colleagues from Diplo’s AI Lab, who have also been using language models to develop some pretty smart AI tools (we’re biased here), think that AI won’t make most jobs redundant. Some jobs have an intrinsically interhuman nature, and AI will be hard-pressed to replace those.
Yet, AI will make some jobs redundant, as has also been the case with every new technology.
The good news is that AI tools will free up time for workers by taking mundane tasks off their lists. And while some jobs will disappear, new ones will appear, as Microsoft CEO Satya Nadella and Microsoft founder Bill Gates have already noted. The question is: How do we ensure that both current and future generations are prepared to deal with these and similar changes in the job market?
At Diplo, our AI and Data Lab is leading the way in the development of AI technology, which has the potential to transform the way diplomacy is conducted. We’re also exploring AI’s impact on our future in depth through a series of webinars: we discussed whether AI will take over diplomatic reporting and the (potential) role of AI in diplomatic negotiations. We’re discussing how ChatGPT can help us rethink education, and, as an educational institution, we are also rethinking our policy on the use of AI tools in our courses and training programmes.
Would you like to share your thoughts on generative AI with us? Write to us at digitalwatch@diplomacy.edu!
Digital policy developments that made global headlines
The digital policy landscape changes daily, so here are all the main developments from February. We’ve decoded them into bite-sized authoritative updates. There’s more detail in each update on the Digital Watch Observatory.
Global digital architecture
As part of the Global Digital Compact development process, informal consultations were held with stakeholders and UN member states.
Sustainable development
ITU’s Facts and Figures: Focus on Least Developed Countries shows that the digital divide between least developed countries (LDCs) and the rest of the world has increased from 27% in 2011 to 30% in 2022.
China unveiled a new plan for building a digital China by 2035, aimed at placing the country at the global forefront of digital development.
Security
The EU ministers are discussing an amended version of the draft Cyber Resilience Act (CRA), a regulation on cybersecurity requirements for digital products.
The US White House has published a National Cybersecurity Strategy, noting that big companies should take more responsibility for insecure software products and services.
The US White House asked federal agencies to remove TikTok from all government-issued devices within 30 days due to security concerns. Similarly, the EU Commission, Parliament, and Council banned the use of TikTok on staff devices. TikTok has since announced ‘Project Clover’, a new data security strategy under which European user data will be migrated to Ireland and Norway.
Messaging app Signal announced it would stop providing services in the UK if asked to undermine encryption under the upcoming Online Safety Bill.
Infrastructure
The European Commission launched a public consultation on the future of connectivity, which is considered a prelude to plans that could require Big Tech to pay their share of costs related to digital infrastructure. It also published a proposal for a Gigabit Infrastructure Act.
SpaceX plans to restrict the Ukrainian military’s use of its satellite internet service to control drones.
NATO has set up a Critical Undersea Infrastructure Protection Cell to coordinate engagement between military and industry stakeholders.
E-commerce and the internet economy
PayPal has paused its stablecoin launch due to heightened regulatory scrutiny.
The Australian Competition and Consumer Commission (ACCC) will examine whether interconnected products and services offered by digital platforms harm competition and consumers.
Digital rights
The European Data Protection Board (EDPB) adopted its opinion on the draft adequacy decision for the EU-US Data Privacy Framework, expressing concerns over the application of the newly introduced principles of necessity and proportionality.
Canadian privacy regulators launched an investigation into TikTok’s collection, use, and disclosure of personal information.
A report by Access Now listed India, Ukraine, and Iran as the countries with the highest numbers of internet shutdowns in 2022.
Content policy
The signatories to the 2022 Code of Practice on Disinformation, which include all major online platforms, have set up a Transparency Centre to make their efforts to combat disinformation transparent, and have released reports on their implementation of the code’s commitments.
UNESCO’s Internet for Trust Conference discussed regulating digital platforms to safeguard freedom of expression and access to information.
Jurisdiction and legal issues
China announced plans for a National Data Bureau, which will establish a data system for the country and coordinate the utilisation of data resources.
Germany’s Federal Constitutional Court ruled that police use of automated data analysis to prevent crime was unconstitutional.
The US Department of Justice asked a court to sanction Google over the alleged destruction of evidence as part of an antitrust case.
The European Commission dropped its complaint against Apple’s in-app purchase mechanism, which obliges music streaming app developers to use the proprietary system if they want to distribute paid content on iOS devices, but will continue investigating Apple’s anti-steering practice.
Technologies
Representatives of 59 countries issued a joint call to action on the responsible development, deployment, and use of AI in the military domain.
The Council of Europe’s Committee on AI continued discussions on a convention on AI and human rights and published a draft version of the text.
The Netherlands will restrict the export of the most advanced semiconductor technology, including deep ultraviolet (DUV) lithography systems.
How are algorithms putting Section 230 to the test?
It has long been argued that the US law protecting social media platforms from liability for content posted by their users – Section 230 of the Communications Decency Act – should be narrowed or abolished outright.
Section 230, which has been the object of so many debates, is a nifty two-sentence rule stating that: (a) platforms aren’t publishers (hence, unlike publishers, they aren’t liable for the content users post), and (b) when platforms self-police third-party content, they can’t be punished for other harmful content they don’t remove.
Admittedly, this rule has allowed the internet to flourish. Platforms were able to host a huge amount of user content, unshackled from the fear of liability. It also allowed content to be posted instantaneously, without platforms being required to review it before making it public. Freedom of expression thrived.
But now, the age of algorithms is putting Section 230 to the test. A few weeks ago, the US Supreme Court began hearing arguments in two cases, both initially decided by the Ninth Circuit, that could have implications for Section 230.
Gonzalez vs Google
In a lawsuit against Google, the family of an American woman killed in an attack in Paris by Islamist militants has been arguing that Google’s algorithm recommended content from the militant group to YouTube users. The family is appealing the first judgement by arguing that Section 230 doesn’t provide platforms immunity when it comes to algorithm-recommended content, as the suggestions made by algorithms are not third-party content but the company’s own. Google’s argument, on the other hand, is that Section 230 is not just about protecting companies from third-party content but goes as far as to state that platforms shouldn’t be considered publishers.
Many companies and organisations have filed court briefs in support of Google. Twitter, for instance, is arguing that algorithms provide a way of prioritising some content over others (‘newer content over older content’), but is not conveying any content of its own. Microsoft is arguing that algorithms are so essential to daily life that Section 230 shouldn’t be narrowed or reinterpreted, as this would ‘wreak havoc on the internet as we know it’.
Twitter vs Taamneh
The second case is an appeal filed by Twitter after the Ninth Circuit set Section 230 aside and allowed the case to proceed, ruling that Twitter and other platforms had not taken adequate steps to prevent terrorist content from appearing on their platforms. The family of a Jordanian man – killed, along with 38 other victims, in a 2017 attack – accused Twitter of failing to police its platform.
The focus of this case is the antiterrorism law, but since the appeal could overturn the lower court’s judgement (which included a ruling on Section 230), the appeal could also have repercussions for Section 230.
What’s at stake?
Although both cases are connected, it’s the Gonzalez case that is likely to tackle the question of algorithms: whether platforms can be held liable for content promoted by their algorithms.
There are several possible outcomes to both lawsuits, which will be decided by the end of June. The most drastic outcome is for the court to remove the protections which Section 230 gives to platforms. But judging by what’s at stake, this is quite unlikely.
A more realistic outcome is for the Supreme Court to retain the current interpretation of Section 230 or, at most, introduce a subtle limitation. Then, it will be up to the legislative branches to address the discontent aired by policymakers in recent years.
Policy updates from International Geneva
Numerous policy discussions take place in Geneva every month. Here’s what happened in February.
OEWG on reducing space threats | 7 – 9 February
The third session of the Open-ended Working Group (OEWG) on reducing space threats took place at the UN Office in Geneva. The OEWG is mandated, inter alia, to make recommendations on possible norms, rules, and principles of responsible behaviours relating to threats by states to space systems. It was set up by UN General Assembly Resolution 76/231 and had already convened twice: (a) from 9 to 13 May 2022 and (b) on 12 and 13 September 2022. The OEWG is expected to meet again from 7 to 11 August 2023 and to submit a final report to the 78th UN General Assembly in September 2023.
Existing challenges and solutions on combating counterfeiting of ICT devices | 15 February
The International Telecommunication Union (ITU) organised a series of webinars on combating counterfeit and stolen ICT devices. In the first episode, panellists from different stakeholder groups presented issues and challenges related to the circulation of counterfeit ICT devices. Particular attention was given to possible solutions via standardisation.
Explore digital Geneva!
Need a guide through internet governance in Geneva? Our Geneva Digital Atlas, where you can find details and contacts for the 46 most relevant digital policy actors, will be with you on your journey. Keep an eye on our Instagram, Twitter, YouTube, Facebook, or LinkedIn for weekly Geneva Digital Tours videos, in which high-level guests take you on a tour of their institutions. During March, we feature organisations involved in standardisation and infrastructure. Our first guest is Doreen Bogdan-Martin, Secretary-General of ITU!
The main digital policy events in March
The Digital Watch observatory maintains a live calendar of upcoming and past events.