Dear readers,
As authorities grapple with ChatGPT and similar AI tools, the first regulatory initiatives are now in sight. OpenAI, the company behind ChatGPT, is in for a troubled period as new investigations pile up.
Meanwhile, tech companies are under pressure over many other issues – from hosting content deemed a national security concern to new fines and probes for (alleged) anticompetitive practices.
Let’s get started.
Stephanie and the Digital Watch team
// HIGHLIGHT //
Governments vs ChatGPT: A whirlwind of regulations in sight
ChatGPT, the AI-powered tool that allows you to chat and get answers to almost any question (we’re not guaranteeing it’s the right answer), has taken the world by storm.
Things are progressing fast. In the space of just two days, we learned that Google is creating a new AI-based search engine (unrelated to its chatbot Bard, launched last month), that Elon Musk has created a new company, X.AI, probably linked to his effort to build an everything app called X, and that China’s e-commerce giant Alibaba has launched its own ChatGPT-style AI model.
Governments around the world are now taking notice of the potential – and the risks – of these tools. They are launching investigations into ChatGPT (in April, we covered Italy’s temporary ban and the investigations that privacy regulators in France, Ireland, Switzerland, Germany, and Canada are considering) and ramping up efforts to introduce new rules.
Last week’s developments came from the three main AI hotspots: the EU, China, and the USA.
1. The latest from Europe
Known for its tough rules on data protection and digital services/markets, the EU is inching closer to seeing its AI Act – proposed by the European Commission two years ago – materialise. While the Council of the EU has adopted its position, the draft is currently being debated by the European Parliament (it will then be negotiated between the three EU institutions in the so-called trilogues). Progress is slow, but sure.
As policymakers debate the text, a group of experts argues that general-purpose AI systems carry serious risks and must not be exempted from the new EU legislation. Under the proposed rules, certain accountability requirements apply only to high-risk systems. The experts argue that software such as ChatGPT needs to be assessed for its potential to cause harm and must have commensurate safety measures in place. The rules, they add, must also cover the entire life cycle of a product.
What does this mean? If the rules are updated to cover, for instance, the development phase of a product, regulators won’t just check after the fact whether an AI model was trained on copyrighted material or on private data. Rather, a product would be audited before its launch. This is quite similar to what China is proposing (see below) and what the USA will soon be looking into (details further down).
The draft rules on general-purpose AI are still up for debate at the European Parliament, so things might still change.
Meanwhile, prompted by Italy’s ban and Spain’s request to look into privacy concerns surrounding ChatGPT, the EU’s data protection watchdog has launched a task force to coordinate the work of European data protection authorities.
There’s little information about the new task force of the European Data Protection Board (EDPB), other than a decision to tackle ChatGPT-related action during the EDPB’s next plenary (scheduled for 26 April).
2. The latest from China
China has also taken a no-nonsense approach to regulating tech companies in recent years. The Cyberspace Administration of China (CAC) has wasted no time in proposing new measures for regulating generative AI services, which are open for public comments until 10 May.
The rules. Providers must ensure that content reflects the country’s core values and does not include anything that might disrupt the economic and social order. No discrimination, false information, or intellectual property infringements are allowed. Tools must undergo a security assessment before being launched.
Who they apply to. The onus falls on organisations and individuals that use these tools to generate text, images, and sounds for public consumption. They are also responsible for ensuring that pre-training data is lawfully sourced.
The industry is also calling for prudence. The Payment & Clearing Association of China has advised its industry members to avoid uploading confidential information to ChatGPT and similar AI tools, over risks of cross-border data leaks.
3. The latest from the USA
Well-known for its laissez-faire approach to regulating technological innovation, the USA is taking (baby) steps towards new AI rules.
The Biden administration is studying potential accountability measures for AI systems, such as ChatGPT. In its request for public feedback (which runs until 10 June), the National Telecommunications and Information Administration (NTIA) of the Department of Commerce is looking into new policies for AI audits and assessments that tackle bias, discrimination, data protection, privacy, and transparency.
What this exercise covers. Everything and anything that falls under the definition of ‘AI system’ and ‘automated systems’, including technology that can ‘generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments’.
What’s next? There’s already growing interest in AI governance tools, the NTIA writes, so this exercise will help it advise the White House on how to develop an ecosystem of accountability rules.
Separately, there are also indications that Senate Democrats are working on new legislation spearheaded by Majority Leader Chuck Schumer. A draft is in circulation, but don’t expect anything tangible soon unless the initiative secures bipartisan support.
And in a bid to prevent intellectual property infringement, music company Universal Music Group has told streaming platforms, including Spotify and Apple, to block AI services from scraping melodies and lyrics from copyrighted songs, according to the Financial Times. The company fears that AI systems are being trained on its artists’ intellectual property. IPR lawsuits are looming.
// OPENAI //
Italy tells OpenAI: Comply or face ban
The Italian Data Protection Authority (GPDP), which was the first to open an investigation into OpenAI’s ChatGPT, has given the company a list of demands it must meet by 30 April before the authority will consider lifting its temporary ban.
The Italian watchdog wants OpenAI to let people know how their personal data will be used to train the tool, and to obtain users’ consent (or rely on legitimate interest as a legal basis) before processing their personal data.
But a more challenging request is for the company to implement an age-gating system for underage users and to introduce measures to identify accounts used by children (the latter must be in place by 30 September).
Why is this relevant? The age-verification request coincides with EU efforts to improve how platforms verify their users’ age. The new eID proposal, for instance, will introduce a much-needed framework of certification and interoperability for age verification. The way OpenAI tackles this issue will serve as a testbed for the measures to come.
More European countries launch probes into ChatGPT
France’s data protection regulator (CNIL) has opened a formal investigation into ChatGPT after receiving five complaints, including from member of parliament Eric Bothorel, lawyer Zoé Vilain, and developer David Libeau.
Germany’s data protection conference (DSK), the body of independent German data protection supervisory authorities of the federal and state governments, has opened an investigation into ChatGPT. The announcement was made by the North Rhine-Westphalia watchdog (the DSK itself has been mum about it).
Spain’s data protection agency (AEPD) announced an independent investigation in parallel to the work being carried out by the EDPB.
// CYBERSECURITY //
Classified Pentagon documents leaked on social media
The Pentagon is investigating the leak of over 50 classified documents that turned up on the social media platform Discord. Jack Teixeira, a 21-year-old US Air National Guardsman suspected of leaking the documents, was charged in Boston on Friday under the Espionage Act.
Days after the Pentagon announced its investigation, the leaked documents could still be accessed on Twitter and other platforms, prompting a debate on the responsibility of social media companies in cases involving national security.
Russia accuses Pentagon, NATO of masterminding Ukraine attacks against Russia
The press office of Russia’s Federal Security Service (FSB) has accused the Pentagon and NATO countries of being behind massive cyberattacks from Ukraine against Russia’s critical infrastructure.
The FSB claims that over 5,000 hacker attacks on Russian critical infrastructure have been recorded since the beginning of 2022 and that cyberattack units of Western countries are using Ukrainian territory to carry out these attacks.
USA-Russia cyber impasse on hold?
Meanwhile, Russia’s official news agency TASS reported that the USA has maintained contact with Russia on cybersecurity issues.
US Department of State’s Ambassador-at-Large for Cyberspace and Digital Policy Nathaniel Fick told TASS that channels of communication remain open. ‘Yes, I’m across the table from Russian counterparts with some frequency, and with Chinese as well,’ he said.
// ANTITRUST //
South Korea fines Google for abusing global market dominance
Google is in trouble in South Korea after the country’s Fair Trade Commission (FTC) fined the company USD 31.9 million (EUR 29.2 million) for unfair business practices.
The FTC found that Google entered into agreements with Korean mobile game companies between June 2016 and April 2018, banning them from releasing their content on One Store, a local marketplace that rivals Google’s Play Store.
Indian start-ups seek court order to block Google’s in-app billing system
Google could also be in trouble in India after a group of start-ups, led by the Alliance of Digital India Foundation (ADIF), asked an Indian court to suspend the company’s new in-app billing fee system until the antitrust authority investigates Google’s alleged failure to comply with an October 2022 order, which required the company to allow third-party billing services for in-app payments.
// DATA FLOWS //
MEPs vote against proposal to greenlight westward data transfers
The European Parliament has rejected the proposal to allow transfers of EU citizens’ personal data to the USA under the new EU-US Data Privacy Framework.
Parliamentarians expressed concerns about the adequacy of US data protection laws and called for stronger safeguards to protect the personal data of European citizens. According to MEPs, the proposed framework does not provide sufficient safeguards.
While the European Parliament’s position is not legally binding, it adds pressure on the European Commission to reconsider its approach to data transfers with the US and prioritise more robust data protection measures.
// METAVERSE //
Meta urged to keep kids out of the metaverse
Dozens of advocacy organisations and children’s safety experts are calling on Meta to halt its plans to allow kids into its virtual reality world, Horizon Worlds. In a letter addressed to Meta CEO Mark Zuckerberg, the groups and experts expressed concerns about the potential risks of harassment and privacy violations for young users in the metaverse app.
Given Meta’s track record of addressing damaging design only after harm has occurred, the experts are asking the company not to allow kids into the metaverse until it can ensure their safety and privacy with robust measures in place.
Meta says metaverse can transform education
Was it a coincidence that, two days earlier, Meta’s Global Affairs Chief Nick Clegg penned an article lauding the metaverse’s potential for education?
In any case, Clegg explains how the metaverse can give learners access to educational resources and opportunities across geographical and economic barriers, and how virtual classrooms, simulations, and collaborative environments can enhance learning outcomes.
Clegg also acknowledges a need for responsible and inclusive design of metaverse educational experiences, with a focus on privacy, safety, and accessibility.
// UPCOMING //
16–19 April: The American Registry for Internet Numbers (ARIN) 51st Public Policy and Members Meeting in Florida is discussing internet number resources, regional policy development, and the overall advancement of the internet.
21–23 April: The closing session of the European Commission citizens’ panel on the metaverse and other virtual worlds will ask participants to turn their ideas into concrete recommendations. They’ll be asked to suggest policy measures to help shape the evolution of virtual worlds.
// TECH DIPLOMACY //
Tech diplomacy in the Bay Area
In 2018, Diplo’s techplomacy mapping exercise explored how different diplomatic representations interact with the San Francisco Bay Area ecosystem. Since then, a lot has changed, prompting Diplo to update its research. The 2023 report, ‘Tech Diplomacy Practice in the San Francisco Bay Area’, launched last week, makes some important observations.
Tech diplomacy has matured, moving from informal engagements to more structured, formal ones. Government representations in the San Francisco Bay Area, and the structures within tech companies that act as their counterparts, have become more diverse and more complex, making it harder for the two sides to reach one another. At the same time, San Francisco is seeing more and more collaboration between international diplomatic representations and tech companies to achieve common goals. Read the full text.
Was this newsletter forwarded to you? Would you like to see more?