Dear readers,
China has emerged as a frontrunner in setting regulations to govern generative AI. Its new rules will be quite a challenge for companies to navigate and comply with.
In other news, it’s picket lines all around. The US Federal Trade Commission (FTC) is investigating OpenAI. Hollywood actors and writers are striking over (among other issues) AI’s impact. Civil rights groups are unhappy with the EU’s proposed AI Act. Google is being sued over data scraping. Amazon is challenging the European Commission after being designated a very large online platform. You get the point. Must be the heatwave.
Let’s get started.
Stephanie and the Digital Watch team
// HIGHLIGHT //
China unveils new rules for governing generative AI
When it comes to regulating generative AI – the cutting-edge offspring of AI that can produce text, images, music, and video of human-like quality – the world can be roughly divided into three groups.
There are those with little or no interest in regulating the sector (or at least, not yet). Then there is the group in which legislators are actively discussing new regulations (the EU is the most advanced; the USA and Canada fall somewhere in this group too). And finally, there are those who have enacted new rules – rules that are now part of the law of the land and therefore legally binding and enforceable. One country belongs to the last group: China.
So what do the rules say?
China’s Provisional Measures for the Management of Generative Artificial Intelligence Services (translated by OpenAI from the original text), which will go into effect on 15 August, are a more relaxed version of the draft rules that China released in April and then put to public consultation. Here are the main highlights:
1. Services are under scrutiny, research is not. The rules apply to services that offer generated content. Research and educational institutions, which previously fell within the scope of the draft rules, are now excluded as long as they don’t offer generative AI services. We won’t attempt to define services (the rules themselves don’t), but the exclusion of ‘research and development institutions, enterprises, educational and research institutions, public cultural institutions, and relevant professional organisations’ might be problematically broad.
2. Core socialist values. Content that is contrary to China’s core socialist values will be prohibited. The rules do provide examples of banned content, such as violence and obscenity, but the implementation of this rule will be subject to the authorities’ interpretation.
3. Non-discrimination. The rules prohibit any type of discrimination, which is a good principle on paper but will prove extremely difficult for companies to comply with. Even if an algorithm is designed to be completely objective, where does that leave human bias, which is usually baked into the algorithms (and the data they are trained on) themselves?
4. Intellectual property and competition. Utmost respect for intellectual property rights and business secrets is another great principle, although the rules are somewhat evasive on what’s allowed and what’s prohibited. (And whose secrets are we talking about?)
5. Training data. Data used to train generative AI shouldn’t infringe on privacy or intellectual property rights. Given that these are among the major concerns that generative AI has raised around the world, this rule means that companies will need to adopt a much more cautious approach.
6. Labelling content. The requirement for service providers to clearly label content as AI-generated is already on tech companies’ and policymakers’ wishlists. Implementing it will require a technical solution and, probably, some regulatory fine-tuning down the line.
7. Assessments. Generative AI services that have the potential to influence public opinion will need to undergo a security assessment in line with existing rules. The question is whether Chinese authorities will interpret this in a narrow or broad way.
8. Do no harm. The requirement to safeguard users’ physical safety is noteworthy. The protection of users’ mental health is a tad more complicated (how does one prove that a service can harm someone’s mental well-being?). And yet, China has a long history of enacting laws that protect vulnerable groups of users from online harm.
Who will these laws really affect?
If we look at the top tech companies leading AI developments, we can see that very few are Chinese. The fact that China has moved ahead so swiftly could therefore mean one of two things (or both).
With its new laws, China can shape generative AI according to a rulebook it has developed for its own benefit and that of its people. The Chinese market is too large to ignore: If US companies want a piece of the pie, they have to follow the host’s rules.
Or China might want to preempt situations in which its homegrown tech companies set the rules of the game in ways that the government would then have to redefine. This makes China’s position uncannily similar to the EU’s: Both face the expanding influence exerted by American companies; both are vying to shape the regulatory landscape before it’s too late.
// AI GOVERNANCE //
Researchers from Google DeepMind and universities propose AI governance models
Researchers from Google DeepMind and a handful of universities, including the University of Oxford, Columbia University, and Harvard, have proposed four complementary models for the global governance of advanced AI systems:
- An Intergovernmental Commission on Frontier AI, which would build international consensus on the opportunities and risks of advanced AI, and how to manage them.
- A Multistakeholder Advanced AI Governance Organisation, which would help set norms and standards, and would assist in their implementation and compliance.
- A Frontier AI Collaborative, which would promote access to advanced AI as an international public-private partnership.
- A Multilateral Technology Assessment Mechanism, which would provide independent, expert assessments of the risks and benefits of emerging technologies.
Why is it relevant? First, it addresses the risks of advanced AI that industry leaders have been cautioning about. Second, it aligns with the growing worldwide call for an international body to deal with AI, further fuelling the momentum behind the idea. Last, the models draw inspiration from established organisations beyond the digital policy sphere, such as the Intergovernmental Panel on Climate Change (IPCC) and the International Atomic Energy Agency (IAEA) – entities that have previously been identified as role models to emulate.
US FTC launches investigation into ChatGPT
The US FTC is investigating OpenAI’s ChatGPT to determine whether the AI chatbot violates consumer protection laws and puts personal reputations and data at risk, the Washington Post has revealed. The FTC itself has not made the investigation public.
The focus is on whether ChatGPT produces false, misleading, disparaging, or harmful statements about individuals, and whether the technology may compromise data security.
Why is it relevant? It adds to the growing list of investigations that OpenAI is facing around the world. The FTC has the authority not only to impose fines but also to temporarily suspend ChatGPT (which reminds us of how Italy’s investigation, the first ever against ChatGPT, unfolded).
Civil rights groups urge EU lawmakers to make AI accountable
Leading civil rights groups are urging the EU to prioritise accountability and transparency in the development of the proposed AI Act.
A letter addressed to the European Parliament, the EU Council, and the European Commission (currently negotiating the final text of the AI Act) calls for specific measures, including a full ban on real-time biometric identification in publicly accessible spaces and a prohibition on predictive profiling systems. The letter also urges policymakers to resist lobbying pressure from major tech companies.
Why is it relevant? The push for a ban on public surveillance is not new. The urgency around lobbying, though, is likely fuelled by a recent report on OpenAI’s lobbying efforts in the EU (and OpenAI is by no means the only company lobbying…).
Hollywood actors and writers unite in historic strike for better terms and AI protections
In a historic strike, actors joined screenwriters in forming picket lines outside studios and filming locations worldwide. The reason? They are asking for better conditions, but also for protection from AI’s existential threat to creative professions. ‘All actors and performers deserve contract language that protects them from having their identity and talent exploited without consent and pay’, the actors’ union president said.
So far, the unions have rejected the proposal made by the Alliance of Motion Picture and Television Producers (AMPTP), the entity representing Hollywood’s studios and streaming companies, including Amazon, Apple, Disney, Netflix, and Paramount. The AMPTP said the proposal ‘protects performers’ digital likenesses, including a requirement for performers’ consent for the creation and use of digital replicas or for digital alterations of a performance.’
Why is it relevant? AI is disrupting yet another industry – and one that few would have expected to sit at the centre of digital policy debates. Beyond traditional sectors, AI keeps permeating unexpected areas, underscoring a transformative potential that is still unfolding.
Google sued over data scraping practices
We recently wrote about the major class-action lawsuit against OpenAI and Microsoft, filed at the end of June. The same firm behind that lawsuit has now sued Google for roughly the same reasons:
‘Google has been secretly stealing everything ever created and shared on the internet by hundreds of millions of Americans. Google has taken all our personal and professional information, our creative and copy-written works, our photographs, and even our emails – virtually the entirety of our digital footprint – and is using it to build commercial AI products like Bard’.
Why is it relevant? First, given the similarities between the lawsuits – although Google’s practices are said to stretch back further – a victory for the plaintiffs in one suit would likely pave the way for wins against all three companies. Second, given the tech industry’s sprint ‘to do the same – that is, to vacuum up as much data as they can find’ (according to the FTC, quoted in the filed complaint), this also serves as a cautionary tale for other companies setting up shop in the AI market. (Speaking of which: Elon Musk’s AI startup, xAI, was launched last week.)
// ANTITRUST //
US appeals court turns down FTC request to pause Microsoft-Activision deal
The FTC’s attempt to temporarily block Microsoft’s planned USD69 billion (EUR61.4 billion) acquisition of Activision Blizzard, the creator of Call of Duty, has been dismissed by a US appeals court.
Why is it relevant? First, the FTC’s unsuccessful appeal might prompt it to abandon the case altogether. Second, the US developments may have influenced the climbdown by the UK’s Competition and Markets Authority, which has now extended its deadline for a final ruling to 29 August.
// DSA //
Amazon challenges VLOP label
Amazon is challenging the European Commission’s decision to designate it as a very large online platform under the Digital Services Act, which takes effect on 25 August.
In a petition filed with the General Court in Luxembourg (and in public comments), the company argues that it functions primarily as a retailer rather than an online marketplace. None of its fellow major retailers in the EU have been subjected to the stricter due diligence measures outlined in the DSA, which, Amazon argues, places it at a disadvantage compared to its competitors.
Why is it relevant? Amazon is actually the second company to challenge the European Commission, after Berlin-based retail competitor Zalando filed legal action a fortnight ago. In April, the European Commission identified 17 very large online platforms and two very large online search engines. We’re wondering who is going to challenge it next.
10–19 July: The High-Level Political Forum on Sustainable Development (HLPF), organised under the auspices of ECOSOC, continues in New York this week.
18 July: The UN Security Council will hold its first ever session on AI, chaired by the UK’s Foreign Secretary James Cleverly. What we can expect: A call for international dialogue on its risks and opportunities for international peace and security, ahead of the UK hosting the first ever global summit on AI later this year.
22–28 July: The 117th meeting of the Internet Engineering Task Force (IETF) continues this week in San Francisco and online.
24–28 July: The Open-Ended Working Group (OEWG) will hold its fifth substantive session next week in New York. Bookmark our observatory for updates.
‘The risks of AI are real but manageable’ – Bill Gates
AI’s risks include job displacement, election manipulation, and the uncertainty of what happens if (or when) AI surpasses human intelligence. But Bill Gates, co-founder of Microsoft, believes we can manage these risks by learning from history: Just as regulations were implemented for cars and computers, we can adapt laws to address AI’s challenges.
There’s an urgent need to act, says OECD
The impact of AI on jobs has been limited so far, possibly due to the early stage of the AI revolution, according to the OECD’s employment outlook for 2023. However, over a quarter of jobs in OECD member countries rely on skills that could easily be automated.
Was this newsletter forwarded to you, and you’d like to see more?