Home | Newsletters & Shorts | DW Weekly #114 – 5 June 2023

DW Weekly #114 – 5 June 2023


Dear readers,

It’s more AI governance this week, not in the form of binding rules, but rather, voluntary codes of conduct – and lots of them. In other news, there’s a new warning about AI-caused extinction and a multimillion-dollar settlement in the child privacy violation lawsuit against Amazon.

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

Governing AI: Two codes of conduct announced

What’s the best course of action when legislation takes too long to materialise? If your idea is to simply wait patiently, you won’t find a kindred spirit in European Commissioner Thierry Breton.

The AI Pact: Eager to see companies get ready for the EU’s AI Act, last week Breton announced the AI Pact, a voluntary set of rules that will act as a precursor to the regulation. In an interview on TV5Monde on Saturday, he explained that the AI Pact aims to help companies get a head start on rules that will become binding in around two or three years.

The commissioner hopes companies will warm to the proactive initiative, which is why he’s been doing the rounds, starting with Google CEO Sundar Pichai, followed by Anthropic CEO Dario Amodei. He’s also met EU digital ministers, who, we presume, have expressed support for an initiative that could ward off regulatory headaches down the line.

The EU-US joint voluntary AI code: Fellow European Commissioner Margrethe Vestager appears to share a similar sense of impatience. Last week, she announced a voluntary code of conduct to be developed by policymakers from Washington and Brussels in the coming weeks. The announcement came at the start of the twice-yearly EU-US Trade and Tech Council (TTC) summit, which took place in Luleå, Sweden (the code did not make it into the joint statement, though).

The code announced by Vestager has a different objective from Breton’s: although it’s still only a two-page briefing, it will aim to set basic non-binding principles, or standards, around transparency requirements and risk assessments.

Breton was more blunt: ‘It is for all the countries that are lagging behind – and the Americans are lagging behind on these issues, I’m not afraid to say it – well, they too should start doing the work that we have done: establishing the basic principles that underlie the legislative act we have built.’ (Translated from the original French interview.)

In a way, the joint initiative is an attempt to bridge the gap between the laissez-faire approach of the USA and the more stringent approach of the EU – an intermediate step before US companies are obliged to follow EU rules. It’s probably what should have preceded the GDPR but didn’t.

AI labels to be added to EU’s disinformation code

Yes, there’s a third code shaped by the need to set guardrails for generative AI. And yes, it comes from another European Commissioner.

Values and transparency chief Věra Jourová announced this morning (Monday 5 June) that AI services should introduce labels for content generated by AI, such as text, images, and videos. This measure will be added to the voluntary Code of Practice on Disinformation, which counts Microsoft, Google, Meta, and TikTok among its signatories (Twitter left the group of code adherents).

[Image: Věra Jourová]

‘Signatories of the EU Code of Practice against disinformation should put in place technology to recognise AI content and clearly label it to users,’ she said, with reference to services with ‘a potential to disseminate AI-generated disinformation’. It’s uncertain if this will be applicable to all generative AI services offered by participating companies.

Why is it relevant? First, all of these initiatives place the EU at the forefront of AI regulation. The EU clearly wants to set a global standard – and a high one at that – for AI, especially generative AI. Second, this will codify the emerging practice of labelling content generated by AI (here’s an example).


Digital policy roundup (29 May–5 June)
// AI GOVERNANCE //

Australia plans AI rules

Speaking of disinformation and deceptive content: Australia wants to introduce AI rules and is seeking public comment on how to mitigate the risks, which include algorithmic bias, lack of transparency, and unreliable data.

The request for comment highlights how other countries have approached AI rules – from voluntary approaches in Singapore to stricter regulation in the EU and Canada. 

Why is it relevant? The discussion paper attached to the call for comment extensively references the EU’s proposed AI Act. It includes elements of what a potential risk-based approach (the hallmark of the EU’s AI Act) could include. Breton will be happy.

OpenAI gets warning from Japan’s data protection watchdog

The Japanese data protection authority has issued administrative guidance to OpenAI, the operator of ChatGPT, in response to concerns over the protection of personal data. 

The guidance highlights the potential for ChatGPT to obtain sensitive personal data without proper consent, potentially infringing on privacy. No specific violations of the country’s privacy rules have been confirmed yet by Japan’s Personal Information Protection Commission.

OpenAI may face an onsite inspection or fines if it fails to take sufficient measures in response to the guidance.

Why is it relevant? Japan has taken a keen interest in ChatGPT: OpenAI CEO Sam Altman met Japan’s Prime Minister to discuss plans to open an office in the country, and government officials and financial sectors rushed in where others feared to tread. Although there’s been no breach, it seems the country’s data protection watchdog is treading more cautiously than the rest.

AI scientists warn about AI-caused extinction

Tech company chiefs and AI scientists have issued another stark warning: ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’

Almost 900 signatories endorsed the one-sentence open letter, spearheaded by the nonprofit organisation Center for AI Safety.

Why is it relevant? First, it’s yet another warning (see last week’s issue) that we should be much more concerned about the potentially catastrophic effects of future AI on humanity. The coming ramifications could be far more dire than anything happening now. Second, some of the signatories are behind companies that are pushing the boundaries of AI.


// CONTENT POLICY //

Meta threatens to pull news content from California over proposed rules

Meta has threatened to remove news content in California if the state passes proposed legislation requiring tech companies to pay publishers. The recently proposed California Journalism Preservation Act calls for platforms to pay a fee to news providers whose work appears on their services.

In a tweet, Meta spokesman Andy Stone said the bill would predominantly benefit large, out-of-state media companies using the pretext of supporting California publishers.

Why is it relevant? This is taking place in parallel with Canada’s attempt to introduce similar legislation. Meta (and Google) told the Canadian Senate’s Standing Committee on Transport and Communications that it would have to withdraw from the country should the proposed bill pass as it stands.


// PRIVACY //

Amazon to pay settlement over children’s privacy lawsuit

Amazon will be required to pay USD25 million (EUR23.3 million) to the US Federal Trade Commission (FTC) to settle allegations that it violated children’s rights by failing to delete Alexa recordings as requested by parents. The FTC’s order must still be approved by the federal court.

The FTC’s investigation determined that Amazon had unlawfully used voice recordings to improve its Alexa algorithm for years. 

Why is it relevant? Although the technology is different, it reminds us of something similar: companies training their models with personal data retrieved without consent. Amazon’s denial of the accusations will do little to appease parents after the FTC determined that the company deceived parents about its data deletion practices.


Was this newsletter forwarded to you, and you’d like to see more?


// SHUTDOWNS //

Internet access disrupted in Africa: Authorities in Mauritania cut off the mobile internet last week, MENA-based non-profit SMEX reported. The Senegalese government imposed restrictions on mobile data and social media platforms, both actions following protests over the sentencing of opposition leader Ousmane Sonko. Senegal’s restrictions were confirmed by Netblocks, a global internet monitoring service, which said that authorities placed restrictions to prevent the ‘dissemination of hateful and subversive messages in the context of public order disturbances’.


The week ahead (5–12 June)

5–7 June: Re:publica returns to Berlin this week for its annual digital society festival. This year’s theme is money.

5–8 June: Another major meet-up, RightsCon 23, is taking place in Costa Rica and online. On the sidelines: The GFCE’s Regional Meeting for the Americas and Caribbean 2023.

5–8 June: If you’re a regulator: ITU’s Global Symposium for Regulators 2023 is taking place in Sharm el-Sheikh, Egypt, and online. 

7 June: ENISA’s AI Cybersecurity Conference takes place in Brussels and online. AI is set to take centre stage.

12 June: It’s the last day to contribute to the US National Telecommunications and Information Administration (NTIA) request for comment on algorithmic accountability.

12–15 June: The week-long ICANN77 is taking place in Washington, DC, and online.

#WebDebate on Tech Diplomacy

Join us online tomorrow, Tuesday, 6 June for Why and how should countries engage in tech diplomacy? starting at 13:00 UTC, with quite a line-up of special guests.


#ReadingCorner

Children and the metaverse

Meta’s release – and Apple’s planned release – of mixed-reality headsets may reignite people’s interest in the metaverse. This means more users might start spending their time in the metaverse. Which probably means that more kids will give it a go.

UNICEF and Diplo’s latest report, The Metaverse, Extended Reality and Children, considers the potential effects – both good and bad – that the metaverse has on children; the drivers of and predictions for the growth of the metaverse; and the regulatory and policy challenges posed by the metaverse.


steph
Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
ginger
Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation
