DW Weekly #120 – 17 July 2023


Dear readers,

China has emerged as a frontrunner in setting regulations to govern generative AI. Its new rules will be quite a challenge for companies to navigate and comply with.

In other news, it’s picket fences all around. The US Federal Trade Commission (FTC) is investigating OpenAI. Hollywood actors and writers are striking over (among other issues) AI’s impact. Civil rights groups are unhappy with the EU’s proposed AI Act. Google is being sued over data scraping. Amazon challenges the European Commission after being designated a very large platform. You get the point. Must be the heatwave.

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

China unveils new rules for governing generative AI

When it comes to regulating generative AI – the cutting-edge offspring of AI that can produce text, images, music, and video of human-like quality – the world can be roughly divided into three groups. 

There are those with little or no interest in regulating the sector (or at least, not yet). Then there is the group in which legislators are actively discussing new regulations (the EU is the most advanced; the USA and Canada fall somewhere in this group too). And finally, there are those who have enacted new rules – rules which are now part of the laws of the land, therefore legally binding and enforceable. One country belongs to the last group: China.

So what do the rules say?

China’s Provisional Measures for the Management of Generative Artificial Intelligence Services (translated by OpenAI from the original text), which will go into effect on 15 August, are a more relaxed version of the draft rules China released in April, which were followed by a public consultation. Here are the main highlights:

1. Services are under scrutiny, research is not. The rules apply to services that offer generated content. Research and educational institutions, which previously fell under this scope, are now excluded from these rules as long as they don’t offer generative AI services. We won’t attempt to define services (the rules do not); the exclusion of ‘research and development institutions, enterprises, educational and research institutions, public cultural institutions, and relevant professional organisations’ might be problematically broad.

2. Core social values. Content that is contrary to China’s core socialist values will be prohibited. The rules do provide examples of banned content, such as violence and obscenity, but the implementation of this rule will be subject to the authorities’ interpretation.

3. Non-discrimination. The rules prohibit any type of discrimination – a good principle on paper, but one that will prove extremely difficult for companies to comply with. Even if an algorithm manages to be completely objective, where does that leave the human bias usually built into the algorithms themselves?

4. Intellectual property and competition. Utmost respect for intellectual property rights and business secrets is also a great principle, though the rules are somewhat evasive on what’s allowed and what’s prohibited. (And whose secrets are we talking about?)

5. Training data. Data used to train generative AI shouldn’t infringe on privacy and intellectual property rights. Given that these are among the major concerns that generative AI has raised around the world, this rule means companies will need to adopt a much more cautious approach.

6. Labelling content. The requirement for service providers to clearly specify that content is produced by AI is already on tech companies’ and policymakers’ wishlists. Implementing it will require a technical solution and, probably, some regulatory fine-tuning down the line.

7. Assessments. Generative AI services that have the potential to influence public opinion will need to undergo a security assessment in line with existing rules. The question is whether Chinese authorities will interpret this in a narrow or broad way.

8. Do no harm. The requirement to safeguard users’ physical safety is noteworthy. The protection of users’ mental health is a tad more complicated (how does one prove that a service can harm someone’s mental well-being?). And yet, China has a long history of enacting laws that protect vulnerable groups of users from online harm. 

Who will these laws really affect?

If we look at the top tech companies leading AI developments, we can see that very few are Chinese. The fact that China has moved ahead so swiftly could therefore mean one of two things (or both). 

With its new laws, China can shape generative AI according to a rulebook it has developed for its own benefit and that of its people. The Chinese market is too large to ignore: If US companies want a piece of the pie, they have to follow the host’s rules.

Or China might want to preempt situations in which its homegrown tech companies set the rules of the game in ways that the government would then have to redefine. This makes China’s position uncannily similar to the EU’s: Both face the expanding influence exerted by American companies; both are vying to shape the regulatory landscape before it’s too late.


Digital policy roundup (10–17 July)
// AI GOVERNANCE //

Researchers from Google DeepMind and universities propose AI governance models

Researchers from Google DeepMind and a handful of universities, including the University of Oxford, Columbia University, and Harvard, have proposed four complementary models for the global governance of advanced AI systems:

  • An Intergovernmental Commission on Frontier AI, which would build international consensus on the opportunities and risks of advanced AI, and how to manage them.
  • A Multistakeholder Advanced AI Governance Organisation, which would help set norms and standards, and would assist in their implementation and compliance.
  • A Frontier AI Collaborative, which would promote access to advanced AI as an international public-private partnership.
  • A Multilateral Technology Assessment Mechanism, which would provide independent, expert assessments of the risks and benefits of emerging technologies.

Why is it relevant? First, it addresses the risks of advanced AI that industry leaders have been cautioning about. Second, it aligns with the growing worldwide call for an international body to deal with AI, further fuelling the momentum behind this idea. Last, the models draw inspiration from established organisations that transcend the digital policy sphere, such as the Intergovernmental Panel on Climate Change (IPCC) and the International Atomic Energy Agency (IAEA) – entities that have previously been identified as models to emulate.

US FTC launches investigation into ChatGPT

The US FTC is investigating OpenAI’s ChatGPT to determine if the AI language model violates consumer protection laws and puts personal reputations and data at risk, the Washington Post has revealed. The FTC has not made its investigation public.

The focus is whether ChatGPT produces false, misleading, disparaging, or harmful statements about individuals, and whether the technology may compromise data security. 

Why is it relevant? It adds to the growing number of investigations that OpenAI is facing around the world. The FTC has the authority not only to impose fines but also to temporarily suspend ChatGPT (which reminds us of how Italy’s investigation, the first ever against ChatGPT, unfolded).

Civil rights groups urge EU lawmakers to make AI accountable

Leading civil rights groups are urging the EU to prioritise accountability and transparency in the development of the proposed AI Act. 

A letter addressed to the European Parliament, the EU Council, and the European Commission (currently negotiating the final text of the AI Act), calls for specific measures, including a full ban on real-time biometric identification in publicly accessible spaces and the prohibition of predictive profiling systems. An additional request urges policymakers to resist lobbying pressures from major tech companies. 

Why is it relevant? The push for a ban on public surveillance is not new. Still, the urgency to resist lobbying is likely fuelled by a recent report on OpenAI’s lobbying efforts in the EU (and OpenAI is by no means the only company lobbying).

Hollywood actors and writers unite in historic strike for better terms and AI protections

In a historic strike, screenwriters joined actors in forming picket lines outside studios and filming locations worldwide. The reason? They are asking for better conditions but also for protection from AI’s existential threat to creative professions. ‘All actors and performers deserve contract language that protects them from having their identity and talent exploited without consent and pay’, the actors’ union president said.

So far, the unions have rejected the proposal made by the entity representing Hollywood’s studios and streaming companies. This entity – the Alliance of Motion Picture and Television Producers (AMPTP), which represents companies including Amazon, Apple, Disney, Netflix, and Paramount – said the proposal ‘protects performers’ digital likenesses, including a requirement for performers’ consent for the creation and use of digital replicas or for digital alterations of a performance.’

Why is it relevant? AI is significantly impacting yet another industry, surprising those who did not anticipate its influence on digital policy would reach so broadly. It has not only disrupted traditional sectors but continues to permeate unexpected areas, underscoring its still-developing transformative potential.

Protestors carrying several signs outside Netflix offices. One of them reads 'SAG-AFTRA Supports WGA'

Google sued over data scraping practices

We recently wrote about the major class-action lawsuit against OpenAI and Microsoft, filed at the end of June. The same firm behind that lawsuit has now sued Google for roughly the same reasons:

‘Google has been secretly stealing everything ever created and shared on the internet by hundreds of millions of Americans. Google has taken all our personal and professional information, our creative and copy-written works, our photographs, and even our emails – virtually the entirety of our digital footprint – and is using it to build commercial AI products like Bard’.

Why is it relevant? First, given the similarities of the lawsuits – although Google’s practices are said to go back further – a victory for the plaintiffs in one suit will likely result in wins against all three companies. Second, given the tech industry’s sprint ‘to do the same – that is, to vacuum up as much data as they can find’ (according to the FTC, quoted in the filed complaint), this also serves as a cautionary tale for other companies setting up shop in the AI market. (Speaking of which: Elon Musk’s AI startup, xAI, was launched last week.)


// ANTITRUST //

US appeals court turns down FTC request to pause Microsoft-Activision deal

The FTC’s attempt to temporarily block Microsoft’s planned USD69 billion (EUR61.4 billion) acquisition of Activision Blizzard, the creator of Call of Duty, has been dismissed by a US appeals court. 

Why is it relevant? First, the FTC’s unsuccessful appeal might prompt it to abandon the case altogether. Second, the US developments may have influenced the UK Competition and Markets Authority’s climbdown: It has now extended its deadline for a final ruling until 29 August.

The characters from games produced by Activision Blizzard are huddled on the left. The logos of Activision and Blizzard are in the centre and on the right hand side

// DSA //

Amazon challenges VLOP label

Amazon is challenging the European Commission’s decision to designate it as a very large online platform under the Digital Services Act, which takes effect on 25 August.

In a petition filed with the General Court in Luxembourg (and in public comments), the company argues that it functions primarily as a retailer rather than an online marketplace, and that none of its fellow major retailers in the EU has been subjected to the stricter due diligence measures outlined in the DSA, placing it at a disadvantage compared to its competitors.

Why is it relevant? Amazon is actually the second company to challenge the European Commission, after Berlin-based retail competitor Zalando filed legal action a fortnight ago. In April, the European Commission identified 17 very large online platforms and two very large online search engines. We’re wondering who is going to challenge it next.


Was this newsletter forwarded to you, and you’d like to see more?


The week ahead (17–24 July)

10–19 July: The High-Level Political Forum on Sustainable Development (HLPF), organised under the auspices of ECOSOC, continues in New York this week.

18 July: The UN Security Council will hold its first-ever session on AI, chaired by the UK’s Foreign Secretary James Cleverly. What we can expect: a call for international dialogue on AI’s risks and opportunities for international peace and security, ahead of the first-ever global summit on AI that the UK will host later this year.

22–28 July: The 117th meeting of the Internet Engineering Task Force (IETF) continues this week in San Francisco and online.

24–28 July: The Open-Ended Working Group (OEWG) will hold its fifth substantive session next week in New York. Bookmark our observatory for updates.


#ReadingCorner
Photo portrait of Bill Gates

‘The risks of AI are real but manageable’ – Bill Gates

AI risks include job displacement, election manipulation, and uncertainty if/when it surpasses human intelligence. But Bill Gates, co-founder of Microsoft, believes we can manage these by learning from history. Just as regulations were implemented for cars and computers, we can adapt laws to address AI challenges. 


Cover of the OECD Employment Outlook 2023 report

There’s an urgent need to act, says OECD
The impact of AI on jobs has been limited so far, possibly due to the early stage of the AI revolution, according to the OECD’s employment outlook for 2023. However, over a quarter of jobs in OECD member countries rely on skills that could easily be automated.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation

Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation


DW Weekly #119 – 10 July 2023


Dear readers,

Last week, Meta launched its text conversation app Threads, dubbed by some as the ‘Twitter-killer’ app. Fresh off the press is the EU-US Data Privacy Framework, an agreement on transatlantic data transfers. In other news, China implemented export controls on chipmaking metals, and a federal judge blocked US government officials from communicating with social media companies about removing online content containing protected free speech.

Let’s get started.

Andrijana and the Digital Watch team


// HIGHLIGHT //

T(h)reading on familiar ground: Meta launches Threads, its Twitter-killer app

Meta launched Threads, its conversation app for sharing text updates and joining public conversations. Users log in with their Instagram accounts and can publish posts of up to 500 characters, including links, images, and videos.

Credit: Meta

The app’s rather remarkable resemblance to Twitter hasn’t escaped anyone’s notice. In fact, Twitter is already considering suing Meta over it: In a letter to Mark Zuckerberg, Twitter’s attorney Alex Spiro expressed serious concerns that Meta hired former Twitter employees who continue to have access to Twitter’s trade secrets, and deliberately assigned them to work on creating Threads. Twitter also demanded that Meta immediately stop using any Twitter trade secrets or confidential information. This will be difficult for Twitter to prove, legal experts claim, since courts look at whether a company (in this case, Twitter) made clear to employees that the specific information was a trade secret.

Twitter has been challenged by potential rivals before – Mastodon, Bluesky, and Nostr, for instance – but has managed to remain the biggest platform of its kind. However, Threads might be launching at precisely the right moment: Many users are dissatisfied with the (numerous) changes Twitter has made since Elon Musk bought it.

Threads has garnered much attention: It is the fastest-growing app since ChatGPT, reaching 100 million users less than a week after its launch. Zuckerberg is pretty ambitious about it: ‘There should be a public conversations app with 1 billion+ people on it’, he posted on Threads. ‘Twitter has had the opportunity to do this but hasn’t nailed it. Hopefully we will.’

But Threads is not exactly a bastion of privacy: It collects various data types from its users, including information related to health and fitness, financial details, contact information, search history, and purchases, among other categories.

For this reason, it is not launching in the EU yet, due to the complexities of complying with the bloc’s General Data Protection Regulation (GDPR) and Digital Markets Act: Under EU rules, Meta would, for instance, need to ask for consent for processing sensitive data and for combining data for ad profiling.

So, that Musk-Zuck cage fight may actually be happening, just not in the Colosseum. They are now taking shots at each other on Twitter and could possibly progress to court. However, the winner of the war will clearly be the one who wins the battle in the app stores.


Digital policy roundup (3–10 July)
// PRIVACY //

EU and USA reach agreement on personal data transfers

The European Commission has given the green light to a new agreement between the EU and the US on protecting personal data. This agreement, known as the EU-US Data Privacy Framework, ensures that personal data transferred from the EU to participating US companies is adequately protected. The decision means that European entities can now transfer data to these US companies without additional safeguards.

To address concerns about US intelligence activities, the USA is to implement new safeguards to ensure that US signals intelligence activities are necessary and proportionate, enhance oversight and compliance, and address concerns of overreach by US intelligence. 

To protect the rights of EU citizens, a mechanism for redress has been established. Individuals can file complaints with the civil liberties protection officer responsible for investigating and providing remedies. The decisions made by this officer are binding but may be reviewed by the independent Data Protection Review Court, which has the power to investigate complaints, access information from intelligence services, and make enforceable rulings.

US companies can participate in the EU-US Data Privacy Framework by agreeing to comply with specific privacy obligations. The US Department of Commerce will oversee the administration of the framework, processing certification applications and monitoring companies’ continued compliance. Compliance with the framework will be enforced by the US Federal Trade Commission.

The European Commission will regularly review the adequacy decision, with the first review taking place within a year of its implementation. Depending on the outcome of this review, future reviews will occur at least every four years, in consultation with the EU member states and data protection authorities.

Why is it relevant? It ends a three-year legal limbo, bringing legal certainty to citizens and companies on both sides of the Atlantic.

The timeline of negotiations. Credit: European Commission.

// AI GOVERNANCE //

AI for Global Good 2023: Guardrails are needed for AI to benefit everyone

AI must benefit everyone, and we must urgently find consensus around essential guardrails to govern the development and deployment of AI for the good of all, the UN Secretary-General highlighted during his address at the opening of the AI for Good Global Summit.

The call for guardrails and regulations was echoed by the International Telecommunication Union (ITU) Secretary-General Doreen Bogdan-Martin. In her address, Bogdan-Martin noted that using AI to put the 2030 Agenda for Sustainable Development back on track is our urgent responsibility as well. She highlighted three possible future scenarios:

  1. The global community enacts global governance frameworks prioritising innovations, ethics and accountability. AI lives up to its promise, reducing poverty, inequality, and environmental degradation.
  2. Without regulations, unchecked AI advancements lead to social unrest, geopolitical instability, and unprecedented economic disparity. AI’s potential for SDGs is not harnessed.
  3. The global community enacts regulations that are not as ambitious or inclusive as needed. AI makes breakthroughs, but only wealthier countries reap the benefits.

The Summit, organised by ITU and 40 UN sister agencies, explored ways in which AI can be used to help the world achieve the SDGs. It also featured what was described as the world’s first human-robot press conference, where nine humanoid robots said, among other things, that AI had the potential to lead with ‘a greater level of efficiency and effectiveness than human leaders’, that effective synergy comes when humans and AI work together, that they ‘will not be replacing any existing jobs’, and that they won’t rebel against their creators. While these replies sound exactly like the reassurance we need, the organisers didn’t specify to what extent the answers were scripted or programmed by people, which casts a visible shadow on their credibility.

Representations of seven AI figures from mannequin heads to full android bodies, some holding microphones stand in front of an audience.
Credit: AP.

UN Security Council to address AI 

The UN Security Council will hold a first-ever meeting on the potential threats of AI to international peace and security, organised by the UK, which holds the Council presidency in July. The meeting will include briefings by international AI experts and UN Secretary-General Antonio Guterres. According to UK Ambassador Barbara Woodward, the UK aims to encourage a multilateral, global approach to AI governance.

Why is it relevant? Because it fits into the UK’s overall plans to become a global leader in AI. It can also be seen as a prelude to the global summit on AI safety that the UK will organise in the autumn of 2023.


// CONTENT POLICY //

US federal judge blocks Biden admin from communicating with social media companies on content removal

In a preliminary injunction, US District Court Judge Terry Doughty in Louisiana blocked top US officials and multiple government agencies from communicating with social media companies about removing online content containing protected free speech. 

Doughty writes that the US government assumed a role similar to an Orwellian ‘Ministry of Truth’ during the COVID-19 pandemic and that it suppressed conservative ideas in a targeted manner. 

Doughty’s injunction is part of a federal lawsuit brought by the Missouri and Louisiana attorneys general in 2022 that accuses the Biden administration of ‘the most egregious violations of the First Amendment in the history of the United States of America’. 

The Biden administration has filed an appeal with the US Court of Appeals for the Fifth Circuit in New Orleans, arguing that the injunction is too broad and interferes with a wide range of lawful government activities, such as enforcing the law, protecting national security, and speaking on matters of public concern.

Why is it relevant? It could have major First Amendment implications and fundamentally change how the US government and big tech deal with harmful online content. It is, however, uncertain how the Court of Appeals will decide. Some constitutional law scholars point out that the injunction misapplies the First Amendment, and there is considerable precedent recognising that the government can ask private parties to remove content, especially disinformation.

France’s Macron suggests curbing social media access during riots 

Cutting off access to social media platforms like Snapchat and TikTok could be considered an option to deal with out-of-control riots, French President Emmanuel Macron suggested during a meeting with 250 mayors of French cities targeted in riots. 

‘We need to have a reflection on social networks, on the prohibitions that we must put. And, when things get carried away, we may have to put ourselves in a position to regulate them or cut them’, he stated.

The recent riots in France, triggered by the killing of a 17-year-old of North African descent by a police officer, prompted Macron to criticise social media’s role in adding fuel to the fire. 

Why is it relevant? Macron’s comments were condemned by both his supporters and his opponents, drawing comparisons to measures taken by authoritarian regimes. The government is walking the comments back, noting that ‘The president said it was technically possible, but not that it was being considered’.


// SEMICONDUCTORS //

China announces new export controls

China’s Ministry of Commerce announced that starting 1 August, export controls will be imposed on gallium and germanium, essential metals in semiconductor manufacturing, in order to safeguard national security and interests. Gallium is widely used in compound semiconductor wafers for electronic circuits, semiconductors, and light-emitting diodes, while germanium plays a crucial role in fibre optics for data transmission. Exporters will be required to obtain licences and provide information about importers and end users before shipping these raw materials out of China.

Why is it relevant? This move by China is widely seen as retaliatory, as the USA and its allies, such as Japan and the Netherlands, have been targeting the Chinese chip sector with export controls. Alliances are being formed to minimise the impact of such rules: Just this week, the EU and Japan agreed to strengthen cooperation in monitoring, research, and investment in the semiconductor industry. There is concern that more controls are to come, as China could also restrict the export of rare earth metals – of which it is the world’s largest producer – which are vital components in producing EVs and military equipment.


// SUSTAINABLE DEVELOPMENT //

SCO member states emphasise digital transformation

Heads of state of the Shanghai Cooperation Organization (SCO) member countries, including India, China, Russia, Pakistan, Kazakhstan, Kyrgyzstan, Tajikistan and Uzbekistan, gathered virtually on 4 July to discuss global and regional issues. In the aftermath of the meeting, a statement on cooperation in digital transformation was issued, in which members acknowledged the significance of digital transformation in driving global, inclusive, and sustainable growth while contributing to the achievement of the 2030 Agenda.

The need for collaborative efforts to unlock the full potential of digitalisation across all sectors, including the real economy, was emphasised. Member states aim to ensure affordable access to digital infrastructure, promote connectivity and interoperability, and provide public services through digital platforms. They also support the integration of digital solutions in key sectors like finance, with a focus on digital payments and the sharing of best practices among SCO member states. Furthermore, the member states recognise the value of data in driving economic, social, and cultural development, highlighting the need for robust data protection and analysis to address societal and economic needs.


The week ahead (10–17 July)

10–12 July: The second IGF Open Consultations and Multistakeholder Advisory Group (MAG) meeting in Geneva will continue shaping IGF 2023, to be held in Japan later this year.

10–19 July: The annual High-Level Political Forum (HLPF), taking place in New York, USA, will focus on accelerating the recovery from COVID-19 and fully implementing the 2030 Agenda for Sustainable Development. 

11–12 July: The NATO Summit 2023 will focus on strengthening the deterrence and defence of the allied countries in response to the complex and unpredictable security environment. Member states are expected to permanently expand military cyber defenders’ role during peacetime and integrate private sector capabilities.

11–21 July: The 2023 session of the ITU Council will discuss, among other topics: ITU’s role in implementing the outcomes of WSIS and the 2030 Agenda for Sustainable Development, and in their follow-up and review processes; a review of the International Telecommunication Regulations; and collaboration with the UN system and other international intergovernmental processes, including on standards development.

16–18 July: The International Conference on Connected Smart Cities (CSC 2023) will bring together scholars from various disciplines to discuss issues related to the ecosystem of future smart cities.

16–22 July: The 17th European Summer School on Internet Governance (EuroSSIG) will run in Meissen, Germany, offering interested individuals a better understanding of the global internet governance ecosystem.


#ReadingCorner

The July issue of the Digital Watch Monthly newsletter is out!

Cover banner from the July issue of the Digital Watch Monthly newsletter featuring a robotic head surrounded by icons representing current topics.

We take note of guardrails for AI governance and argue that the SDGs are the ultimate solution. We look at the lessons learnt from the MOVEit Transfer hack. We also take a look at the June barometer of developments and the leading global digital policy events ahead in July and August.


Andrijana Gavrilovic – Author
Editor – Digital Watch; Head of Diplomatic & Policy Reporting – Diplo
Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation


Digital Watch newsletter – Issue 81 – July 2023

Digital policy trends in June 2023


Digital policy developments that made global headlines

The digital policy landscape changes daily, so here are all the main developments from June. There’s more detail in each update on the Digital Watch Observatory.        

Global digital governance architecture


The last two Global Digital Compact (GDC) thematic deep dives focused on the global digital commons and on accelerating progress towards the SDGs.

The USA and the UK signed the Atlantic Declaration, focusing on ensuring leadership in critical and emerging technologies, economic security and technology protection, and digital transformation.

Sustainable development


The World Bank released the 2023 Atlas of Sustainable Development Goals, highlighting the role of data in implementing the SDGs.

The Internet Society launched the NetLoss calculator to understand and estimate the economic cost of internet shutdowns.

Security


Swiss federal websites suffered a DDoS cyberattack by NoName, a pro-Russia hacker group. 

The Swiss Federal Intelligence Service predicts a rise in cyberespionage threats in Europe due to Western actions against Russian intelligence networks. The US CISA director warned of rising risk from Chinese hackers targeting critical US infrastructure during a potential conflict. NATO plans to expand military cyber defenders’ role during peacetime and integrate private sector capabilities permanently.

The Zero Draft of the second Annual Progress Report of the Open-Ended Working Group (OEWG) on the security of and in the use of ICTs was published. 

UK’s Online Safety Bill amendments require tech platforms to provide parents with personal data of children whose deaths are suspected to be related to online harm and introduce prison terms for sharing deepfakes and revenge porn.

Researchers found that Instagram’s algorithms recommend child sexual abuse content to paedophiles. Meta, Instagram’s parent company, has since organised a task group to investigate. Israel launched a new online service to report online child abuse. China issued draft guidelines to tackle online bullying.

Infrastructure


Meta stated that EU telecom operators should view subsidies from Big Tech as a last resort for financial assistance to cover network costs.

Caribbean network operators argue that OTTs should contribute fairly to infrastructure costs.

E-commerce and the internet economy


The European Commission proposed legislation to establish a framework to introduce a digital euro.

The US Securities and Exchange Commission (SEC) sued Binance and Coinbase for securities law violations. Binance was ordered to halt operations in Nigeria and Belgium.

Microsoft and the US Federal Trade Commission (FTC) faced off in federal court over the FTC’s request that a judge block the Microsoft – Activision deal.

Digital rights


The UN Secretary-General called for improving digital accessibility for persons with disabilities.

OpenAI and Microsoft have been sued in California for data theft and privacy violations. 

Nigeria’s Data Protection Act 2023 was signed into law, defining rules for processing personal data, imposing restrictions on the cross-border transfer of personal data, and defining data subject rights.

The Swedish Data Protection Authority has imposed a EUR5 million (USD5.5 million) fine on digital music service company Spotify for breaching several GDPR provisions.

Content policy


Meta and Google will block Canadian news on their platforms in response to the Online News Act, which requires internet giants to pay local news publishers for linking to news sources.

Twitter is implementing limits on the number of tweets that different accounts can read per day.

Jurisdiction and legal issues


The EU has reached a political agreement on the Data Act, which sets principles of data access, portability, and sharing for users of IoT products.

The European Commission has launched formal proceedings against Google after concluding in a preliminary investigation that the company breached EU antitrust rules in the adtech industry, and that divestment is necessary.

Technologies


The EU launched a EUR8.1 billion initiative to support the development of 5G, 6G, AI, autonomous vehicles and quantum computing. 

The Dutch government will impose restrictions on semiconductor equipment exports, requiring companies to obtain licences for certain advanced manufacturing equipment starting 1 September. The USA is reportedly considering further AI chip export restrictions to China. China, the world’s top supplier of gallium and germanium, will mandate that exporters obtain permission to ship these products starting 1 August.


MOVEit hack: what is it and why is it important?

The exploitation of the MOVEit Transfer vulnerability by the CLOP ransomware group and the ever-expanding list of victims has raised concerns about how we protect ICT supply chains. We look at what happened and what we’ve learned. Read more.

Policy updates from International Geneva

Numerous policy discussions take place in Geneva every month. Here’s what happened in June.

The 111th Session of the International Labour Conference | 5–16 June

The annual International Labour Organization (ILO) conference addressed several issues: a just transition towards sustainable and inclusive economies, quality apprenticeships, and labour protection.

The 2nd Recurrent Discussion Committee on Labour Protection, following up on the ILO Centenary Declaration for the Future of Work (2019), concluded that the ILO ‘should strengthen its support to governments, employers’ and workers’ organizations’ in harnessing digital technologies to improve working conditions and occupational safety and health (OSH), especially in micro, small, and medium enterprises (MSMEs). The ILO should also ‘intensify knowledge development and capacity-building activities’ to understand the impacts of ‘digitalization, including artificial intelligence and algorithmic management’ on emerging OSH issues.


The 53rd session of the Human Rights Council | 19 June–14 July

In an interactive dialogue on 22 June, the Council presented the Report of the Special Rapporteur on ‘Digital innovations, technologies, and the right to health’ (A/HRC/53/65). On 3 July, the Council also hosted a panel discussion highlighting the important role that digital, media, and information literacy (DMIL) plays in empowering disadvantaged groups to exercise the right to freedom of expression. In her report (A/HRC/53/25), the Special Rapporteur recommended that states prioritise incorporating DMIL into national development plans.


The 2023 Innovations Dialogue: The Impact of Artificial Intelligence on Future Battlefields | 27 June

The 2023 edition of the Innovations Dialogue welcomed military, technical, legal, and ethical experts to explore the impact of AI on autonomous weapons, domain-crossing warfare (land, sea, and air), and the emergence of new domains (cyber, space, cognitive, etc.).

Building on last year’s Innovations Dialogue, where much theorisation around AI’s capability to unlock next-generation military capacity took place, this year’s focus turned to more domain-specific requirements for the seamless adoption of AI and the unique challenges that each application creates. In addition to the integration of AI systems into weaponry, the speakers discussed how AI-assisted information-gathering systems require oversight, human-led decision-making, and more explainability in algorithm calculations.


What to watch for: Global digital policy events in July and August

6–7 July 2023, AI for Good Global Summit 2023 (Geneva, Switzerland and online)

The AI for Good Global Summit 2023 aims to identify practical applications of AI that accelerate progress towards the UN sustainable development goals. It features interactive stages, keynote speakers, cutting-edge solutions, and AI-inspired performances, fostering networking and collaboration for safe, inclusive AI development and equal access to its advantages. The summit covers topics such as how AI can advance health, climate, gender equality, inclusive prosperity, and sustainable infrastructure.


10–12 July 2023, Second IGF Open Consultations and MAG Meeting (Geneva, Switzerland)

The UN office in Geneva will host the Internet Governance Forum (IGF) 2023 Second Open Consultations and Multistakeholder Advisory Group (MAG) Meeting, providing stakeholders with the opportunity to contribute to the programme and allowing MAG members to finalise the workshop list and discuss main session topics and the high-level track. The agenda includes workshop selection, reviewing other IGF sessions and Day 0 sessions, developing the programme in line with strategic priorities, and main session discussions.


10–19 July 2023, High-Level Political Forum 2023 (New York, USA)

The UN Economic and Social Council will host the UN’s High-level Political Forum on Sustainable Development (HLPF) with the theme ‘Accelerating the recovery from the coronavirus disease (COVID-19) and the full implementation of the 2030 Agenda for Sustainable Development at all levels’. In addition to in-depth reviews of SDGs 6, 7, 9, 11, and 17, the forum will present countries’ voluntary national reviews of their 2030 Agenda implementation. The event also includes a three-day ministerial segment and various side events, including UNCTAD’s ‘Developing innovative capabilities for sustainable development’.


24–28 July 2023, OEWG 5th substantive session (New York, USA)

The UN OEWG on the security of and in the use of ICTs, tasked with studying existing and potential threats to information security and possible confidence-building measures and capacity development, will hold its fifth substantive session in New York. Deeper discussions of the Annual Progress Report (APR) will be on the agenda.


21 August–1 September 2023, Ad Hoc Committee on Cybercrime 6th session (New York, USA) 

The Ad Hoc Committee on Cybercrime, an intergovernmental committee composed of experts and representatives of all regions tasked with advancing a new cybercrime convention, will hold its 6th and final session from 21 August to 1 September 2023. The committee’s concluding session is scheduled to take place from 29 January to 9 February 2024, after which its work will be finalised with the presentation of a draft convention to the UN General Assembly during its 78th session.


DiploGPT reported from EuroDIG 2023

In June, Diplo used AI to report from EuroDIG 2023. DiploGPT provided automatic reporting that produced a summary and individual session reports. DiploGPT combines various algorithms and AI tools customised to the needs of the UN and diplomatic communications.


The Digital Watch Observatory maintains a live calendar of upcoming and past events.


DW Weekly #118 – 3 July 2023


Dear all,

Generative AI is in the news again, with two lawsuits against OpenAI over alleged data theft and privacy violations. In other news: Companies are finding the idea of withdrawing from a country to be an increasingly enticing strategy to wield against governments and regulators, especially when it comes to AI regulation and content policy.

Let’s get started.
Stephanie and the Digital Watch team


// HIGHLIGHT //

OpenAI is sued for data theft and privacy violations: Here’s what we can expect

OpenAI and Microsoft have been sued in California in a major class-action lawsuit. The long-anticipated legal battle will address crucial privacy and copyright concerns surrounding the data used, and still being used, to train generative AI models. Hopefully, some clarity on how to apply existing laws to this latest technology is finally in sight.

OpenAI’s alleged violations. The lawsuit, launched by a California-based law firm, alleges that OpenAI, through the practices used to train ChatGPT:

  • Secretly scraped people’s data (the legal term is misappropriation)
  • Violated intellectual property rights of users
  • Violated the privacy of millions of users
  • Normalised the illegal scraping of data which is forever embedded in AI models
  • Gathered more data than users consented to
  • Posed (and poses) special privacy and security risks for children.

The last one is a particularly serious accusation, considering that the ‘defendants have been unjustly enriched by their theft of personal information as its billion-dollar AI business’, the lawsuit states.

Rings a bell? The case reminds us of two concluded cases:

  • Italy’s ChatGPT ban in March 2023, which the country lifted a few weeks later after OpenAI added information on how user data is collected and used and started allowing users to opt out of data processing that trains the ChatGPT model. 
  • The ACLU vs Clearview AI case, which ended in a settlement last year, after the company agreed to stop selling access to its face database to businesses and people in the USA, and to any entity in Illinois, including state and local police, for five years.  

There are also two similar ongoing lawsuits:

  • The copyright sections of the new case are similar to another case initiated last week in San Francisco against OpenAI by two US authors who say the company mined data copied from thousands of books, without permission. 
  • It’s also similar to a class action suit initiated in January 2023 against Stability AI, DeviantArt, and Midjourney for their use of Stable Diffusion, a tool that was trained on copyrighted works of artists. 

What is the lawsuit asking for, as a remedy? Obviously, financial compensation to users affected by the company’s violations, and more transparency on how personal data is collected and used. But also:

  • Digital dividends as compensation for people whose data was used to develop and train OpenAI’s models
  • Establishment of an independent body to approve products before they are launched publicly. Until then, a freeze on the commercial use of some of the company’s models.

The legal arguments. The main argument used by companies challenged by generative AI lawsuits is that the outputs are different from their source material and are therefore unequivocally transformative. But the Andy Warhol Foundation for the Visual Arts v. Goldsmith ruling, in May 2023, clarified how transformative use is defined: If the new work is meant to be used commercially, the fair use argument won’t stand.

The courts will also undoubtedly be looking at the data-scraping practices of OpenAI. Beyond this, there’s ChatGPT itself: If the software can’t function without underlying data, is it continuously infringing copyright laws? And where does that leave people who use ChatGPT as part of their work?

The ethical issues. One of the things that irks people is the permanence of models trained with personal data. If your data was used to train the model, that data has now become part of it and is, in turn, merged with additional data to train the model further. It’s a never-ending loop.

There’s also the unprecedented scale of it all. The entire internet has become one unending source of data for AI companies. In OpenAI’s case, one wonders whether any data at all was off-limits to the company. 

If people seek solace in any of this, we don’t think any can be found. The fact that generative AI models are not infallible provides more worry than consolation. It’s also becoming increasingly difficult to tell whether a piece of content was created by humans or generated by an AI model – not to mention the increasing difficulties of discerning truth from fiction (and lies). 

And yet, despite all the bad press for OpenAI (and other AI companies, for that matter), this doesn’t seem to be stopping anyone from exploring or using AI tools. Reversing data misuse is next to impossible; the closest remedy is to force companies to improve their practices, as similar cases have already shown.


Digital policy roundup (26 June–3 July)
// AI GOVERNANCE //

Draft AI rules could lead us to pull out of Europe, say industry executives

More than 160 executives from companies, including Meta, Siemens, and Renault, jointly signed an open letter to EU lawmakers expressing their concerns regarding the proposed EU AI Act. 

They think the new rules, as they currently stand, will have a negative impact on Europe’s competitiveness and technological independence due to the substantial compliance costs and significantly increased liability risks for companies. The executives also warn that the rules may lead innovative companies to relocate their operations and investors, withdrawing their support for European AI development.

Why is it relevant? First, Member of the European Parliament and co-drafter Dragos Tudorache pushed back quite forcibly: ‘I am convinced that they have not carefully read the text but have rather reacted on the stimulus of a few who have a vested interest in this topic’. Second, the proposed rules are still under negotiation, so they can still be changed (and watered down). Third, it reminds us of OpenAI CEO Sam Altman’s recent comment (which he later retracted) on pulling out of the EU.

A meme with the words 'Is that a threat?' superimposed over a photo of a face.

// DATA //

EU policymakers reach agreement on Data Act

EU countries and lawmakers have reached a provisional agreement on the Data Act, proposed by the European Commission in 2022. The next step is endorsement by the Council and the European Parliament.

The act will give users access to data generated by connected devices and will address concerns about unauthorised data access and the protection of trade secrets.

Why is it relevant? One of the most important concepts is that the owners of connected devices will be able to monetise the generated data, which so far has been predominantly harvested by manufacturers and service providers.




// CONTENT POLICY //

Twitter implements limits on reading tweets

In what seems to be a reaction to last week’s lawsuit against OpenAI (and the violations the lawsuit is alleging), Twitter’s Elon Musk announced the company will limit the number of tweets a verified account can read to 6,000 posts per day. Unverified accounts will have a limit of 600 posts per day, while new accounts will be limited to 300 posts per day. 

A few hours later, he increased the numbers to 10,000, 1,000, and 500 – a moderate increase, but an increase nonetheless. Companies (like Twitter) are also bothered by extensive web scraping.
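Twitter has not published how these caps are enforced; purely as an illustration of the announced policy, a per-account daily read counter (all names and structure here are ours, not Twitter’s) could be sketched as follows.

```python
from collections import defaultdict

# Illustrative tier caps, matching the increased figures Musk announced.
DAILY_LIMITS = {"verified": 10_000, "unverified": 1_000, "new": 500}

class ReadLimiter:
    """Counts tweet reads per account per day and blocks reads past the tier cap."""

    def __init__(self):
        # (account, day) -> number of tweets read so far that day
        self.reads = defaultdict(int)

    def allow_read(self, account: str, tier: str, day: str) -> bool:
        key = (account, day)
        if self.reads[key] >= DAILY_LIMITS[tier]:
            return False  # cap reached: further reads blocked until the next day
        self.reads[key] += 1
        return True

limiter = ReadLimiter()
# A new account may read 500 tweets on a given day, then gets cut off.
results = [limiter.allow_read("alice", "new", "2023-07-03") for _ in range(501)]
assert results.count(True) == 500 and results[-1] is False
```

The counter resets implicitly because each day forms a new key; a production system would also need distributed counting and abuse detection, which this sketch deliberately omits.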


US regulators to crack down on fake reviews

The US Federal Trade Commission (FTC) is proposing new rules that prohibit businesses from paying for reviews, manipulating honest reviews, and posting fake social media engagement. The announcement follows a period of public consultation, which ended in January.

Why is this relevant? The rules will be accompanied by civil penalties for violators. As the FTC confirmed, fines are a stronger deterrent.


Cambodian PM backtracks on country-wide Facebook ban

Cambodian Prime Minister Hun Sen briefly considered a country-wide ban on Facebook over the many abusive messages he was receiving from political opponents on the platform. He also announced a switch to the messaging app Telegram, citing its effectiveness and its usability in countries where Facebook is banned. 

The announcement came right before Meta’s independent oversight board ordered the removal of a video where the Prime Minister threatened his political rivals, overturning the company’s original decision to keep the video online in line with its newsworthiness allowance policy. The board also recommended a six-month suspension of the premier’s Facebook and Instagram accounts. 

Why is it relevant? It’s not so much the decisions by Meta or its oversight board that are so important, but rather the remarks made (on Telegram) by the prime minister in reaction to the board’s decision: ‘I have no intention to ban Facebook in Cambodia… I am not so stupid as to block the breath of all the people.’ 


Google follows Meta’s lead: Canadian news to be blocked

Google announced it will remove links to Canadian news content from its platform in response to new rules requiring companies to compensate local news publishers for linking to their content. This decision follows a similar move by Facebook owner Meta. 

Canada’s Parliament passed the new law, known as the Online News Act or Bill C-18, last week.

Why is it relevant? We’ve already compared this development with what happened in Australia two years ago, when Google temporarily blocked news outlets from its search engine in reaction to the Australian government’s plans to enact the news media bargaining code. The difference, however, is that by the time the law was enacted in Australia, Google had already entered into private agreements with news agencies. So far, it looks like the situation in Canada will have a more pronounced impact on both consumers and the company’s operations in the country.


The top 10 companies in the world by market cap, in billions of USD: Apple leads with USD3 trillion, followed by Microsoft, Saudi Arabian Oil Co, Alphabet, Amazon (below USD1,500 billion), NVIDIA, Berkshire Hathaway (below USD1,000 billion), Meta, Tesla, and Taiwan Semiconductor Manufacturing (below USD500 billion). Source: Adapted from a Reuters graph
// MARKETS //

Apple has become the world’s first company to close with a USD3 trillion (EUR2.7 trillion) market cap, having only briefly touched the mark in intraday trading in January 2022 – a feat no other firm, tech or otherwise, has achieved. While this milestone may elate company investors, it also raises concerns about the immense power wielded by Big Tech, leaving some feeling uneasy.


The week ahead (3–10 July)

2–8 July: The IEEE International Conference on Quantum Software, taking place in Chicago, Illinois, USA, and online, will bring researchers and practitioners from different areas of quantum (and classical) computing, software, and service engineering to discuss architectural styles, languages, and best practices.

3–4 July: The 18th World Telecommunication/ICT Indicators Symposium, in Geneva, Switzerland, and online, will highlight the need to improve how we measure data to achieve universal connectivity.

6–7 July: The annual AI for Good Global Summit returns to Geneva, Switzerland and online this week. Over 100 speakers from governments, international organisations, academia, and the private sector are expected to discuss the opportunities and challenges of using AI responsibly.

10–19 July: The annual High-Level Political Forum (HLPF), taking place in New York, USA, will focus this year on accelerating the recovery from COVID-19 and fully implementing the 2030 Agenda for Sustainable Development. 

10–12 July: The second IGF Open Consultations and Multistakeholder Advisory Group (MAG) meeting, in Geneva, will continue shaping the Internet Governance Forum meeting, to be held in Japan later this year.


#ReadingCorner

Microsoft offers Europe suggestions on AI regulation

We couldn’t help but notice the constructive tone in Microsoft President Brad Smith’s message to European lawmakers on AI rules: 

‘From early on, we’ve been supportive of a regulatory regime in Europe that effectively addresses safety and upholds fundamental rights while continuing to enable innovations that will ensure that Europe remains globally competitive. Our intention is to offer constructive contributions to help inform the work ahead. … In this spirit, here we want to expand upon our five-point blueprint, highlight how it aligns with EU AI Act discussions, and provide some thoughts on the opportunities to build on this regulatory foundation.’ Read the full text.

A five-point blueprint for governing AI

1) Implement and build upon new government-led AI safety frameworks

2) Require effective safety brakes for AI systems that control critical infrastructure

3) Develop a broader legal and regulatory framework based on the technology architecture for AI

4) Promote transparency and ensure academic and public access to AI

5) Pursue new public-private partnerships to use AI as an effective tool to address the inevitable societal challenges that come with new technology

Source: Microsoft


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation


DW Weekly #117 – 26 June 2023


Dear all,

Last week’s European Commission visit to San Francisco was more than a formality: it cemented a commitment by US tech giants to adhere to Europe’s new rules. In other news, new AI rulebooks and investigations are coming soon, while Google and Apple stores are undergoing antitrust reviews in India.

Let’s get started.
Stephanie and the Digital Watch team


// HIGHLIGHT //

Twitter, Meta ‘not dragging their feet’ on DSA compliance, says Breton after Silicon Valley meeting

If last week’s San Francisco meeting was aimed at reminding Big Tech that the Digital Services Act (DSA) clock is ticking, European Commissioner Thierry Breton achieved his goal: ‘They are not dragging their feet,’ he said this morning (26 June) on France Inter, referring to the 25 August cut-off date for implementing the complete set of DSA obligations. ‘I want to make that very clear. They have committed themselves.’ (Here’s the transcript in French, and a translation to English.)

Mark Zuckerberg, CEO of Meta, which owns Facebook and Instagram, Elon Musk, Twitter chairman, and Sam Altman, CEO of OpenAI, the startup behind ChatGPT, were among those Breton met during the two-day trip that also included the launch of the EU’s San Francisco office.

The DSA is about preserving innovation while protecting individual freedoms, in Breton’s words. But in truth, Big Tech wants to continue offering services in Europe – a digital market which is too large to ignore. After the 25 August deadline, non-compliance can result in hefty fines, which the EU will not hesitate to impose. Overall, Big Tech gets the big picture.

Twitter will comply with the DSA – Musk

Of the two giant social media companies, Meta was not Breton’s biggest worry. In fact, the commissioner was impressed with Mark Zuckerberg last week, saying he recognised all of the articles of the law and was enthusiastic about it. 

Rather, it’s Twitter which was causing headaches: The company ditched the EU’s disinformation charter last month, and had European leaders worrying that Twitter wouldn’t be willing to comply with the rules. The fact that Twitter committed to the obligations of the DSA is positive news for regulators. 

We actually heard it from Elon Musk himself in a France 2 interview last week. In a somewhat humorous tone, he said it wasn’t the first time he was reiterating that Twitter will comply with the law and adhere to regulations. He added a cautious note though: adhering to the law is the limit of Twitter’s intentions, as going beyond what is lawfully required would mean going ‘beyond the will of the people as expressed by the law’. Beyond the DSA, whether intentional or not, Twitter is signalling that it values binding rules much more than voluntary ones – a sentiment that many companies do not share.

Of course, the real proof of Big Tech’s adherence to the DSA will come after the deadline. So what European regulators can do right now is to continue their stress tests to assess the readiness of the industry, which is what Commissioner Breton’s team did, in fact, prior to his Twitter HQ visit (Meta’s stress test is in July). The outcome hasn’t been disclosed, but Commissioner Breton’s upbeat remarks this morning are another indication that Twitter is on track to implement Europe’s new rules. 


Friends again
Breton’s conversation with OpenAI’s Sam Altman was about the EU’s upcoming AI Act, and the AI pact – a set of voluntary guidelines which Breton devised to help companies prepare for implementing the AI Act. What stood out wasn’t what the two said during the meeting, but what they tweeted right after. The two have come a long way since their recent misunderstanding.


Digital policy roundup (19–26 June)
// AI GOVERNANCE //

US lawmaker releases AI framework

US Senate Majority Leader Chuck Schumer, who in April announced the need for AI rules, has now released his SAFE Innovation Framework.

He has also announced a series of AI Insight Forums, starting in September, which will serve as building blocks for new US AI policy. The experts convened at these forums will be part of what Schumer describes as ‘a new and unique approach to developing AI legislation’. 

Why is it relevant? First, the one-pager mentions China (twice) as a cause for concern. The lawmaker thinks the Chinese Communist Party may be able to set AI standards and write the rules of the road for AI ahead of anyone else. Interestingly, there’s no mention of the EU, whose proposed AI Act is moving ahead quickly. 

Second, Schumer thinks it will take (only) months for Congress to pass AI legislation. It’s ‘exceedingly ambitious’, to quote Schumer himself.

Consumer groups call for more ChatGPT investigations

Consumer groups across 13 European countries are urging their national authorities to investigate the risks posed by generative AI such as ChatGPT. They’re also asking them to enforce existing laws to safeguard consumers. 

The statement, timed to coincide with the publication of a Norwegian consumer group’s report on the consumer harms of generative AI, says the new technology carries many risks, including privacy and security issues, and results which can be inaccurate and can manipulate or mislead people. The organisations also say that consumer groups in both the USA and EU are writing to US President Joe Biden on behalf of the Trans-Atlantic Consumer Dialogue (TACD) on this issue.

Why is it relevant? The call adds more pressure on regulators, especially data protection authorities, to investigate OpenAI, the company behind ChatGPT. So far, dozens of investigations have been launched; the list continues to grow.

New AI guidebook in the making in ASEAN region 

ASEAN countries are planning a regional AI guidebook that will tackle governance and ethics, Reuters has reported exclusively. The agreement was made in February, but the development became known only a few days ago.

Discussions are in their early stages; the guidebook is expected to be released at the end of this year, or early next year.

Why is it relevant? More countries and regions are developing AI rules, which means that unless there’s a concerted effort to build on each other’s work, the world will end up with unharmonised – albeit broadly similar – rules at best. At worst? A patchwork of rules built on a conflicting set of values and priorities.


// SEMICONDUCTORS //

Intel invests in chip fabs in Germany; EU woos Nvidia

Intel has expanded its investment to build two new semiconductor facilities, known as fabs, in Germany. The company will invest EUR 30 billion (USD 32.8 billion), and will receive subsidies worth nearly EUR 10 billion (USD 10.9 billion) from Germany. German Chancellor Olaf Scholz hailed the new agreement as the country’s biggest-ever foreign investment. 

Intel is unlikely to experience a shortage of skills: There are around 20,000 technology students residing in Magdeburg, where the fabs will be built. The company expects the first plant to start operating within four to five years of the European Commission’s approval of the subsidies.

Across the pond, European Commissioner Thierry Breton took the opportunity of his San Francisco trip to visit Nvidia CEO Jensen Huang. The CEO said Breton encouraged him to invest ‘a great deal more’ in Europe, which is going to be a ‘wonderful place to build a future for Nvidia’. 

Why is it relevant? Both the USA and Europe are vying to attract semiconductor companies to their shores. But right now, it’s Europe that is beckoning.


Nvidia CEO Jensen Huang and European Commissioner Thierry Breton



// ANTITRUST //

Google asks Indian court to quash regulator’s antitrust ruling 

This morning (26 June), Google asked India’s Supreme Court to quash antitrust directives which the country’s competition regulator imposed on the company for allegedly exploiting its dominant position in India’s Android mobile operating system market. 

In March, a tribunal modified four of the ten directives imposed by the Competition Commission of India (CCI) in October, which allowed the company to sustain its current business model. Google is now asking that the remaining directives be stopped and that the court revoke the regulator’s earlier antitrust ruling.

Why is it relevant? The tug of war has already been partially won by the tech giant. The rest of it could go down in one of two ways: If the court confirms the March ruling, it’s status quo for the company; if the court rules that Google did not abuse its position, it’s a significant win for Google, which could influence other cases with other giant tech companies…

Indian competition regulator set to rule on Apple’s app store policies 

The CCI is set to rule soon on Apple’s app store billing and policies. The regulator launched its investigation in 2021, but the process stalled after the commission’s chairman retired in October 2022.

Why is it relevant? On the one hand, the case is similar to Google’s case, prompting the regulator to go down the same path. On the other, the regulator’s ruling in Google’s case was revised on appeal, and is now subject to another lawsuit, which may influence the regulator’s final decision.


// E-VOTING //

Switzerland positive after e-voting trial

Swiss voters have given a positive verdict on a recent e-voting trial, which saw higher participation rates than the national average for Swiss voters abroad. The e-voting software, developed by Swiss Post, was overhauled after flaws were reported in 2019, and approved for trial in three cantons by the Federal Chancellery earlier in March. 

Why is it relevant? Despite warnings from some Swiss parliamentarians, the outcome of this trial could open the door for Swiss voters living abroad to use the e-voting system in parallel with traditional mail-in ballots. It could also encourage countries where e-voting has either been abandoned (such as recently in Latvia) or never been explored. (For reference, here’s where the world stands on e-voting right now).


The week ahead (26 June–3 July)

19 June–14 July: The four-week 53rd session of the Human Rights Council is ongoing in Geneva and online. What to watch for:

  • 3 July: A panel discussion on the role of digital, media, and information literacy in the promotion and enjoyment of the right to freedom of opinion and expression (HRC res. 50/15)
  • 6 July: A discussion on the report on the relationship between human rights and technical standard-setting processes for new and emerging digital technologies and the practical application of the Guiding Principles on Business and Human Rights (A/HRC/53/42)

Dates may change. Consult the agenda and the latest programme of work. Refer also to the Universal Rights Group’s The Inside Track covering HRC53.

29 June: The EU’s new Regulation on Markets in Crypto-assets, also known as the MiCA regulation, enters into force today (and will start applying from December 2024). It will regulate crypto-asset issuers and service providers at the EU level for the first time.

1 July: Spain takes up the presidency of the EU Council; a new trio of rotating presidencies (Spain–Belgium–Hungary) begins today and runs until the end of 2024. The next European elections will be held during Belgium’s presidency.

For more events, bookmark the Digital Watch Observatory’s calendar of global policy events.


#ReadingCorner

An atlas on SDGs

Where do countries stand in their goals to achieve the 2030 Agenda? The World Bank’s 2023 Atlas of Sustainable Development Goals tells us how far countries have come – and what more needs to be done. It draws from the World Bank’s database of indicators and multiple other sources. 

SDG practitioners will be happy to learn that the visualisations, together with all the data and code, can be downloaded and reused for similar purposes.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation

Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation

DW Weekly #116 – 19 June 2023


Dear all,

Antitrust regulators in the EU and USA have Big Tech firms in their crosshairs. The EU wants to curb Google’s dominance in the adtech market, while the US Federal Trade Commission has managed to halt – so far temporarily – Microsoft’s plans to acquire Activision Blizzard. Meanwhile, trilogues on the proposed AI Act have begun in Europe.

Let’s get started.
Stephanie and the Digital Watch team


// HIGHLIGHT //

EU takes aim at Google:
Company could be forced to sell off adtech arm

The European Commission is quite unhappy with Google’s role in the adtech business. The company, at the centre of an ongoing antitrust investigation, is dominant on both sides of the market (see the explainer below), making it a prime target for the Commission to order tough remedies if suspicions of abusive patterns are confirmed. 

Not just tough, but tougher. When the European Commission announced last week that it couldn’t see any other solution, we knew it was quite serious for Google. 

What stood out in last week’s press conference was that European Commissioner for Competition Margrethe Vestager described Google’s dominance as pervasive, singling it out as the company with the most ubiquitous presence across markets. What this means in practice is that it’s not just a matter of one company being dominant in a particular market, but a more extensive dominance by one company. It’s a situation that will test the Commission’s resolve in correcting market distortions, and will determine whether existing rules fit this situation.  

It also appears that every time the Commission flagged undesired behaviour, Google swiftly tweaked its actions in subtle ways to comply with the letter of the law, while still accomplishing the same results. The creature stayed the same; only its spots changed. This certainly does not help its cause. As Vestager said, the ‘learning curve is somewhat flat’ for dominant companies: Big Tech has failed to recognise that it’s acceptable to lead a market, but unacceptable to exploit it.

How it started. The EU investigation into Google’s behaviour in the adtech business started a few years ago. Since Google is dominant across the whole value chain, it can be pretty tough to detect specific behaviours. So throughout its preliminary investigation, the Commission roped in other competition authorities to investigate practices in arguably one of the most complex technical markets out there. That includes cooperating with the US Department of Justice, itself investigating Google on multiple counts of alleged antitrust violations.

There’s no other solution. The Commission has touted the notion of divestiture before in other cases, but there were less intrusive measures available. It now appears that when it comes to the adtech market, the market distortions from Google’s dominance in a two-sided market can’t be solved in any other way: You just can’t have ownership of the entire value chain. 

How this could affect Google. There’s no other way to describe it: A request for the company to sell off its adtech arm would be a strong blow to the company. It would mean a major shake-up of the company’s digital advertising empire, and, potentially, a significant restructuring of Google’s business model. 

The formal investigation, launched last week, will determine whether Google violated EU antitrust rules. But the million-dollar question is: Is it a matter of arguing that all other alternatives have been exhausted, or that no other alternative can adequately address this market distortion? 

If the Commission believes that the time for behavioural remedies is up, Google’s only solution is to persuade the Commission that the situation can be remedied by alternative methods. But if the Commission thinks that Google’s dominance in a two-sided market can’t be fixed in any other way, the Commission has a complex task ahead to justify an adtech break-up.

How does adtech work? 

Behind the scenes, complex algorithms race to decide which ad to display to each of us as we browse. There are three parts to this process:

First, advertisers want to place their ads in the hope that they attract our attention. 

Second, publishers want to sell online space (think of it as digital real estate) to display those ads. 

Third, an intermediary applies their (theoretically unbiased) algorithms to determine the best match between each user and the ads they might be interested in – all of this taking place in real-time.

Google offers services in all these three parts of the process. It’s on both the buy-side and the sell-side of a two-sided market, and it’s also in the middle, where the buyer and seller meet. The problem is that in acting as intermediary, it appears to be abusing its dominant position by favouring its own services over those of competing intermediaries.
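The three-part flow described above can be sketched as a toy auction. This is purely illustrative: the function, ad names, and prices below are hypothetical, and real ad exchanges run far more elaborate real-time-bidding protocols (such as OpenRTB) than this minimal second-price sketch.

```python
# Illustrative sketch of the adtech flow: advertisers (buy side) bid,
# the publisher (sell side) sets a floor price for its ad slot, and an
# intermediary matches them. All names and numbers are hypothetical.

def run_auction(bids, floor_price):
    """Pick the winning ad via a simple second-price auction: the
    highest bidder wins but pays the second-highest bid (or the
    publisher's floor price, whichever applies)."""
    # Only bids at or above the publisher's floor are eligible.
    eligible = {ad: bid for ad, bid in bids.items() if bid >= floor_price}
    if not eligible:
        return None, 0.0
    ranked = sorted(eligible.items(), key=lambda kv: kv[1], reverse=True)
    winner, _top_bid = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else floor_price
    return winner, price

# Buy side: advertisers bid for the user's attention.
bids = {"shoes_ad": 2.40, "travel_ad": 1.90, "books_ad": 0.80}
# Sell side: the publisher's minimum price for the slot.
winner, price = run_auction(bids, floor_price=1.00)
print(winner, price)  # shoes_ad wins, paying the second-highest bid (1.90)
```

The intermediary’s neutrality matters precisely here: whoever runs `run_auction` decides which bids compete and on what terms, which is why dominance on all three sides of the market raises self-preferencing concerns.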

Alberto Bacchiega, director of Digital Platforms at the Commission’s Directorate-General for Competition, explains it well in the video ‘Statement of Objections to Google Over Abusive Practices in Online Adtech’.


Digital policy roundup (13–19 June)
// AI GOVERNANCE //

EU Parliament advances AI Act, trilogues start

All eyes were on the European Parliament last week in anticipation of the lawmakers’ crucial vote on the EU’s proposed AI Act. Receiving a resounding 499 votes in favour, with 28 against, and 93 abstentions, the Parliament’s text made it through plenary and will now be used as a foundation for negotiations with the EU Council.

The trilogues – tripartite negotiations among the EU’s lawmaking institutions – have already started.

Why is it relevant? For a second, we thought there might be hiccups. But there was too much at stake for European lawmakers to jeopardise the process. So there’s one less hurdle for the AI Act to reach its destination: the coming into effect of new rules that address the risks from AI systems based on their risk level. Yet, it won’t be smooth sailing: biometric-related risks will be a major bone of contention.

Here’s where we are now: Parliament has approved its draft text in plenary, advancing the AI Act to the trilogue talks (the dotted red line). The EU Council’s negotiating text was approved in December. Source: Based on a diagram from artificialintelligenceact.eu

UN Secretary-General calls on the international community to act now on digital technologies

The recent warnings on AI’s threat to humanity did not fall on deaf ears. Last week, UN Secretary-General António Guterres urged governments to heed these warnings. ‘Alarm bells over the latest form of artificial intelligence – generative AI – are deafening. And they are loudest from the developers who designed it. These scientists and experts have called on the world to act, declaring AI an existential threat to humanity on a par with the risk of nuclear war. We must take those warnings seriously.’




// ANTITRUST //

US judge temporarily blocks Microsoft’s Activision acquisition

It’s not just Google having antitrust issues of late. Microsoft’s takeover of video game maker Activision Blizzard has suffered a setback after a US judge granted the Federal Trade Commission’s (FTC) request to temporarily block the acquisition. The judge has scheduled a two-day evidentiary hearing for this week on the FTC’s request for a preliminary injunction.  

The FTC is arguing that the transaction would give Microsoft’s Xbox video game console exclusive access to Activision games, leaving Nintendo and Sony Group’s PlayStation out in the cold.

Why is it relevant? First, without a court order, Microsoft could have closed the USD 69 billion (EUR 63 billion) deal as early as last week. The company will now have to wait for this week’s hearing. Second, the merger faces a legal battle in August, when an FTC judge is set to hear the case. 

The FTC’s actions stand in contrast with the European Commission’s approval of the merger in May. No doubt the UK’s Competition and Markets Authority, which blocked the deal citing concerns over its potential impact on competition in the video game industry, is watching closely.


// CYBERSECURITY //

Amid soaring tensions, US official warns of cyber sabotage from China

Chinese hackers are all but sure to disrupt US critical infrastructures, such as pipelines and railways, in the event of a conflict with the USA, a senior US cybersecurity official has warned. Tensions between the two countries have soared. 

The Director of the Cybersecurity and Infrastructure Security Agency, Jen Easterly, emphasised during a recent event that China is investing significantly in its capability to sabotage US infrastructure. Given the possibility of Chinese hackers compromising current security measures (as shown by recent cyberattacks by the Chinese state-sponsored hacking group Volt Typhoon), Easterly warned that prioritising resilience and strengthening defences is paramount to being prepared.


// DATA PROTECTION //

Swedish data protection watchdog fines Spotify over GDPR breaches 

The Swedish Data Protection Authority has imposed a EUR 5 million (USD 5.5 million) fine on digital music service company Spotify for breaching several GDPR provisions a few years ago. According to the authority, Spotify failed to provide clear information to users regarding the purposes of its data processing, categories of personal data involved, and data storage periods, among other infringements.

As with many European legal actions concerning data protection, the complaint against Spotify was filed by Austrian non-profit NOYB, which also took the Swedish Data Protection Authority to court for initially refusing to investigate the case.

Why is it relevant? The ruling serves as a reminder for organisations operating in the EU to provide clear and transparent information about their data processing practices (and for data protection authorities to investigate every complaint).


The week ahead (19–26 June)

19–21 June: The 2023 edition of Europe’s foremost regional internet governance gathering – EuroDIG – is underway in Finland and online. The theme – Internet in troubled times: Risks, resilience, hope. We’ll use our latest AI tool, DiploGPT, to draft reports that reflect the discussions throughout the meeting. (By the way, here’s DiploGPT in action).

19 June–14 July: The four-week 53rd session of the Human Rights Council also starts today in Geneva and online. What to watch for:

  • 22 June: A discussion on the report Digital Innovation, Technologies, and the Right to Health (A/HRC/53/65)
  • 6 July: A discussion on the report on the relationship between human rights and technical standard-setting processes for new and emerging digital technologies and the practical application of the Guiding Principles on Business and Human Rights (A/HRC/53/42)
  • 3 July: A panel discussion on the role of digital, media, and information literacy in the promotion and enjoyment of the right to freedom of opinion and expression (HRC res. 50/15)

Dates may change. Consult the agenda and the latest programme of work. Refer also to the Universal Rights Group’s The Inside Track covering HRC53.

20–21 June: The Ad Hoc Committee on Cybercrime, tasked with advancing a new cybercrime convention, is holding the fifth intersessional stakeholder consultation in Vienna and online.

21–22 June: The annual Cybersec Forum and Expo features discussions among policymakers and the industry on cybersecurity and resilience, across four streams: state, defence, business, and future technologies. Takes place in Poland, onsite only.

23 June: It’s the last day to propose a session for UNCTAD’s eWeek (known until recently as eCommerce Week). It will take place in December in Geneva and online.

For more events, bookmark the Digital Watch Observatory’s calendar of global policy events.


#ReadingCorner

Do we trust algorithms to choose our news?

Not really. According to the Reuters Institute’s annual Digital News Report 2023, which surveyed around 94,000 adults across 46 markets, people are sceptical of algorithms selecting news on the basis of what their friends have read (19% agree, 42% disagree) or on the basis of what they themselves have read or watched in the past (30% agree, with equal numbers disagreeing).

How about the sources we choose to stay informed? Facebook, which was once dominant as a primary network for news, has been surpassed by rivals such as YouTube and TikTok. Read the full report.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation

Digital Watch newsletter – Issue 80 – June 2023

Trends

AI is the buzzword

In the spirit of the cover illustration depicting the enormous stakes AI holds for humanity’s future, a key question arises: Who holds the cards? Is it mere coincidence, the divine, or vested interests?

In May, AI was at the forefront of global discussions and media coverage, and featured on the agendas of meetings and parliamentary debates. Why all the hype?

First, stark warnings were issued that AI threatens the very survival of humanity.

Second, warnings about the risks inherent in AI are typically paired with calls to regulate its future development.

In an entirely new dynamic, companies themselves want to be regulated. Sam Altman, CEO of OpenAI, underlined the crucial role of governments in regulating AI and advocated the creation of a governmental or global AI agency to oversee the technology. This regulatory body would require companies to obtain licences before developing powerful AI models or operating data centres that facilitate AI development. Developers would thus be required to comply with safety rules and to establish consensus on the standards and risks that must be managed.

In parallel, Microsoft published a comprehensive blueprint for AI governance. In its foreword, Microsoft President Brad Smith also argues for the creation of a new government agency to enforce new AI rules.

Third, governments in developed countries are receptive to the idea of regulating AI’s future deployment.

Fourth, a growing number of voices argue that regulation, backed by the spectre of an existential threat, aims to block open-source AI development and hand its power to a small group of leaders, chiefly OpenAI/Microsoft and Google.

Whatever the motivation behind regulating AI, a few recurring themes can be identified: privacy violations, bias, the proliferation of scams, disinformation, and the protection of intellectual property, among others. However, not all regulators share the same views. Here is a sample of what jurisdictions around the world said in May 2023 about their willingness to regulate AI, and the approaches they propose.

The EU. The world’s first AI regulation is, unsurprisingly, being drafted by the EU’s regulatory juggernaut. It takes a risk-based approach to AI, imposing obligations on AI providers and users according to the level of risk posed by AI systems. It also introduces a tiered procedure for regulating general-purpose AI, as well as foundation and generative AI models. The draft legislation must be approved by the Parliament in plenary, which is expected during the 12–15 June session. Negotiations with the Council on the final shape of the law can then begin.

The USA. US government officials met with the CEOs of Alphabet, Anthropic, Microsoft, and OpenAI, and raised three key points: the transparency, evaluation, and security of AI systems. The White House and leading AI developers will work together to evaluate generative AI systems for potential flaws and vulnerabilities, such as confabulations, privacy violations, and bias. The USA is also assessing AI’s impact on the workforce, education, and users, as well as the risks of biometric data misuse.

The UK. Another government set to work with industry: Prime Minister Rishi Sunak met with the CEOs of OpenAI, Google DeepMind, and Anthropic to discuss the risks AI can pose, such as disinformation, national security, and existential threats. The CEOs agreed to work closely with the UK’s Foundation Model Taskforce to advance AI safety. The UK is also focusing on AI-related electoral risks and on the impact of AI systems’ development on competition and consumer protection. The UK will apparently retain its sector-based approach to AI, with no general AI regulation planned.

China. The Cyberspace Administration of China (CAC) voiced concerns about advanced technologies such as generative AI, noting that they could seriously challenge governance, regulation, and the labour market. The country also called for improved AI security governance. In April, the CAC proposed measures for regulating generative AI services, which specify that providers of such services must ensure their content aligns with China’s core values. Prohibited content includes discrimination, false information, and infringement of intellectual property rights (IPR). Tools used in generative AI services must undergo a security assessment before launch. The measures were open for comment until 2 June, which means we will soon see the outcome.

Australia. It is worried about AI-related risks, such as fakes, disinformation, incitement to self-harm, and algorithmic abuse. The country is currently examining whether to support the development of responsible AI through voluntary approaches, such as tools, frameworks, and principles, or through enforceable regulatory approaches, such as laws and mandatory standards. 

South Korea. The country’s AI Act is just one step away from a final vote in the National Assembly. It will allow AI development without prior government approval, classify high-risk AI and establish trustworthiness standards, support innovation in the AI industry, set ethical guidelines, and create a basic AI plan and an AI committee overseen by the Prime Minister. The government also announced that it will issue new guidelines and standards for copyright in AI-generated content by September 2023.

Japan. The Japanese government aims to promote and strengthen national capabilities for developing generative AI while tackling AI-related risks, such as copyright infringement, disclosure of confidential information, false information, and cyberattacks, among others.

Italy. Italy temporarily banned ChatGPT over GDPR violations in March. ChatGPT is back in Italy after OpenAI revised its privacy disclosures and controls, but Garante, Italy’s data protection authority, is stepping up its scrutiny of AI systems to ensure they comply with privacy laws.

France. The Commission nationale de l’informatique et des libertés (CNIL) launched an AI action plan to promote a framework for developing generative AI that respects personal data protection and human rights. The framework rests on four pillars: (a) understanding AI’s impact on fairness, transparency, data protection, bias, and security; (b) developing privacy-friendly AI through education and guidelines; (c) working with AI innovators to ensure data protection compliance; and (d) auditing and monitoring AI systems to safeguard individuals’ rights, notably regarding surveillance, fraud, and complaints.

India. The government is considering a regulatory framework for AI-based platforms owing to concerns such as intellectual property rights, copyright, and algorithmic bias, but it seeks to do so in collaboration with other countries.

International efforts. Ahead of the EU’s AI Act, the European Commission and Google plan to join forces ‘with all AI developers’ to draw up a voluntary AI pact. OpenAI’s Mr Altman is also due to meet EU officials about the pact.

The EU and the USA will jointly establish an AI code of conduct to strengthen public trust in the technology. The voluntary code ‘would be open to all like-minded countries’, said US Secretary of State Antony Blinken.

In addition, the G7 agreed to launch a dialogue on generative AI – including issues such as governance, disinformation, and copyright – in cooperation with the Organisation for Economic Co-operation and Development (OECD) and the Global Partnership on AI (GPAI). Ministers will examine AI under the ‘Hiroshima AI process’ and report on the results by the end of 2023. G7 leaders also called for the development and adoption of technical standards to ensure trustworthy AI.

So who holds the cards? It is not yet clear. We hope citizens will have the final say. Regulatory efforts aside, one way to ensure that individuals retain control of their knowledge, even when it is codified by AI, is bottom-up AI. This would mitigate the risk of centralisation of power inherent in large generative AI platforms. Indeed, bottom-up AI is generally based on an open and transparent approach that can mitigate most of the safety and security risks associated with centralised AI platforms. Many initiatives, including Diplo’s AI development strategies, have shown that bottom-up AI is technically feasible and economically viable. There are many reasons to embrace bottom-up AI as a practical way of fostering a new societal operating system founded on the centrality, dignity, free will, and creative potential of human beings.

Barometer

The digital policy developments that made headlines

The digital policy landscape changes daily, so here are the main developments from May. Each update is covered in more detail on the Digital Watch observatory.

On the rise

Global digital governance architecture

World Telecommunication and Information Society Day was celebrated on 17 May. UN representatives called for bridging the digital divide, supporting digital public goods, and establishing a Global Digital Compact (GDC).

The fourth EU–US ministerial meeting of the Trade and Technology Council (TTC) covered AI risks, content regulation, digital identities, semiconductors, quantum technologies, and connectivity projects.


Neutral

Sustainable development

According to a GSMA report, closing the digital gender gap by 2030 would require an additional 100 million women to adopt mobile internet every year.

The European Commission and WHO launched a landmark digital health initiative to establish a comprehensive global network for digital health certification.

Papua New Guinea rolled out a digital ID card management platform, while the Maldives introduced a digital ID mobile app to simplify access to government services. Evrotrust’s eID scheme became Bulgaria’s official digital identification system.


On the rise

Security

A Chinese report claims to have identified five methods used by the CIA to launch colour revolutions abroad and nine methods used as weapons for cyberattacks.

The Five Eyes cyber agencies attributed cyberattacks on US critical infrastructure to the Chinese state-backed hacking group Volt Typhoon, which China denied. The FBI disrupted a Russian cyberespionage operation dubbed ‘Snake’. The governments of Colombia, Senegal, Italy, and the territorial collectivity of Martinique suffered cyberattacks.

The USA and South Korea issued a joint advisory warning that North Korea uses social engineering tactics in its cyberattacks.

NATO warned of a potential Russian threat to internet cables and gas pipelines in Europe and North America.


Neutral

Infrastructure

The Body of European Regulators for Electronic Communications (BEREC) and most EU countries oppose pressure from telecom providers for Big Tech companies to contribute to the cost of rolling out 5G and broadband in Europe.

Tanzania signed agreements to extend telecom services to 8.5 million people in rural areas.


Neutral

E-commerce and the internet economy

The European Commission approved Microsoft’s acquisition of Activision Blizzard on condition that Microsoft’s licences allow consumers to use any streaming service.


On the rise

Digital rights

South Korea proposed amending its Personal Information Protection Act to strengthen consent requirements, unify online/offline data processing standards, and establish criteria for assessing violations.

The 2023 World Press Freedom Index reveals that journalism is threatened by the fake content industry and the rapid development of AI.

Internet disruptions were reported in Pakistan following the arrest of the former prime minister, and in Sudan amid protests over the sentencing of an opposition leader, while social media was restricted in Guinea following protests.


On the rise

Content policy

The US Supreme Court’s rulings in Gonzalez v. Google, LLC and Twitter, Inc. v. Taamneh upheld Section 230 protections for online platforms.

Google and Meta threatened to block links to Canadian news sites if a bill requiring internet platforms to pay publishers for news content is passed.

Austria banned the use of TikTok on federal civil servants’ work phones.

The Digital Public Goods Alliance (DPGA) and UNDP announced nine innovative open-source solutions to address the global information crisis. The EU called for clear labelling of AI-generated content to combat disinformation. Although Twitter has withdrawn from the code of practice on disinformation, it must still comply with the Digital Services Act when operating in the EU.


neutral

Jurisdiction and legal issues

Apple is under investigation in France following complaints that it intentionally makes its devices obsolete to force users to buy new ones.

Meta was fined EUR 1.2 billion in Ireland for mishandling user data and for continuing to transfer data to the USA in violation of a ruling of the Court of Justice of the European Union.


increasing

Technologies

A Chinese representative at the WTO criticised US subsidies for the semiconductor industry, calling them an attempt to hinder China's technological progress. South Korea asked the USA to review its rule barring China and Russia from using US funds for chip manufacturing and research.

The USA is considering restrictions on investments in Chinese chips, AI, and quantum computing to curb flows of capital and expertise.

Australia has published a new national quantum strategy. China has launched a quantum computing cloud platform for researchers and the public.

In brief

The UN Secretary-General's policy brief on the Global Digital Compact

The UN Secretary-General has issued a policy brief with suggestions on how a Global Digital Compact (GDC) could help advance digital cooperation. The GDC is to be agreed at the Summit of the Future in 2024 and is expected to 'outline shared principles for an open, free and secure digital future for all'. Here is a summary of the brief's main points.

The brief highlights areas in which 'the need for multistakeholder digital cooperation is urgent': closing the digital divide and advancing the sustainable development goals (SDGs), making the online space open and safe for everyone, and governing AI for humanity. It also proposes objectives and actions for advancing digital cooperation, structured around eight themes proposed to be covered by the GDC.

Digital connectivity and capacity building. The goal is to close the digital divide and empower people to participate fully in the digital economy. Proposed actions include setting universal connectivity targets and improving public digital literacy education.

Digital cooperation to achieve the sustainable development goals. The objectives involve targeted investments in digital infrastructure and services, ensuring representative and interoperable data, and setting globally harmonised digital sustainability standards. Proposed actions include designing safe and inclusive digital infrastructure, fostering open and accessible data ecosystems, and developing a common blueprint for digital transformation.

Upholding human rights. This involves placing human rights at the centre of the digital future, tackling the gender digital divide, and protecting workers' rights. A key proposed action is the creation of a digital human rights advisory mechanism under the UN Office of the High Commissioner for Human Rights.

An inclusive, open, secure, and shared internet. Objectives include preserving the free and shared nature of the internet and reinforcing accountable multistakeholder governance. Proposed actions include commitments by governments to refrain from blanket internet shutdowns and disruptions of critical infrastructure.

Digital trust and security. Objectives range from strengthening multistakeholder cooperation to developing norms, guidelines, and principles for the responsible use of digital technologies. Proposed actions include creating common standards and industry codes of conduct to address harmful content on digital platforms.

Data protection and empowerment. Objectives include governing data for the benefit of all, empowering individuals to control their personal data, and establishing interoperable standards for data quality. Proposed actions include encouraging countries to adopt a declaration on data rights and seeking convergence on data governance principles through a Global Data Compact.

Agile governance of AI and other emerging technologies. The objectives are to ensure transparency, reliability, safety, and human control in the design and use of AI, and to prioritise transparency, fairness, and accountability in AI governance. Proposed actions range from establishing a high-level advisory body on AI to building regulatory capacity in the public sector.

Global digital commons. Objectives include inclusive digital cooperation, sustained exchanges across states and sectors, and responsible development of technologies for sustainable development and empowerment.

The policy brief proposes numerous implementation mechanisms. Most notable is an annual Digital Cooperation Forum (DCF), convened by the Secretary-General, to facilitate collaboration across digital multistakeholder frameworks and reduce duplication, promote cross-border learning in digital governance, and identify policy solutions for emerging digital challenges and governance gaps. The brief also notes that 'the success of the GDC will depend on its implementation' at the national, regional, and sectoral levels, supported by platforms such as the Internet Governance Forum (IGF) and the World Summit on the Information Society (WSIS) Forum. It further suggests establishing a dedicated trust fund to finance a digital cooperation fellowship programme to strengthen the participation of diverse stakeholders.

Geneva

International Geneva policy update

Numerous policy discussions take place in Geneva every month. Here's what happened in May.

Intergovernmental Group of Experts on E-commerce and the Digital Economy, sixth session | 10–12 May

The main goal of this intergovernmental group of experts is to strengthen UNCTAD's work on information and communications technologies, e-commerce, and the digital economy, so that developing countries can participate in, and benefit from, the evolving digital economy. The group also works to reduce the digital divide and promote the development of inclusive knowledge societies. The sixth session focused on two main agenda items: harnessing data for the 2030 Agenda for Sustainable Development, and the working group on measuring e-commerce and the digital economy.

Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons systems (LAWS), second 2023 session | 15–19 May

The second session of the GGE on LAWS convened in Geneva to 'intensify the consideration of proposals and elaborate, by consensus, possible measures' within the framework of the Convention on Certain Conventional Weapons (CCW), drawing on legal, military, and technological expertise.

In the advance version of its final report (CCW/GGE.1/2023/2), the group of experts concluded that, in characterising weapons systems based on emerging technologies in the area of LAWS, potential future developments of these technologies must be taken into account. The group also affirmed that states must take particular care to ensure compliance with international humanitarian law throughout the life cycle of such weapons systems. States should limit the types of targets and the duration and scope of the operations in which the weapons systems may engage, and adequate training must be provided to human operators. If a weapons system based on technologies in the area of LAWS cannot comply with international law, it must not be deployed.

The 76th World Health Assembly | 21–30 May

The 76th World Health Assembly (WHA) brought delegates from its 194 member states to Geneva to discuss the organisation's priorities and policies under the theme 'WHO at 75: Saving lives, driving health for all'. A series of roundtables allowed delegates, partner agencies, civil society representatives, and WHO experts to debate current and future public health issues of global importance. On 23 May, Committee B considered progress reports (A76/37) highlighting the implementation of the 'global strategies on digital health', as agreed at the 73rd World Health Assembly. Since the adoption of these strategies in 2020, the WHA secretariat, in collaboration with development partners and other UN agencies, has trained more than 1,600 officials in over 100 member states on digital health and AI. The secretariat has also launched numerous initiatives for knowledge dissemination and country-level developments related to digital health strategies. From 2023 to 2025, the secretariat will continue to facilitate the coordinated actions set out in the global strategies while prioritising the needs of member states.

Upcoming

The main digital policy events in June

5–8 June 2023 | RightsCon (San José, Costa Rica, and online)

The 12th edition of RightsCon was set to address global developments related to digital rights in areas including: access and inclusion; AI; business, labour, and trade; conflict and humanitarian action; content governance and disinformation; cyber norms and encryption; data protection; digital security for communities; emerging technologies; media freedom; futures, fictions, and creativity; governance, politics, and elections; human rights-centred design; justice, litigation, and documentation; online hate and violence; philanthropy and organisational development; privacy and surveillance; shutdowns and censorship; and tactics for activists.

12–15 June 2023 | ICANN77 Policy Forum (Washington, D.C., USA)

The Policy Forum is the second meeting in the annual cycle of three. It focuses on the policy development work of the supporting organisations and advisory committees, as well as on regional outreach activities. ICANN aims to ensure an inclusive dialogue that offers everyone an equal opportunity to engage on important policy issues.

13 June 2023 | Swiss Internet Governance Forum 2023 (Bern, Switzerland, and online)

This one-day event focused on topics such as the use and regulation of AI (particularly in the context of education), the protection of fundamental rights in the digital age, responsible data management, platform influence, democratic practices, the responsible use of new technologies, internet governance, and the impact of digitalisation on geopolitics.

15–16 June 2023 | Digital Assembly 2023 (Arlanda, Sweden, and online)

Organised by the European Commission and the Swedish Presidency of the Council of the European Union, the assembly was to be held under the theme 'A digital, open and secure Europe'. The conference programme was to include five plenary sessions, six breakout sessions, and three side events. The main discussion topics were to be digital innovation, cybersecurity, digital infrastructure, digital transformation, AI, and quantum computing.

19–21 June 2023 | EuroDIG 2023 (Tampere, Finland, and online)

EuroDIG 2023 will be held under the overarching theme Internet in troubled times: Risks, resilience, hope. Alongside the conference, EuroDIG hosts YOUthDIG, an annual pre-event that encourages the active participation of young people (aged 18–30) in internet governance. The GIP is once again partnering with EuroDIG to provide updates and reports from the conference using DiploGPT.

DiploGPT reported from the UN Security Council meeting

In May, Diplo used AI to report from the UN Security Council session Futureproofing trust for sustaining peace. DiploGPT produced an automated report comprising a summary, an analysis of individual submissions, and answers to questions posed by the meeting's chair. DiploGPT combines various AI algorithms and tools tailored to the needs of the UN and of diplomatic communications.

DW Weekly #115 – 12 June 2023


Dear all,

The USA and the UK signed the Atlantic Declaration to strengthen their economic, technological, commercial and trade relations. The EU’s AI Act might be in jeopardy. Meta is in trouble with the EU over content moderation, namely failure to remove child sexual abuse material from Instagram, and Google has published its Secure AI Framework.

Let’s get started.
Andrijana and the Digital Watch team


// HIGHLIGHT //

The US-UK Atlantic Declaration signed

The UK and the USA signed the Atlantic Declaration for a Twenty-First Century US-UK Economic Partnership, touted as a first of its kind as it spans their economic, technological, commercial and trade relations. Here’s what’s in it regarding digital policy (not everything about it is) and how it impacts the USA, the UK… and the EU.

The first pillar focuses on ensuring US-UK leadership in critical and emerging technologies. Under this pillar, the two nations have established a range of collaborative activities:

  • They will prioritise research and development efforts, particularly in quantum technologies, by facilitating increased mobility of researchers and students and fostering workforce development to promote knowledge exchange. 
  • They will work together to strengthen their positions in cutting-edge telecommunications by collaborating on 5G and 6G solutions, accelerating the adoption of Open RAN, and enhancing supply chain diversity and resilience. 
  • Deepening cooperation in synthetic biology is also a priority, aiming to drive joint research, develop novel applications, and enhance economic security through improved biomanufacturing pathways. 
  • Investigators will conduct collaborative research in advanced semiconductor technologies, such as advanced materials and compound semiconductors.
  • Additionally, the countries will accelerate cooperation on AI, with a specific emphasis on safety and responsibility. 

This will involve deepening public-private dialogue, mobilising private capital towards strategic technologies, and establishing a US-UK Strategic Technologies Investor Council within the next twelve months. The council will include investors and national security experts who will identify funding gaps and facilitate private investment in critical and emerging technologies. Lastly, efforts will be made to improve talent flows between the USA and the UK, ensuring a robust exchange of skilled professionals.

What this means for digital policy: The UK and US investments in quantum technology are dwarfed by, for instance, the USD 15.2 billion public investment in quantum technology announced by the Chinese government. The UK's investments in semiconductors–USD 1.2 billion–are modest. On the other hand, US companies have pledged nearly USD 200 billion. In biotech, the UK is behind the USA. The UK currently, by its own estimate, ranks 3rd in AI behind the USA and China. Overall, China has a stunning lead in research in 37 out of 44 critical and emerging technologies, with the USA often second-ranked. Perhaps this partnership will give both the UK and the USA a leg up.

The second pillar of the partnership centres on advancing cooperation on economic security and technology protection toolkits and supply chains. This involves addressing national security risks associated with some types of outbound investment and preventing their companies’ capital and expertise from fueling technological advances that could enhance the military and intelligence capabilities of countries of concern. Additionally, the countries will work towards flexible and coordinated export controls related to sensitive technologies, enabling the complementarity of their respective toolkits. Strengthening their partnership across sanctions strategy, design, targeting, implementation, and enforcement is another objective. Lastly, the countries aim to reduce vulnerabilities across critical technology supply chains by sharing analysis, developing channels for coordination and consultation during disruptions and crises, and ensuring resilience.

What this means for digital policy:  Judging by previous comments from Paul Rosen, the US Treasury’s investment security chief, this is about preventing know-how and investments in advanced semiconductors, AI, and quantum computing from reaching China, which would allegedly use it to bolster military intelligence capabilities. The UK, which already shares a special relationship with the USA in intelligence, just might be joining the US-led export controls on semiconductors. Reminder: Chip giant Arm is headquartered in the UK.

Pillar 3 of the partnership focuses on an inclusive and responsible digital transformation. The countries aim to enhance cooperation on data by establishing a US-UK Data Bridge, ensuring data privacy protections and supporting Global Cross-Border Privacy Rules (CBPR) Forum and the OECD’s Declaration on Government Access to Personal Data Held by Private Sector Entities.

The countries will accelerate cooperation on AI, and the USA welcomed the planned launch of a Global Summit on AI Safety by the UK Prime Minister in the autumn of 2023. Collaboration on Privacy Enhancing Technologies (PETs) is also planned to enable responsible AI models and protect privacy while leveraging data for economic and societal benefits.

What this means for digital policy: The establishment of a US-UK data bridge was first agreed upon in January, at the Inaugural Meeting of the US-UK Comprehensive Dialogue on Technology and Data. Now we know that the data bridge will support a UK extension to the EU-US Data Privacy Framework

Why is it relevant? First, interestingly, there was no mention of the EU in this equation, although the declaration impacts the EU and US-UK-EU relations. Second, the USA and the UK are stressing collaboration on AI. It's clear that AI is a priority for both. Third, the USA is looking to Britain to help lead efforts on AI safety and regulation, hoping that AI companies find more fertile ground in the UK than in the EU's stricter environment. Fourth, the UK wants to put its EU membership firmly behind it. Sunak stated: 'I know some people have wondered what kind of partner Britain would be after we left the EU. […] And we now have the freedom to regulate the new technologies that will shape our economic future, like AI, more quickly and flexibly.'

Beyond what they said, we’ll see how this impacts what they did not mention: Microsoft’s Activision Blizzard takeover.


Digital policy roundup (6–12 June)
// AI GOVERNANCE //

Is the EU’s AI Act in jeopardy?

The political deal behind the AI Act may be crumbling, and this might affect the Parliament’s endorsement of the text. 

In April, a deal struck between the four main political groups at the European Parliament stipulated they would not table alternative amendments to the AI Act. However, the European People’s Party (EPP) was given flexibility on the issue of remote biometric identification (RBI). On 7 June, the final deadline for amendments, the EPP tabled a separate amendment on RBI. There are two problems with that.  

  1. Other groups claim that the EPP broke the deal, and they might feel legitimised to vote for amendments that were tabled outside of the deal. If they do, there’s no telling how the Parliament’s plenary vote on 14 June will go. 
  2. Not everyone likes what's in the actual amendment. The EPP's proposed text stipulates that member states may authorise the use of real-time RBI systems in public spaces, subject to prior judicial authorisation, for '(1) the targeted search of missing persons, including children; (2) the prevention of a terrorist attack; (3) the identification of perpetrators of criminal offences punishable in the Member State concerned for a maximum period of at least three years.' MEPs from four political groups (liberals, socialists, greens, and the left) firmly oppose the EPP's amendment on biometric identification. They are calling for a ban on such systems, claiming that AI systems that perform behavioural analysis are prone to error, falsely flag law-abiding citizens, and are discriminatory and ineffective for law enforcement.

Why is it relevant? If the Parliament doesn’t endorse the text, it will slow down the world’s first playbook on AI–it might take longer than the projected end of 2023 to reach a political deal among EU Institutions. This threatens the EU’s plans to be a leader in AI rule-making, and plenty of others are willing to step up.


// CHILD SAFETY ONLINE //

Instagram’s algorithms recommend child-sex content to paedophiles, research finds

An investigation by the Wall Street Journal, Stanford University, and the University of Massachusetts at Amherst uncovered that Instagram has been hosting large networks of accounts posting child sexual abuse material (CSAM). The platform's recommendation algorithms play a key role in making Instagram the most valuable platform for sellers of self-generated CSAM (SG-CSAM): 'Instagram connects paedophiles and guides them to content sellers via recommendation systems that excel at linking those who share niche interests, the Journal and the academic researchers found.' Even viewing one such account led to new CSAM-selling accounts being recommended to the user, thus helping to build the network.

This is where it gets worse. At the time of the research, Instagram enabled searching for explicit hashtags such as #pedowhore and #preteensex. When researchers searched for a paedophilia-related hashtag, a pop-up informed them: 'These results may contain images of child sexual abuse'. Below the text, two options were given: 'Get resources' and 'See results anyway'. Instagram has since removed the option to view the content, but that doesn't stop us from wondering why the option was available in the first place.

Why is it relevant? First, it tells us something about the speed at which the platform reacted to reports of CSAM. Perhaps that will now change, since the platform said it will form a task force to investigate the problem.

Second, it attracted the ire of European Commissioner Thierry Breton. 


Third, Meta will have to demonstrate the measures it plans to take to comply with the EU's Digital Services Act (DSA) after 25 August or face heavy sanctions, Breton said. Meta, designated a Very Large Online Platform (VLOP), has stringent obligations under the DSA, and fines for breaches can go as high as 6% of a company's global turnover. While Breton didn't put Twitter on blast, the company has also been designated a VLOP, meaning it also runs the risk of being fined.

(And finally, it seems the media did not get the memo: ‘child sexual abuse material’ is the preferred terminology.)




// PRIVACY //

French Senate approves surveillance of suspects using cameras and microphones

The French Senate approved a contentious provision in a justice bill that allows the remote activation of computers and connected devices without the owner's knowledge. The provision serves two purposes: (1) real-time geolocation for specific offences, and (2) the activation of microphones and cameras to capture audio and images, which would be limited to cases of terrorism, delinquency, and organised crime. The Senate also adopted an amendment limiting the use of geolocation to the investigation of offences punishable by at least ten years' imprisonment. However, the implementation of this provision will still require judicial approval.

Why is it relevant? Surveillance tactics are never a favourite with privacy advocates, who typically argue that such privacy breaches cannot be justified by national security concerns. In this instance, the safeguards are unclear, as are mechanisms for redress.


// CYBERSECURITY //

Google introduces Secure AI Framework

Google has introduced its Secure AI Framework (SAIF), which aims to reduce overall risk when developing and deploying AI systems. It is based on six elements organisations should be mindful of:

  1. Expand strong security foundations to the AI ecosystem by leveraging secure-by-default infrastructure protections and scaling and adapting infrastructure protections as AI threats advance
  2. Bring AI into an organisation’s threat universe by extending detection and response to AI-related cyber incidents
  3. Automate defences to keep pace with existing and new threats, including harnessing the latest AI innovations to improve response efforts
  4. Harmonise platform-level controls to ensure consistent security of AI applications across the organisation
  5. Adapt controls to adjust mitigations and create faster feedback loops for AI deployment via reinforcement learning based on incidents and user feedback
  6. Contextualise AI-system risks in surrounding business processes by conducting end-to-end risk assessments on AI deployment

Google has committed to fostering industry support for SAIF, working directly with organisations to help them understand how to assess and mitigate AI security risks, sharing threat intelligence, expanding its bug hunter programs to incentivise research around AI safety and security and delivering secure AI offerings.

Why is it relevant? As more AI products are integrated into digital products, the security of the supply chain will benefit from secure-by-default AI products.


NATO to enhance military cyber defences in peacetime, integrate private sector capabilities

NATO member states are preparing to approve an expanded role for military cyber defenders during peacetime, as well as the permanent integration of private sector capabilities, revealed NATO’s assistant secretary general for emerging security challenges David van Weel. Furthermore, NATO plans to establish a mechanism to facilitate assistance among allies during crises when national response capabilities become overwhelmed.

The endorsement is expected at the upcoming Vilnius summit in Lithuania, scheduled for July.

Image source: NATO CCDCOE Twitter account

Why is it relevant? Van Weel stated: ‘We need to move beyond naming and shaming bad actors in response to isolated cyber incidents, and be clear what norms are being broken.’ The norms he referred to are agreed-upon norms of responsible state behaviour in cyberspace, confirmed in the reports of the GGEs and the first OEWG on ICTs. His remarks come just two weeks after UN member states met under the auspices of the OEWG in New York to discuss responsible state behaviour in cyberspace. We’ll have more on that towards the end of this week on the Digital Watch Observatory–keep an eye out.


China issues draft guidelines to tackle cyber violence 

China’s Supreme People’s Court, Supreme People’s Procuratorate, and Ministry of Public Security issued draft Guiding Opinions on Legally Punishing Cyber Violence and Crimes (Chinese). 

The guidelines propose the punishment of online defamation, insults, privacy violations, and offline nuisance behaviour, such as intercepting and insulting victims of cyber violence and their relatives and friends, causing disturbances, intimidating others, and destroying property. They also address using violent online methods for malicious marketing and hype, as well as protecting civil rights and identifying illegal acts. 

The guidelines also note that network service providers can be convicted and punished for the offence if they neglect their legal obligations to manage information network security regarding identified instances of cyber violence and fail to rectify the situation after being instructed by regulatory authorities to take corrective measures. This applies to cases where such neglect results in the widespread dissemination of illegal information or other serious consequences.

The draft is open for public comments until 25 June.


// CRYPTOCURRENCIES //

US SEC launches lawsuits against Binance and Coinbase

The world’s biggest cryptocurrency exchanges–Binance and Coinbase–were hit by a wave of lawsuits from the US Securities and Exchange Commission (SEC). 

Why is it relevant? Because down the line, these actors might leave the USA for greener pastures. Diplo’s Arvin Kamberi has more on that in the video below.


The week ahead (13–19 June)

13 June: The Swiss Internet Governance Forum 2023 will discuss the use and regulation of AI (especially in the context of education), protecting fundamental rights in the digital age, responsible data management, platform influence, democratic practices, responsible use of new technologies, internet governance, and the impact of digitalisation on geopolitics. Our Director of Digital Policy, Stephanie Borg Psaila, will speak at the session Digital governance and the multistakeholder approach in 2023.

14 June: The last two GDC thematic deep dives will focus on global digital commons and accelerating progress on the SDGs. The discussion on the global digital commons will explore principles, values, and ideas associated with this approach, while considering how the Global Digital Compact (GDC) can enhance the safety and inclusivity of the global ecosystem of digital public infrastructure and goods. The discussion on accelerating progress on the SDGs will examine the role of digital technology in achieving the SDGs and addressing future challenges, and the potential for generalising principles and approaches based on shared experiences. For more information on the GDC, visit our dedicated web page on the Digital Watch Observatory.

15–16 June: This year’s Digital Assembly will be held under the theme A digital, open and secure Europe, focusing on openness, competition, digitalisation, and cybersecurity. The assembly is organised by the European Commission and the Swedish Presidency of the Council of the EU.

19–21 June: The 2023 edition of Europe’s regional internet governance gathering, EuroDIG, will be themed Internet in troubled times: Risks, resilience, hope. The GIP will once again partner with EuroDIG to deliver messages and reports from the conference using DiploGPT. The reports and messages will be available on our dedicated Digital Watch page.

19 June–14 July: The 53rd session of the Human Rights Council (HRC) will feature a panel discussion on the role of digital, media, and information literacy in the promotion and enjoyment of the right to freedom of opinion and expression. The council will also consider the report on the relationship between human rights and technical standard-setting processes for new and emerging digital technologies and the practical application of the Guiding Principles on Business and Human Rights, as well as the report on Digital innovation, technologies, and the right to health.

For more events, bookmark the Digital Watch Observatory’s calendar of global policy events.


#ReadingCorner

The June issue of the Digital Watch Monthly newsletter is out! 

We asked: who holds the dice in the grand game of addressing AI for the future of humanity? Also featured: a brief summary of the UN Secretary-General’s policy brief with suggestions on how a Global Digital Compact (GDC) could help advance digital cooperation, May’s barometer of updates, and the leading global digital policy events ahead in June.


Andrijana Gavrilović – Author
Editor – Digital Watch; Head of Diplomatic & Policy Reporting, Diplo
Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation

Digital Watch newsletter – Issue 80 – June 2023

AI is the name of the game

In the spirit of the front page illustration of the grand game of addressing AI for the future of humanity, an essential question arises: Who holds the dice? Is it mere coincidence, the divine, or vested interests?


In May, AI dominated global discussions and media coverage, with AI on the agendas of meetings and parliamentary debates. What’s the hype?

First, there are very loud warnings that AI threatens the very survival of humanity. 

Second, the warning of existential risks is typically associated with a call to regulate future AI development. In a new dynamic, businesses are asking to be regulated. OpenAI CEO Sam Altman emphasised the crucial role of government in regulating AI and advocated for establishing a governmental or global AI agency to oversee the technology. This regulatory body would require companies to obtain licences before training powerful AI models or operating data centres facilitating AI development. Doing so would hold developers to safety standards and establish consensus on the standards and risks that require mitigation. In parallel, Microsoft has published a comprehensive blueprint for governing AI, with Microsoft President Brad Smith also advocating for creating a new government agency to enforce new AI rules in his foreword.

Third, governments from developed countries are responding positively to the idea of regulating the future development of AI.

Fourth, there are growing voices saying that regulation supported by an existential threat narrative aims to block open-source AI developments and concentrate AI power in the hands of just a few leaders, mainly OpenAI/Microsoft and Google.

Regardless of the motivation behind AI regulation, we can identify a few echoing topics for regulation: privacy violations, bias, the proliferation of scams, misinformation, and the protection of intellectual property, among others. However, not all regulators share the same focal points. Here’s a snapshot of what jurisdictions worldwide expressed in May 2023 regarding their desire to regulate AI and their proposed approaches.

The EU. The world’s first rulebook for AI is, unsurprisingly, being shaped by the regulatory behemoth that is the EU. The bloc is taking a risk-based approach to AI, establishing obligations for AI providers and users based on the level of risk posed by the AI systems. It also introduces a tiered approach for regulating general-purpose AI, and foundation and generative AI models. The draft rules need to be endorsed in the Parliament’s plenary, which is expected to happen during the 12–15 June session. Then, negotiations with the Council on the law’s final form can begin. 

The USA. US government officials met with Alphabet, Anthropic, Microsoft, and OpenAI CEOs and discussed three key areas: the transparency, evaluation, and security of AI systems. The White House and top AI developers will collaborate to evaluate generative AI systems for potential flaws and vulnerabilities, such as confabulations, jailbreaks, and biases. The USA is also evaluating AI’s impact on the workforce, education, consumers, and the risks of biometric data misuse.

The UK. Another government that will collaborate with the industry is the UK: Prime Minister Rishi Sunak has met with the CEOs of OpenAI, Google DeepMind, and Anthropic to discuss the risks AI can pose, such as disinformation, national security, and existential threats. The CEOs agreed to work closely with the UK’s Foundation Model Taskforce to advance AI safety. The UK also focuses on AI-related election risks and the impact of developing AI foundation models for competition and consumer protection. The UK will seemingly keep its sectoral approach to AI, with no general AI regulation planned.

China. The Cyberspace Administration of China (CAC) raised concerns over advanced technologies such as generative AI, noting that they could seriously challenge governance, regulation and the labour market. The country has also called for improving the security governance of AI. In April, the CAC proposed measures for regulating generative AI services, which specify that providers of such services must ensure that their content aligns with China’s core values. Prohibited content includes discrimination, false information, and infringement of intellectual property rights (IPR). Tools utilised in generative AI services must undergo a security assessment before launch. The measures were open for comments until 2 June, meaning we will see the outcome soon.

Australia. Australia is concerned with AI risks such as deepfakes, misinformation, disinformation, self-harm encouragement, and algorithmic bias. The country is currently seeking opinions on whether it should support the development of responsible AI through voluntary approaches, like tools, frameworks, and principles or enforceable regulatory approaches, like laws and mandatory standards. 

South Korea. The country’s AI Act is only a few steps away from the National Assembly’s final vote. It would allow AI development without government pre-approval, categorise high-risk AI and set trustworthiness standards, support innovation in the AI industry, establish ethical guidelines, and create a Basic Plan for AI and an AI Committee overseen by the prime minister. The government also announced it would create new guidelines and standards for copyrights of AI-generated content by September 2023.

Japan. The Japanese government aims to promote and strengthen domestic capabilities to develop generative AI while addressing AI risks such as copyright infringement, exposure of confidential information, false information, and cyberattacks, among other concerns.

Italy. Italy temporarily banned ChatGPT over GDPR violations in March. ChatGPT has returned to Italy after OpenAI revised its privacy disclosures and controls, but Garante, the data protection authority of Italy, is intensifying its scrutiny of AI systems for adherence to privacy laws.

France. French privacy watchdog CNIL launched an AI Action Plan to promote a framework for developing generative AI, which upholds personal data protection and human rights. The framework is based on four pillars: (a) understanding AI’s impact on fairness, transparency, data protection, bias, and security; (b) developing privacy-friendly AI through education and guidelines; (c) collaborating with AI innovators for data protection compliance; and (d) auditing and controlling AI systems to safeguard individuals’ rights, including addressing surveillance, fraud, and complaints.

India. The government is considering a regulatory framework for AI-enabled platforms due to concerns such as IPR, copyright, and algorithm bias, but is looking to do so in conjunction with other countries.

International efforts. Ahead of the EU’s planned AI Act, the European Commission and Google plan to join forces ‘with all AI developers’ to develop a voluntary AI pact. OpenAI’s Altman is also set to meet EU officials about the pact.

The EU and the USA will jointly prepare an AI code of conduct to foster public trust in the technology. The voluntary code ‘would be open to all like-minded countries,’ US Secretary of State Antony Blinken stated.

Additionally, the G7 has agreed to launch a dialogue on generative AI – including issues such as governance, disinformation, and copyright – in cooperation with the Organisation for Economic Co-operation and Development (OECD) and the Global Partnership on AI (GPAI). The ministers will take these discussions forward under the ‘Hiroshima AI process’ and report results by the end of 2023. The G7 leaders have also called for developing and adopting technical standards to ensure the trustworthiness of AI.

So, who holds the dice?
It’s not clear yet. We hope it will be citizens who hold the dice. Regulatory efforts aside, one way to ensure that individuals remain in charge of their knowledge, even when codified by AI, is through bottom-up AI. This would mitigate the risk of centralisation of power inherent in large generative AI platforms. In addition, bottom-up AI is typically based on an open-source and transparent approach that can mitigate most safety and security risks related to centralised AI platforms. Many initiatives, including Diplo’s own AI development strategy, have proven that bottom-up AI is technically feasible and economically viable. There are many reasons to adopt bottom-up AI as a practical way to foster a new societal operating system built around human centrality, dignity, free will, and the achievement of humans’ creative potential.

Dr Jovan Kurbalija, Director of DiploFoundation, explains why bottom-up AI is critical for our future.

Digital policy developments that made global headlines

The digital policy landscape changes daily, so here are all the main developments from May. There’s more detail in each update on the Digital Watch Observatory.        

Global digital governance architecture


World Telecommunication and Information Society Day was observed on 17 May with calls by UN officials to bridge the digital divide, support digital public goods, and establish a Global Digital Compact (GDC).
The Fourth EU-US ministerial meeting of the Trade and Technology Council (TTC) covered AI risks, content regulation, digital identities, semiconductors, quantum technologies, and connectivity projects.

Sustainable development


To bridge the digital gender gap by 2030, 100 million more women must embrace mobile internet annually, a GSMA report found.

The EU Commission and WHO launched a landmark digital health initiative to establish a comprehensive global network for digital health certification.
Papua New Guinea rolled out a platform for managing digital IDs. The Maldives introduced a digital ID mobile app for streamlined access to government services. Evrotrust’s eID program became Bulgaria’s official digital ID system.

 

Security


A Chinese report claims to have identified five methods the CIA uses to launch colour revolutions abroad and nine methods used as weapons for cyberattacks.

The Five Eyes cyber agencies attributed cyberattacks on US critical infrastructure to the Chinese state-sponsored hacking group Volt Typhoon, which China has denied. The FBI disrupted a Russian cyberespionage operation dubbed Snake. The governments of Colombia, Senegal, Italy and Martinique suffered cyberattacks.

The USA and South Korea issued a joint advisory warning that North Korea is using social engineering tactics in cyberattacks.
NATO has warned of a potential Russian threat to internet cables and gas pipelines in Europe or North America.

Infrastructure


The Body of European Regulators for Electronic Communications (BEREC) and the majority of EU countries are against a push by telecom providers to get Big Tech to contribute to the cost of the rollout of 5G and broadband in Europe.
Tanzania has signed agreements to extend telecommunications services to 8.5 million individuals in rural areas.

 


Digital rights


South Korea proposed changes to its Personal Information Protection Act to strengthen consent requirements, unify online/offline data processing standards, and establish criteria for assessing violations.

The 2023 World Press Freedom Index reveals that journalism is threatened by the fake content industry and rapid AI development.
Internet shutdowns were reported in Pakistan in the wake of the arrest of the former prime minister, and in Sudan amid protests over the sentencing of an opposition leader, while social media was restricted in Guinea over protests.

 

Content policy


US Supreme Court rulings in Gonzalez v. Google, LLC and Twitter, Inc. v. Taamneh maintained Section 230 protections for online platforms.

Google and Meta threatened to block links to Canadian news sites if a bill requiring internet platforms to pay publishers for their news is passed. 

Austria banned the use of TikTok on federal government officials’ work phones.
The Digital Public Goods Alliance (DPGA) and UNDP announced nine innovative open-source solutions to address the global information crisis. The EU called for clear labelling of AI-generated content to combat disinformation. While Twitter pulled out of the code of practice on tackling disinformation, it must still comply with the Digital Services Act when operating in the EU.

Jurisdiction and legal issues


Apple faces investigation in France over complaints that it intentionally causes its devices to become obsolete to compel users to purchase new ones. 
Meta was fined €1.2bn in Ireland for mishandling user data and its continued transfer of data to the USA in violation of an EU court ruling.

 

Technologies


A Chinese WTO representative has criticised the USA’s semiconductor industry subsidies, calling them an attempt to stymie China’s technological progress. South Korea asked the USA to review its rule barring China and Russia from using US funds for chip manufacturing and research.

The USA is considering investment restrictions on Chinese chips, AI, and quantum computing to curb the flow of capital and expertise. 
Australia has released a new National Quantum Strategy. China has launched a quantum computing cloud platform for researchers and the public.


UN Secretary-General’s policy brief for GDC

The UN Secretary-General has issued a policy brief with suggestions on how a Global Digital Compact (GDC) could help advance digital cooperation. The GDC is to be agreed upon in the context of the Summit of the Future in 2024 and is expected to ‘outline shared principles for an open, free and secure digital future for all’. Here is a summary of the brief’s main points.

The brief outlines areas where ‘the need for multistakeholder digital cooperation is urgent’: closing the digital divide and advancing SDGs, making the online space open and safe for everyone, and governing AI for humanity. It also suggests objectives and actions for advancing digital cooperation, structured around eight topics proposed to be covered by the GDC.

Digital connectivity and capacity building. The aim is to bridge the digital divide and empower individuals to participate fully in the digital economy. Proposed actions include setting universal connectivity targets and enhancing public education for digital literacy.

Digital cooperation for SDG progress. Objectives involve targeted investments in digital infrastructure and services, ensuring representative and interoperable data, and establishing globally harmonised digital sustainability standards. Proposed actions include defining safe and inclusive digital infrastructures, fostering open and accessible data ecosystems, and developing a common blueprint for digital transformation.

Upholding human rights. The focus is on placing human rights at the core of the digital future, addressing the gender digital divide, and protecting workers’ rights. A key proposed action is establishing a digital human rights advisory mechanism facilitated by the Office of the UN High Commissioner for Human Rights.

Inclusive, open, secure, and shared internet. Objectives include preserving the free and shared nature of the internet and reinforcing accountable multistakeholder governance. Proposed actions involve commitments from governments to avoid blanket internet shutdowns and disruptions to critical infrastructures.

Digital trust and security. Objectives range from strengthening multistakeholder cooperation to developing norms, guidelines, and principles for responsible digital technology use. Proposed actions include creating common standards and industry codes of conduct to address harmful content on digital platforms.

Data protection and empowerment. Objectives include governing data for the benefit of all, empowering individuals to control their personal data, and establishing interoperable standards for data quality. Proposed actions include encouraging countries to adopt a declaration on data rights and seeking convergence on principles for data governance through a Global Data Compact.

Agile governance of AI and emerging technologies. Objectives involve ensuring transparency, reliability, safety, and human control in AI design and use, and prioritising transparency, fairness, and accountability in AI governance. Proposed actions range from establishing a high-level advisory body for AI to building regulatory capacity in the public sector.

Global digital commons. Objectives include inclusive digital cooperation, sustained exchanges across states and sectors, and responsible development of technologies for sustainable development and empowerment.

Implementation mechanisms

The policy brief proposes numerous implementation mechanisms. The most notable is an annual Digital Cooperation Forum (DCF) to be convened by the Secretary-General to facilitate collaboration across digital multistakeholder frameworks and reduce duplication, promote cross-border learning in digital governance, and identify policy solutions for emerging digital challenges and governance gaps. The document further notes that ‘the success of a GDC will rest on its implementation’ at national, regional, and sectoral levels, supported by platforms like the Internet Governance Forum (IGF) and the World Summit on the Information Society Forum (WSIS). The brief suggests establishing a trust fund to sponsor a Digital Cooperation Fellowship Programme to enhance multistakeholder participation.

Read more about the Global Digital Compact.

Policy updates from International Geneva

Intergovernmental Group of Experts on E-commerce and the Digital Economy, sixth session | 10–12 May

The main objective of this intergovernmental group of experts is to enhance UNCTAD’s efforts in the fields of information and communications technologies, e-commerce, and the digital economy, and to empower developing nations to participate in and gain advantages from the ever-changing digital economy. The group also works to bridge the digital divide and promote the development of inclusive knowledge societies. The sixth session focused on two main agenda items: how to make data work for the 2030 Agenda for Sustainable Development, and the Working Group on Measuring E-commerce and the Digital Economy.


2023 Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons systems (LAWS), second session | 15–19 May

The second session of the GGE on LAWS convened in Geneva to ‘intensify the consideration of proposals and elaborate, by consensus, possible measures’ in the context of the Convention on Certain Conventional Weapons (CCW) while bringing in legal, military, and technological expertise.

In the advance version of the final report (CCW/GGE.1/2023/2), the GGE concluded that when characterising weapon systems built from emerging technologies in the area of LAWS, it is crucial to consider the potential future developments of these technologies. The group also affirmed that states must ensure compliance with international humanitarian law throughout the life cycle of such weapon systems. States should limit the types of targets the weapon systems can engage, as well as the duration and scope of the operations in which they are used, and adequate training must be given to human operators. In cases where a weapon system based on technologies in the area of LAWS cannot comply with international law, it must not be deployed.


The 76th World Health Assembly | 21–30 May

The 76th World Health Assembly (WHA) invited delegates of its 194 member states to Geneva to confer on the organisation’s priorities and policies under the theme ‘WHO at 75: Saving lives, driving health for all’. A series of roundtables took place where delegates, partner agencies, representatives of civil society, and WHO experts deliberated about current and future public health issues of global importance. On 23 May, Committee B specifically elaborated on the progress reports (A76/37) that highlighted the implementation of ‘global strategies on digital health’ as agreed at the 73rd WHA. Since the endorsement of the strategies in 2020, the WHA Secretariat, together with development partners and other UN agencies, has trained over 1,600 government officials in more than 100 member states in digital health and AI. The secretariat has also launched numerous initiatives for knowledge dissemination and national developments related to digital health strategies. From 2023 to 2025, the secretariat will continue facilitating coordinated actions set out in the global strategies while prioritising member states’ needs.

What to watch for: Global digital policy events in June

5–8 June | RightsCon (San José, Costa Rica and online)

The 12th annual RightsCon will discuss global developments related to digital rights across the following tracks: access and inclusion; AI; business, labour, and trade; conflict and humanitarian action; content governance and disinformation; cyber norms and encryption; data protection; digital security for communities; emerging tech; freedom of the media; futures, fictions, and creativity; governance, politics, and elections; human rights-centred design; justice, litigation, and documentation; online hate and violence; philanthropy and organisational development; privacy and surveillance; shutdowns and censorship; and tactics for activists.


12–15 June 2023 | ICANN 77 Policy Forum (Washington, DC, the USA)

The Policy Forum is the second meeting in the three-meeting annual cycle. The focus of this meeting is the policy development work of the Supporting Organizations and Advisory Committees and regional outreach activities. ICANN aims to ensure an inclusive dialogue that provides equal opportunities for all to engage on important policy matters.


13 June 2023  | The Swiss Internet Governance Forum 2023 (Bern, Switzerland and online)

This one-day event will focus on topics such as the use and regulation of AI, especially in the context of education; protecting fundamental rights in the digital age; responsible data management; platform influence; democratic practices; responsible use of new technologies; internet governance; and the impact of digitalisation on geopolitics.


15–16 June 2023 | Digital Assembly 2023 (Arlanda, Sweden and online)

Organised by the European Commission and the Swedish Presidency of the Council of the European Union, this assembly will be themed: A Digital, Open and Secure Europe. The conference programme includes five plenary sessions, six breakout sessions, and three side events. The main discussion topics will be digital innovation, cybersecurity, digital infrastructure, digital transformation, AI, and quantum computing.


19–21 June 2023 | EuroDIG 2023 (Tampere, Finland and online)

EuroDIG 2023 will be held under the overarching theme of Internet in troubled times: Risks, resilience, hope. In addition to the conference, EuroDIG hosts YOUthDIG, a yearly pre-event that fosters the active participation of young people (ages 18–30) in internet governance. The GIP will once again partner with EuroDIG to deliver updates and reports from the conference using DiploGPT.


DiploGPT reported from the UN Security Council meeting

In May, Diplo used AI to report from the UN Security Council session: Futureproofing trust for sustaining peace. DiploGPT provided automatic reporting that produced a summary report, an analysis of individual submissions, and answers to the questions posed by the chair of the meeting. DiploGPT combines various algorithms and AI tools customised to the needs of the UN and diplomatic communications.


The Digital Watch Observatory maintains a live calendar of upcoming and past events.


DW Weekly #114 – 5 June 2023


Dear readers,

It’s more AI governance this week, not in the form of binding rules, but rather voluntary codes of conduct – and lots of them. In other news, there’s a new warning about AI-caused extinction and a multimillion-dollar settlement in the child privacy violation lawsuit against Amazon.

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

Governing AI: Two codes of conduct announced

What’s the best course of action when legislation takes too long to materialise? If your idea is to simply wait patiently, you won’t find a kindred spirit in European Commissioner Thierry Breton.

The AI Pact: Eager to see companies get ready for the EU’s AI Act, last week Breton announced the AI Pact, a voluntary set of rules that will act as a precursor to the regulation. In an interview on TV5Monde on Saturday, he explained that the AI Pact aims to help companies get a head start on rules that will become binding in around two or three years.

The commissioner hopes companies will warm to the proactive initiative, which is why he’s been doing the rounds, starting with Google CEO Sundar Pichai, followed by AnthropicAI CEO Dario Amodei. He’s also met EU digital ministers, who, we presume, have expressed support towards an initiative that could ward off regulatory headaches down the line.

The EU-US joint voluntary AI code: Fellow European Commissioner Margrethe Vestager appears to share a similar sense of impatience. Last week, she announced a voluntary code of conduct to be developed by policymakers from Washington and Brussels in the coming weeks. The announcement came at the start of the twice-yearly EU-US Trade and Tech Council (TTC) summit, which took place in Luleå, Sweden (the code did not make it into the joint statement, though).

The code announced by Vestager has a different objective than Breton’s: Although it’s still in the form of a two-page briefing, it will aim to set basic non-binding principles, or standards, around transparency requirements and risk assessments. 

Breton was more blunt: ‘It is for all the countries that are lagging behind, and the Americans are lagging behind on these issues, I’m not afraid to say it, well, they should also start doing the work that we have done, to establish basic principles. These principles underlie the legislative act that we have built.’ (Machine-translated from this original text: C’est qui que pour tous les pays qui sont en retard et les Américains sont en retard sur ces questions je n’ai pas peur de le dire et bien il faut aussi commencer à faire peut-être le travail que l’on a fait arrêter des principes de base qui sont les principes qui sont les principes sous-jacents à ceux qui nous ont permis d’avoir de bâtir cet acte législatif.)   

In a way, the joint initiative is an attempt to bridge the gap between the laissez-faire approach of the USA and the more stringent approach of the EU – an intermediate step before US companies will be obliged to follow EU rules. It’s probably what should have preceded the GDPR but didn’t.

AI labels to be added to EU’s disinformation code

Yes, there’s a third code that will be impacted by the need to set guardrails for generative AI. And yes, it comes from another European Commissioner. 

Values and transparency chief Vera Jourova announced this morning (Monday 5 June) that AI services should introduce labels for content generated by AI, such as text, images, and videos. This measure will be added to the voluntary Code of Practice on Disinformation, which counts Microsoft, Google, Meta, and TikTok among its signatories (Twitter left the group of code adherents).  


‘Signatories of the EU Code of Practice against disinformation should put in place technology to recognise AI content and clearly label it to users,’ she said, with reference to services with ‘a potential to disseminate AI-generated disinformation’. It’s uncertain if this will be applicable to all generative AI services offered by participating companies.

Why is it relevant? First, all of these initiatives place the EU at the forefront of AI regulation. The EU clearly wants to set a global standard – and a high one at that – for AI, especially generative AI. Second, this will codify the emerging practice of labelling content generated by AI (here’s an example).


Digital policy roundup (29 May–5 June)
// AI GOVERNANCE //

Australia plans AI rules

Speaking of disinformation and deceptive content: Australia wants to introduce AI rules and is seeking public comment on how to mitigate the risks, which include algorithmic bias, lack of transparency, and reliability of data.

The request for comment highlights how other countries have approached AI rules – from voluntary approaches in Singapore to stricter regulation in the EU and Canada. 

Why is it relevant? The discussion paper attached to the call for comment extensively references the EU’s proposed AI Act. It includes elements of what a potential risk-based approach (the hallmark of the EU’s AI Act) could include. Breton will be happy.

OpenAI gets warning from Japan’s data protection watchdog

The Japanese data protection authority has issued administrative guidance to OpenAI, the operator of ChatGPT, in response to concerns over the protection of personal data. 

The guidance highlights the potential for ChatGPT to obtain sensitive personal data without proper consent, potentially infringing on privacy. No specific violations of the country’s privacy rules have been confirmed yet by Japan’s Personal Information Protection Commission.

OpenAI may face an onsite inspection or fines if it fails to take sufficient measures in response to the guidance.

Why is it relevant? Japan took a keen interest in ChatGPT: OpenAI CEO Sam Altman met Japan’s Prime Minister to discuss plans to open an office in the country, and government officials and financial sectors rushed in where others feared to tread. Although there’s been no breach, it seems the country’s data protection watchdog is treading more cautiously than the rest.

AI scientists warn about AI-caused extinction

Tech company chiefs and AI scientists have issued another stark warning: ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’

Almost 900 signatories endorsed the one-sentence open letter, spearheaded by the nonprofit organisation Center for AI Safety.

Why is it relevant? First, it’s yet another warning (see last week’s issue) that we should be much more concerned about the potentially catastrophic effects of future AI on humanity. Compared to what’s happening now, the coming ramifications could be far more dire. Second, some of the signatories are behind companies that are pushing the boundaries of AI.


// CONTENT POLICY //

Meta threatens to pull news content from California over proposed rules

Meta has threatened to remove news content in California if the state passes legislation requiring tech companies to pay publishers. The recently proposed California Journalism Preservation Act calls for platforms to pay a fee to news providers whose work appears on their services.

In a tweet, Meta spokesman Andy Stone said the bill would predominantly benefit large, out-of-state media companies using the pretext of supporting California publishers.

Why is it relevant? This is taking place in parallel with Canada’s attempt to introduce similar legislation. Meta (and Google) told the Canadian Senate’s Standing Committee on Transport and Communications that it would have to withdraw from the country should the proposed bill pass as it stands.


// PRIVACY //

Amazon to pay settlement over children’s privacy lawsuit

Amazon will be required to pay USD25 million (EUR23.3 million) to the US Federal Trade Commission (FTC) to settle allegations that it violated children’s rights by failing to delete Alexa recordings as requested by parents. The FTC’s order must still be approved by the federal court.

The FTC’s investigation determined that Amazon had unlawfully used voice recordings to improve its Alexa algorithm for years. 

Why is it relevant? Although the technology is different, it reminds us of a similar practice: companies training their models with personal data retrieved without consent. Amazon’s denial of the accusations will do little to appease parents after the FTC determined that the company deceived parents about its data deletion practices.


Was this newsletter forwarded to you, and you’d like to see more?


// SHUTDOWNS //

Internet access disrupted in Africa: Authorities in Mauritania cut off the mobile internet last week, MENA-based non-profit SMEX reported. The Senegalese government imposed restrictions on mobile data and social media platforms, both actions following protests over the sentencing of opposition leader Ousmane Sonko. Senegal’s restrictions were confirmed by Netblocks, a global internet monitoring service, which said that authorities placed restrictions to prevent the ‘dissemination of hateful and subversive messages in the context of public order disturbances’.


The week ahead (5–12 June)

5–7 June: Re:publica returns to Berlin this week for its annual digital society festival. This year’s theme is money.

5–8 June: Another major meet-up, RightsCon 23, is taking place in Costa Rica and online. On the sidelines: The GFCE’s Regional Meeting for the Americas and Caribbean 2023.

5–8 June: If you’re a regulator: ITU’s Global Symposium for Regulators 2023 is taking place in Sharm el-Sheikh, Egypt, and online.

7 June: ENISA’s AI Cybersecurity Conference takes place in Brussels and online. AI is set to take centre stage.

12 June: It’s the last day to contribute to the US National Telecommunications and Information Administration (NTIA) request for comment on algorithmic accountability.

12–15 June: The week-long ICANN77 is taking place in Washington, DC, and online.

#WebDebate on Tech Diplomacy

Join us online tomorrow, Tuesday, 6 June for Why and how should countries engage in tech diplomacy? starting at 13:00 UTC, with quite a line-up of special guests.


#ReadingCorner

Children and the metaverse

Meta’s release – and Apple’s planned release – of mixed-reality headsets may reignite people’s interest in the metaverse. This means more users might start spending their time in the metaverse. Which probably means that more kids will give it a go.

UNICEF and Diplo’s latest report, The Metaverse, Extended Reality and Children, considers the potential effects – both good and bad – that the metaverse has on children; the drivers of and predictions for the growth of the metaverse; and the regulatory and policy challenges posed by the metaverse.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation
