
DW Weekly #109 – 1 May 2023


Dear readers,

It’s more AI this week, after the G7 wrapped up a weekend-long ministerial meeting, with data flows joining AI on the ministers’ agenda. Microsoft is far from impressed by the UK’s decision to block its acquisition of Activision Blizzard, while the European Commission has announced the very large platforms and search engines that will face tougher content regulation and consumer protection rules. The European Parliament reached a political deal on the AI Act, but we’ll cover that after the vote in plenary.

Stephanie and the Digital Watch team


// HIGHLIGHT //

G7 countries take on AI, data flows

As we anticipated last week, the G7 digital ministers wrapped up their weekend meeting in Japan with a focus on AI and data flows, setting the stage for new developments in some areas and a bit of a letdown in others. The good parts first.

If AI is queen, data is still king 

The digital ministers of the G7 countries – Canada, France, Germany, Italy, Japan, the UK, and the USA, seven of the world’s largest advanced economies – will start implementing Japan’s plan for Data Free Flow with Trust (DFFT).

In essence, this approach will try to reconcile the need for data to flow freely with the need to keep personal data safe and uphold people’s right to privacy. Although the group of seven has strong economies in common, the USA’s approach to data protection is at odds with stronger European safeguards. Max Schrems can tell us a thing or two about this.

The new plans outlined in the G7 digital ministers’ declaration involve setting up a new entity in the coming months, called an Institutional Arrangement for Partnership (IAP), and choosing the OECD to lead the IAP’s work. Although the OECD has only 38 members, its recent success in getting over 140 countries to agree on a new global digital tax deal shows that it’s capable of navigating complex terrain. Data flows are clearly a politically sensitive, highly charged issue.

The DFFT was first proposed by Japan’s former prime minister Shinzo Abe at the World Economic Forum’s annual meeting in Davos in 2019 and was later endorsed by the G20. Since Japan is chairing the G7 this year, it will want to see the IAP up and running by the end of 2023. Good news for DFFT supporters: The pressure’s on for the IAP stream.

Generative AI: G7 digital ministers hedge their bets 

While things will move quickly on the data flows front, the ministerial declaration is somewhat of a letdown when it comes to regulating generative AI tools such as ChatGPT.

The group of seven did acknowledge how quickly generative AI tools have gained popularity and the need to take stock of their benefits and challenges. But the best the ministers could offer was a vague, non-committal plan:

‘We plan to convene future G7 discussions on generative AI which could include topics such as… (transparency, disinformation)… These discussions should harness expertise and leverage international organisations such as the OECD to consider analysis on the impact of policy developments and GPAI to conduct relevant practical projects.’

What they did commit to is ‘to support interoperable tools for trustworthy AI’, meaning: to develop tools that allow AI systems to work together seamlessly. This includes developing standards and promoting dialogue on interoperability between governance frameworks.

Meanwhile, there’s still a possibility that the G7 heads of state, meeting later this month in Hiroshima, will take more concrete steps to tackle the privacy and security concerns of generative AI. 

G7 digital ministers and other delegates during the first day of a two-day meeting in Japan, on 29 April 2023. Credit: Kyodo

Digital policy roundup (24 April–1 May)
// AI //

Italy lifts ban on ChatGPT after OpenAI introduces privacy improvements

The Italian data protection regulator has confirmed it has allowed OpenAI’s ChatGPT to resume operations in Italy after the company implemented several privacy-enhancing changes. 

The Italian Garante per la protezione dei dati personali temporarily blocked the AI software in response to four concerns: a data breach (which the company said was a bug), unlawful data collection, inaccurate results, and the lack of any age verification checks. OpenAI has now fulfilled most of the regulator’s requests: It has added information on how users’ data is collected and used, and now allows users to opt out of the data processing that trains the algorithmic model.

What’s next? There are still two requests from the regulator that OpenAI must implement in Italy: It needs to introduce an age-gating system to prevent children from accessing inappropriate content (this will serve as a testbed for age verification systems), and it needs to launch a publicity campaign to inform users about their right to opt out of data processing for training the model.

Why the emphasis on a publicity campaign? Because there’s no opt-in for users to consent to data processing for training the algorithms (OpenAI will rely on legitimate interest). So if users object, their recourse is to submit an opt-out form to OpenAI.

Meanwhile, scrutiny by the EU’s ad hoc task force and other data protection watchdogs continues.

USA: A new bill to create a task force to review AI policies

Driven by the need to review AI policy, Democratic Senator Michael Bennet has introduced a bill that would create a task force to review AI policies and make recommendations; the task force would then terminate its operations after 18 months.

Why is this relevant? However sound the idea behind it, it could take longer for the task force to materialise than for it to complete its job.


// ANTITRUST //

UK competition watchdog blocks Microsoft’s purchase of Activision Blizzard

The UK’s Competition and Markets Authority (CMA) has blocked Microsoft’s acquisition of Activision Blizzard, valued at USD68.7 billion (EUR62.5 billion), over concerns that it would negatively affect the cloud gaming industry.

We might have known this would happen: In February, the watchdog said the merger would harm competition and proposed several remedies. Even though Microsoft’s reassurances seemed promising, they were not enough to persuade the watchdog to reverse its initial findings, and a war of words ensued.

Why is this relevant? It’s relevant because of what happens next. An unsuccessful appeal by Microsoft could influence the decisions of the US Federal Trade Commission and the European Commission. If past experience is anything to go by, a second rejection – this time by the European Commission – could convince the FTC to block the merger as well.

Some of the characters in Activision Blizzard’s games. (Credit: Activision Blizzard)

// DSA //

Digital Services Act: European Commission designates 19 very large platforms and search engines

The European Commission has designated 19 online services under two categories – very large online platforms (VLOPs) and very large online search engines (VLOSEs) – which will need to comply with stricter rules under the Digital Services Act.
These services each have more than 45 million monthly active users in the EU, according to the data the companies themselves had to disclose last February.

What happens next? The companies must comply with the new rules within four months. The rules include a ban on targeting ads based on users’ sensitive data (such as political opinions), tougher measures to curb the spread of illegal content, and a requirement to carry out their first risk assessment.


The 17 very large online platforms are: Alibaba AliExpress, Amazon Store, Apple AppStore, Booking.com, Facebook, Google Play, Google Maps, Google Shopping, Instagram, LinkedIn, Pinterest, Snapchat, TikTok, Twitter, Wikipedia, YouTube, and Zalando.

The 2 very large online search engines are: Bing and Google Search.


// CONTENT POLICY //

China to root out false news about Chinese businesses

The Cyberspace Administration of China (CAC) will carry out a three-month nationwide campaign to remove fake news about Chinese businesses from online circulation. The aim is to allow ‘enterprises and entrepreneurs’ to work in ‘a good atmosphere of online public opinion’.

Why is it relevant? There’s nothing new about China’s ‘clean-up cyberspace’ campaigns (known as Qinglang) – these campaigns actually started in 2016. But the fact that the government wants to improve local businesses’ reputation shows its intent to promote its domestic market.

Brazil blocks (and reinstates) Telegram over non-disclosure of personal data 

Brazil’s Supreme Court temporarily suspended access to messaging app Telegram for users in the country after the company failed to comply with an order to provide data linked to a group of neo-Nazi organisations using the platform. 

Telegram CEO Pavel Durov said that the data requested by the court ‘is technologically impossible for us to obtain’, as the users had already left the platform. But the court disagreed. 

The court has lifted its suspension, but retained the non-compliance fine of one million reais (USD198,000 or EUR182,000) per day until the company provides the requested data.

Why is it relevant? Because the same thing happened to Telegram last year and to WhatsApp in previous years. 

Telegram CEO Pavel Durov’s message, April 2023

(Click here for the original Du Rove Channel machine-readable message)


// CHILDREN //

Out of control? Severe child sexual abuse imagery on the rise

The news that no one wants to hear: The number of images depicting child sexual abuse classified as severe has more than doubled since 2020. 

The annual report of the Internet Watch Foundation (IWF), a non-profit that works to eliminate abusive content from the internet, reveals more harrowing trends. For instance, content involving children aged 7–10 increased by 60%, with most victims being girls. Some of the most extreme abuse is committed against very young children, including babies.

Why is it relevant? 

First, it comes out at the same time as the results of a two-year investigation by The Guardian, which found that Meta is struggling to prevent criminals from using its platforms, Facebook and Instagram, for child sexual abuse and exploitation.

Second, because it strengthens the call, reiterated by law enforcement agencies a fortnight ago (and by the IWF in its report), for tech companies to prioritise child safety over end-to-end encryption. The agencies say that encryption shouldn’t come at the expense of companies’ ability to identify abusive content.


The week ahead (1–7 May)

1–4 May: This year’s Web Summit, which gathers leaders and start-ups from the tech and software industries, is taking place in Brazil this week. 

3 May: It’s World Press Freedom Day! To celebrate the 30th anniversary of this international day, UNESCO is holding a special event in New York on 2 May, which will also be livestreamed.

3 May: Last day to provide feedback on the EU’s initiative on virtual worlds: A head start towards the next technological transition.

3–4 May: The 6G Global Summit is happening in Bahrain (and online).

3–5 May: This year’s forum on Science, Technology and Innovation for the Sustainable Development Goals (STI Forum), taking place in New York, is about accelerating the post-Covid-19 recovery.

5 May: A stakeholder workshop organised by the EU will discuss how to ensure effective compliance with the data-related rules in the Digital Markets Act. It’s being held in Brussels and online.


Stephanie Borg Psaila
Director of Digital Policy, DiploFoundation

Was this newsletter forwarded to you, and would you like to see more?