DW Weekly #113 – 29 May 2023


Dear all,

OpenAI CEO Sam Altman was in the news again, not only because of the European tour he’s embarked on, but also because of things he said and wrote last week. In other news, Microsoft president Brad Smith joined the private sector’s calls for regulating AI. Meta was hit with a historic fine over data mishandling, while the Five Eyes have attributed a recent spate of cyberattacks to China.

Let’s get started.
Stephanie and the Digital Watch team


// HIGHLIGHT //

OpenAI’s Sam Altman says forget about existing AI, it’s future AI that should worry us

There were two reasons why OpenAI CEO Sam Altman made headlines last week. The first concerns a threat he made to European lawmakers (which he then took back) about regulating AI. That’s about regulating existing AI.

The second is his warning on the existential risks which AI could pose to humanity. That’s about regulating future AI. Let’s start with this one.

Regulating future AI… now   

Doomsday theories abound these days. We just don’t know if we’ll see AI take over the world in our lifetime, or that of our children – or if it will ever even come to that at all. 

Sam Altman, the man behind OpenAI’s ChatGPT, which took the world by storm in the space of a few weeks, is probably one of the few people who could predict what AI might be capable of in ten years’ time, within an acceptable margin of error. (That’s also why he’s in our Highlight section again this week.)

In case he felt he wasn’t vocal enough during the recent hearing before the US Senate Judiciary Committee, he’s now written about it again: ‘Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains… superintelligence will be more powerful than other technologies humanity has had to contend with in the past.’ That would give us ten years before it could all go awry. Considering the time it takes for an EU regulation to see the light of day, ten years is not a long time.

So how should we regulate future AI? Altman sees a three-pronged approach to what he calls superintelligence. The first is a government-backed project where companies agree to safety guardrails based on the rate of growth in AI capability (however this will be measured). This reminds us of what economist Samuel Hammond wrote recently on the need for a Manhattan Project for AI Safety.

The second is to form an international authority, similar to the International Atomic Energy Agency, with authority over AI above a certain capability, to ‘inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc’. 

The third is more research on safety and alignment issues, but we won’t go into this for now.

What is interesting here is the emphasis on regulations based on capabilities. It’s along the same lines as what he argued before US lawmakers the week before: In his view, the stronger or more powerful the algorithm or resource, the stricter the rules should be. By comparison, the EU’s upcoming AI Act takes on a risk-based approach: the higher the risk, the stricter the rules.

By this reasoning, models that fall below Altman’s proposed capability threshold would not be covered by this (or any?) regulation. Why? He thinks that (a) today’s models are not as risky as future versions will be, and (b) that companies and open-source projects shouldn’t have to face burdensome mechanisms like licences or audits. ‘By contrast’, he writes, ‘the systems we are concerned about will have power beyond any technology yet created, and we should be careful not to water down the focus on them by applying similar standards to technology far below this bar.’

(Over-) regulating existing AI 

He does have a point. If more powerful models are misused, their power to cause harm is significantly higher. The rules that would apply to riskier models should therefore be more onerous. And he’s probably right that a moratorium wouldn’t stop the advancements from continuing in secret.

But there are also major flaws with Altman’s logic. First, it’s not an either/or scenario, as he suggests. Shifting the focus to tomorrow’s AI, just because it will be more powerful, won’t make today’s issues go away. Today’s issues still need to be tackled, and soon.

This logic explains why he felt compelled to criticise the EU’s upcoming AI Act as a case of over-regulation. Licences and regulations, to him, are an unnecessary burden on companies whose systems carry more or less the same risks as other internet technologies (risks he probably also considers insignificant compared to those that more powerful AI systems will pose in the next ten years).

Second, existing models are the basis for more powerful ones (unless he knows something that we don’t). Hence, the project and authority that Altman envisions should start addressing the issues we see today, based on the capabilities we have today. Guardrails need to be in place today. 

And yet, it’s not Altman’s criticism that angered European Commissioner Thierry Breton, but rather his threat of pulling out of Europe over the proposed rules. If there’s one action that a threat could trigger, it would be the immediate implementation of guardrails.

Tweet from Thierry Breton on 25 May says: ‘There is no point in attempting blackmail – claiming that by crafting a clear framework, Europe is holding up the rollout of generative #AI. To the contrary! With the “AI Pact” I proposed, we aim to assist companies in their preparations for the EU AI Act.’ The words ‘Is that a threat?’ appear over a photo of a woman with blond hair.

Digital policy roundup (22–29 May)
// AI GOVERNANCE //

Microsoft proposes five-point blueprint for AI regulation

Microsoft has published a blueprint for governing AI, which includes placing tight rules (or safety brakes) on high-risk AI systems that are being deployed to control critical infrastructure, and creating new rules to govern highly capable AI foundation models. In his foreword to the blueprint, Microsoft president Brad Smith also called for a new government agency to implement these new rules.

A five-point blueprint for governing AI

  1. Implement and build upon new government-led AI safety frameworks
  2. Require effective safety brakes for AI systems that control critical infrastructure
  3. Develop a broader legal and regulatory framework based on the technology architecture for AI
  4. Promote transparency and ensure academic and public access to AI
  5. Pursue new public-private partnerships to use AI as an effective tool to address the inevitable societal challenges that come with new technology

Source: Microsoft

Why is it relevant? First, some of the proposals in Microsoft’s blueprint are similar to OpenAI CEO Sam Altman’s proposals. For instance:

  • Microsoft proposes ways of governing highly capable AI foundation models – more or less what Altman describes as superintelligent systems. These powerful new AI models are at the frontier of research and development, and are emerging from advanced data centres using internet-scale datasets.
  • Like Altman, Smith is not thinking about ‘the rich ecosystem of AI models that exists today’, but rather the small class of cutting-edge AI models that are redefining the frontier.
  • And, again just like Altman, Smith believes in a framework consisting of rules, licensing requirements, and testing.

Second, Microsoft’s blueprint goes a step further (and is closer to the EU’s risk-based approach) in calling for safety brakes on AI systems used within critical infrastructure sectors. Not all AI systems used in these sectors are high-risk, but those that manage or control infrastructure systems for electricity grids or water systems, for instance, require tighter controls.


EU, Google to develop voluntary AI pact ahead of new AI rules

Thierry Breton, the commissioner in charge of the EU’s digital affairs, and Google chief executive Sundar Pichai agreed last week to work on a voluntary AI pact ahead of new regulations. The agreement will help companies develop and implement responsible AI practices. 

Why is it relevant? First, Breton said companies can’t afford to wait until the AI regulation is in place to start complying with the rules. Second, the commissioner used his meeting with Pichai to call out other companies who pick and choose the regulations they’ll implement. We’re assuming he’s referring to OpenAI and Twitter.


// DATA PROTECTION //

Meta’s record fine puts pressure on EU, USA to conclude data transfer framework  

Meta has been fined a record-breaking EUR1.2 billion (USD1.29 billion) and given six months to stop data transfers of European Facebook users from the EU to the USA. 

The fine was imposed by Ireland’s Data Protection Commission (DPC) after the company continued to transfer data despite the EU Court of Justice’s 2020 ruling invalidating the EU-USA Privacy Shield framework. The data protection regulator concluded that the legal basis that Meta used to continue transferring data did not afford European citizens adequate protection of their rights.

Why is it relevant? The company will appeal, so there’s still a long way to go before the fine is confirmed. But the pressure’s on for EU and US officials negotiating the new data protection framework. The new Trans-Atlantic Data Privacy Framework, announced in March 2022, has not yet been finalised.




// CYBERCRIME //

Five Eyes attribute cyberattacks to China

The intelligence agencies of the USA, Australia, Canada, New Zealand, and the UK – called the Five Eyes – have attributed recent cyberattacks on US critical infrastructure to the Chinese state-sponsored hacking group Volt Typhoon.

Responding to the joint cybersecurity advisory issued by the intelligence agencies, China’s foreign ministry spokesperson Mao Ning dismissed the advisory as disinformation. ‘No matter how the tactics change, it does not change the fact that the US is the empire of hacking,’ she said.


// GDC //

Public institutions ‘ill-equipped to assess and respond to digital challenges’ – UN Secretary-General

Most governments do not have sufficient skills to respond to digital challenges, a result of decades of underinvestment in state capacities. The UN Secretary-General’s latest policy brief says that government capacities should therefore be a priority for cooperation on digital issues. 

In case you’re not following the process already: The Global Digital Compact is an initiative of the Secretary-General for promoting international digital cooperation. UN member states are expected to agree on the principles forming part of the Global Digital Compact during next year’s Summit of the Future. 

If you’ve already read the brief: We wouldn’t blame you for thinking that the brief proposes quite a few mechanisms at a time when there are already hundreds of them in place. After all, the Secretary-General’s initiative followed a report which recommended that we ‘make existing intergovernmental forums and mechanisms fit for the digital age rather than rush to create new mechanisms’.

If you haven’t done so already: Consider contributing to the informal consultations. The next two deep dives are in two weeks’ time.


The week ahead (29 May–4 June)

30 May: The G7 AI working group’s first meeting effectively kickstarts the Hiroshima AI process.

30–31 May: The EU-US Trade and Technology Council (TTC) meets in Sweden. No, they won’t tackle data flows. Yes, they will tackle a host of other issues – from AI to green tech.

31 May–2 June: The Council of Europe’s Committee on AI will hold its 6th meeting in Strasbourg, under the chairmanship of Ambassador Thomas Schneider, who was re-elected during the 5th meeting.

31 May–2 June: The 15th International Conference on Cyber Conflict (CyCon), organised by the NATO Cooperative Cyber Defence Centre of Excellence, takes place in Tallinn, Estonia.

31 May: Join us online or in Geneva for our conference on Building Trust in Digital Identities. As more governments around the world are exploring and implementing digital identity (or e-ID) solutions, we look at safety, security, and interoperability issues.

For more events, bookmark the DW observatory’s calendar of global policy events.


#ReadingCorner
A human hand sketching a robot on graph paper

How do we avoid becoming knowledge slaves? By developing bottom-up AI

‘If you’ve ever used these [ChatGPT or similar] tools, you might have realised that you’re revealing your thoughts (and possibly emotions) through your questions and interactions with the AI platforms,’ writes Diplo’s Executive Director Dr Jovan Kurbalija. ‘You can therefore imagine the huge amount of data these AI tools are gathering and the patterns that they’re able to extract from the way we think.’ 

The consequence is that we “risk becoming victims of ‘knowledge slavery’ where corporate and/or government AI monopolies control our access to our knowledge.” There’s a solution: developing bottom-up AI. Read the full article.

Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation


DW Weekly #112 – 22 May 2023


Dear readers,

The search for ways to govern AI reached the halls of the US Senate Judiciary Committee last week, with a hearing involving OpenAI’s Sam Altman, among others. The G7 made negligible progress on tackling AI issues, but significant progress on operationalising the Data Free Flow with Trust approach.

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

US Senate hearing: 10 key messages from OpenAI’s CEO Sam Altman 

If OpenAI CEO Sam Altman’s hearing before the US Congress last week reminded you of Mark Zuckerberg’s testimony a few years ago, you’re not alone. Both CEOs testified before the Senate Judiciary Committee (albeit before different subcommittees), and both called for regulation of their respective industries.

However, there’s a significant distinction between the two. Zuckerberg was asked to testify in 2018 primarily due to concerns surrounding data privacy and the Cambridge Analytica scandal. In Altman’s case, there was no scandal: lawmakers are trying to figure out how to navigate the uncharted territory of AI. And with Altman’s hearing coming several years later, lawmakers now have more familiarity with policies and approaches that proved effective, and those that failed. 

Here are ten key messages Altman delivered to lawmakers during last week’s subcommittee hearing.

1. We need regulations that employ a capabilities-based approach…

Amid discussions around the EU’s forthcoming AI Act, which will take on a risk-based approach (the higher the risk, the stricter the rules), Altman argued that US lawmakers should favour a power- or capabilities-based strategy (the stronger or more powerful the algorithm, the stricter the rules). 

He suggested that lawmakers consider ‘a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities’.

What would these capabilities look like? According to Altman, the benchmark would be determined by what the models can accomplish. So presumably, one would take AI’s abilities at the time of regulation as a starting point, and gradually increase the benchmarks as AI improves its abilities.

2. Regulations that will tackle more powerful models…

We know it takes time for legislation to be developed. But let’s say lawmakers were to introduce new legislation tomorrow: Altman thinks that the starting point should be more powerful models, rather than what exists right now.

‘Before we released GPT-4, our latest model, we spent over six months conducting extensive evaluations, external red teaming and dangerous capability testing. We are proud of the progress that we made. GPT-4 is more likely to respond helpfully and truthfully, and refuse harmful requests, than any other widely deployed model of similar capability… We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.’

3. Regulations that acknowledge that users are just as responsible…

Altman did not mince words: ‘Certainly companies like ours bear a lot of responsibility for the tools that we put out in the world, but tool users do as well.’

Hence the need for a new liability framework, Altman restated.

4. …And regulations that place the burden on larger companies.

Altman notes that regulation comes at the risk of slowing down the American industry ‘in such a way that China or somebody else makes faster progress.’ 

So how should lawmakers deal with this risk? Altman suggests that the regulatory pressure should be on the larger companies that have the resources to handle the burden, unlike smaller companies. ‘We don’t wanna slow down smaller startups. We don’t wanna slow down open source efforts.’ 

5. Independent scorecards are a great idea, as long as they recognise that it’s ‘early stages’

When a Senator asked Altman whether there should be independent testing labs to provide scorecards that indicate ‘whether or not the content can be trusted, what the ingredients are, and what the garbage going in may be, because it could result in garbage going out’, Altman’s positive response was followed by a caveat.

‘These models are getting more accurate over time… (but) this technology is in its early stages. It definitely still makes mistakes… Users are pretty sophisticated and understand where the mistakes are… that they need to be responsible for verifying what the models say, that they go off and check it.’

The question is, when will (it be convenient to say that) the technology has outgrown its early stages?

6. Labels are another great idea for telling fact from fiction

Altman points out that labels telling people what they’re looking at can help them understand what they’re reading and viewing. ‘People need to know if they’re talking to an AI, if, if content that they’re looking at might be generated or might not’.

The generated content will still be out there, but at least, creators of the generated content can be transparent with their viewers, and viewers can make informed choices, he said.

7. It takes three to tango: the combined effort of government, the private sector, and users to tackle AI governance

Neither regulation nor scorecards nor labels will be sufficient on their own. Altman referred to the advent of photoshopped images, highlighting how quickly people learned to understand that images might be photoshopped and the tool misused.

The same applies to AI: ‘It’s going to require a combination of companies doing the right thing, regulation and public education.’

8. Generative AI won’t be the downfall of news organisations

The reason is simple, according to Altman: ‘The current version of GPT-4 ended training in 2021. It’s not a good way to find recent news.’

He acknowledges that other generative tools built on top of ChatGPT can pose a risk for news organisations (presumably referring to the ongoing battle in Canada, and previously in Australia, on media bargaining), but also thinks that it was the internet that let news organisations down.

9. AI won’t be the downfall of jobs, either

Altman reassured lawmakers that ‘GPT-4 and other systems like it are good at doing tasks, not jobs’. We reckon jobs are made up of tasks, and that’s why Altman might have chosen different words later in his testimony.

‘GPT-4 will entirely automate away some jobs, and it will create new ones that we believe will be much better… This has been continually happening… So there will be an impact on jobs. We try to be very clear about that, and I think it will require partnership between the industry and government, but mostly action by the government to figure out how we want to mitigate that.’

10. Stay calm and carry on: GPT is ‘a tool, not a creature’

We had little doubt about that, but what Altman said next might have been aimed at reassuring those who said they’re worried about humanity’s future: GPT-4 is a tool ‘that people have a great deal of control over and how they use it.’ 
The question for Altman is: how far are we from losing control over AI? It’s a question no one asked him.


Digital policy roundup (15–22 May)
// AI & DATA //

G7 launches Hiroshima AI dialogue process

The G7 has agreed to launch a dialogue on generative AI – including issues such as governance, disinformation, and copyright – in cooperation with the Organisation for Economic Co-operation and Development (OECD) and the Global Partnership on AI (GPAI). Sunday’s announcement, which came at the end of the three-day summit in Hiroshima, Japan, provides the details of what the G7 digital ministers agreed to in April. The working group tasked with the Hiroshima AI process is expected to start its work this year. 

The G7 also agreed to support the development of AI standards. (Refresher: Here’s the G7 digital ministers’ Action Plan on AI interoperability.)  

Why is this relevant? On the home front, with the exception of a few legislative hotspots working on AI rules, most governments are worrying about generative AI (including ChatGPT) but are not yet ready to take legislative action. On the global front, while the G7’s Hiroshima AI process is at the forefront of tackling generative AI, the group acknowledges that there’s a serious discrepancy among the G7 member states’ approaches to policy. The challenges are different, but the results are similar. 

G7 greenlights plans for Data Free Flow with Trust concept

The G7 had firmer plans in place for data flows. As anticipated, the G7 endorsed the plan for operationalising the Data Free Flow with Trust (DFFT) concept, outlined last month by the G7 digital ministers.

The leaders’ joint statement draws attention to the difference between unjustifiable data localisation regulations and those that serve the public interests of individual countries. The practical application of this disparate treatment remains uncertain; the new Institutional Arrangement for Partnership (IAP), which will be led by the OECD, has a lot of work ahead.

Why is this relevant? The IAP’s work won’t be easy. As the G7 digital ministers acknowledged, there are significant differences in how G7 states (read: the USA and EU countries) approach cross-border data flows. But as any good negotiator will say, identifying commonalities offers a solid foundation, so the G7 communique’s language (also found in previous G7 and G20 declarations) remains promising. Expect accelerated progress on this initiative in the months to come. 

Photo of former Google CEO Eric Schmidt.

Ex-Google CEO says AI regulation should be left to companies 

Former Google CEO Eric Schmidt believes that governments should leave AI regulation to companies since no one outside the tech industry has the necessary expertise. Watch the report or read the transcript (excerpt):

NBC: You’ve described the need for guardrails, and what I’ve heard from you is, we should not put restrictive regulations from the outside, certainly from policymakers who don’t understand it. I have to say I don’t hear a lot of guardrails around the industry in that. It really, just as I’m understanding it from you, comes down to what the industry decides for itself.

Eric Schmidt: When this technology becomes more broadly available, which it will and very quickly, the problem is going to be much worse. I would much rather have the current companies define reasonable boundaries. 

NBC: It shouldn’t be a regulatory framework. It maybe shouldn’t even be a sort of a democratic vote. It should be the expertise within the industry that helps to sort that out. 

Eric Schmidt: The industry will first do that because there’s no way a non-industry person can understand what is possible. It’s just too new, too hard, there’s not the expertise. There’s no one in the government who can get it right, but the industry can roughly get it right and then the government can put a regulatory structure around it.


// SECTION 230 //

Section 230 unaffected by two US Supreme Court judgements

As anticipated, the US Supreme Court left Section 230 untouched in two judgements involving families of people killed by Islamist extremists overseas. The families tried to hold social media platforms liable for allowing extremists on their platforms or recommending such content to users, arguing that Section 230 (a rule that protects internet platforms from liability for third-party content posted on the platforms) should not shield the platforms.

What the Twitter vs Taamneh (21-1496) judgement says: US Supreme Court justices agreed unanimously to reverse a lower court’s judgement against Twitter, in a case initiated by the US relatives of Nawras Alassaf, who was killed in Istanbul in 2017. The Supreme Court rejected claims that Twitter aided extremist groups: Twitter’s ‘algorithms appear agnostic as to the nature of the content, matching any content (including ISIS’ content) with any user who is more likely to view that content. The fact that these algorithms matched some ISIS content with some users, therefore, does not convert defendants’ passive assistance into active abetting.’

What the Gonzalez vs Google (21-1333) judgement says: In its judgement in a parallel case, the US Supreme Court sent back the lawsuit brought by the family of Nohemi Gonzalez, who was fatally shot in Paris in 2015, to the lower court. The Supreme Court declined to even address the scope of Section 230, as the family’s claims were likely to fail in the light of the Twitter case.


// TIKTOK //

EU unfazed by TikTok’s cultural diplomacy at Cannes

TikTok’s partnership with the Festival de Cannes was the talk of the French town last week. But TikTok’s cultural diplomacy efforts, on display at Cannes for the second year running, failed to impress the European Commission.

Referring to TikTok’s appearance at Cannes, in an interview on France TV’s Télématin (jump to 1’17’), European Commissioner Thierry Breton said that the company ‘still (has) a lot of room for improvement’, especially when it comes to safeguarding children’s data. Breton also confirmed that he was in talks with TikTok’s CEO recently, presumably about the Digital Services Act commitments, which very large platforms need to deliver on by 25 August.


The week ahead (22–28 May)

23–25 May: The 3rd edition of the Quantum Matter International Conference – QUANTUMatter 2023 – takes place in Madrid, Spain. Experts will tackle the latest in quantum technologies, emerging quantum materials and novel generations of quantum communication protocols, quantum sensing, and quantum simulations.

23–26 May: The Open-Ended Working Group (OEWG) will hold informal intersessional meetings that will comprise the Chair’s informal roundtable discussion on capacity-building and discussions on topics under the OEWG’s mandate.

24 May: If you haven’t yet proposed a session for this year’s Internet Governance Forum (IGF) (to be held on 8–12 October in Kyoto, Japan), you still have a couple of days left until the extended deadline.

24–25 May: The 69th meeting of TF-CSIRT, the task force that coordinates Computer Security and Incident Response Teams in Europe, takes place in Bucharest, Romania.

24–26 May: The 16th international Computers, Privacy, and Data Protection (CPDP) conference, taking place in Brussels and online, will deal with ‘Ideas that drive our digital world’, mostly related to AI governance, and, well, data protection.

25 May: There will be lots to talk about during the Global Digital Compact’s next thematic deep dive, on AI and emerging technologies (afternoon UTC) and digital trust and security (evening UTC). Register to participate.

For more events, bookmark the DW observatory’s calendar of global policy events.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation


DW Weekly #111 – 15 May 2023


Dear readers,

Once again we’re starting with an AI-related highlight: There’s been progress in the AI Act’s legislative journey, and the EU could well see the proposed rules come into effect very soon. In other news, ChatGPT is now being monitored in Latin America, while TikTok got kicked off the Austrian government’s phones.

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

Europe’s AI Act moves ahead: Why, how, and what’s next

The EU’s proposed AI Act moved ahead last week after a key vote at the committee level. In practice, this means that the proposed law could very well be passed by the end of this year.  Let’s break all of this down and see what this means for companies and consumers.

Who voted. Last week, members of the European Parliament in two committees – the Internal Market Committee and the Civil Liberties Committee – voted on hundreds of amendments made to the European Commission’s original draft rules.

The AI Act proposes a sliding scale of rules based on risk. Practices with an unacceptable level of risk will be prohibited; those considered high-risk will carry a strict set of obligations; less risky ones will have more relaxed rules, and so on.  

Why the vote is relevant. First, lawmakers wanted to ensure that general-purpose AI – like ChatGPT – is captured by the new draft rules. Second, in the grand scheme of things, when one of the principal EU entities agrees on a text, that marks a significant milestone in the EU’s multi-stage legislative process.

An infographic illustrates the process of an ordinary legislative procedure, starting with an initiative from the European Commission, proceeding to a First Draft, and then undergoing independent reviews in the Council of the EU and European Parliament before continuing the joint discussions, until reaching its final form. The icons for First Draft and the Council of the EU have green checkmarks, while the icon for the European Parliament has a light green dotted-line check mark indicating its ongoing status. A vertical dashed red line divides the illustration between the separate Council of the EU and European Parliament steps and the joint amendments and agreement step, indicating its current status.
To give you a better idea of where we are: As soon as Parliament approves its draft text in plenary during the upcoming 12–15 June session (marked by the dotted green checkmark), the AI Act advances to the trilogue talks (the dotted red line). The EU Council’s negotiating text was approved in December. Source: Based on a diagram from artificialintelligenceact.eu

What Parliament’s version of the AI Act says. There are many new amendments, so we’ve rounded up the most important:

  • Tough rules for ChatGPT-like tools. Parliament’s amendments regulate ChatGPT and similar tools. The proposed rules define generative AI systems under a new category called Foundation Models (this is what underpins generative AI tools). Providers would have to abide by similar obligations as high-risk systems. This means applying safety checks, data governance measures, and risk mitigations before introducing models into the market. The proposed rules would also oblige them to consider foreseeable risks to health, safety, and democracy.
  • Copyright tackled, too (sort of). Providers would also need to be transparent: They would need to inform people that content was machine-generated and provide a summary of (any) copyrighted materials used to train their AIs. It’s then up to the rights holders to sue for copyright infringement if they so decide.
  • New prohibited practices introduced. Parliament wants to see intrusive and discriminatory uses of AI systems banned. These would include real-time remote biometric identification systems in public spaces, the use of emotion recognition systems by law enforcement and employers, and the indiscriminate scraping of biometric data from social media or CCTV footage for creating databases (this reminds us of Clearview AI’s practices – coincidentally, Austria’s Data Protection Authority ruled against the company last week).
  • Consumers may complain. The proposal boosts people’s right to file complaints about AI systems and receive explanations of decisions based on high-risk AI systems that impact their rights.
  • Research activities excluded. The rules would not apply to AI research activities, nor to AI components provided under open-source licences. So if a company says it is experimenting with a system, it might be able to avoid the rules. But if it then implements that system? The rules would apply.

Compare and contrast with China’s plans. Interestingly, the draft rules on generative AI, which China published last month, contain similar provisions on transparency, accountability, data protection, and risk management. The Chinese version does, however, go much further on copyright (tools can’t be trained on data that infringes intellectual property rights) and the accuracy of information (whatever is generated by AI needs to be true and accurate) – two major concerns for governments around the world.

What to expect in the next stage. Once the discussions between the EU Council and European Parliament (on the Commission’s proposal) start – the so-called trilogues – there’s a risk that the rules could get watered down. It’s not necessarily a matter of diluting the stringent rules for providers – six months on from the introduction of ChatGPT, there’s a pretty clear understanding of what these tools can, cannot, and shouldn’t be allowed to do. 

Rather, it’s more a matter of governments wanting to ensure their own freedom to use AI tools in ways they deem essential for people’s safety (including some practices that Parliament wants banned) and to address national security concerns when needed. 

As for timelines, there’s pressure from all sides to see this through by the end of the year. Providers want legal certainty; users want protection, and the Spanish Presidency (which takes the helm of the EU Council from June to December this year) will want to be remembered for seeing the law through.

Digital policy roundup (8–15 May)
// AI //

Data protection authorities in Latin America monitoring ChatGPT

Latin American data protection watchdogs forming part of the Ibero-American Data Protection Network (RIPD) are monitoring OpenAI’s ChatGPT for potential privacy breaches. The network, comprising 16 authorities from 12 countries, is also coordinating action around children’s access to the AI tool and other risks, such as misinformation.

Why is it relevant? ChatGPT has become a global concern, far beyond the investigative action we’d expect from the usual regulatory hotspots (USA, Europe, China).


// COMPETITION //

European Commission approves Microsoft’s acquisition of Activision

Microsoft’s acquisition of Activision, the creator of the widely popular Call of Duty video game franchise, has received the European Commission’s seal of approval. The approval is conditional on Microsoft fully adhering to its commitments.

Since the EU’s antitrust regulators believe that Microsoft could harm competition if it made Activision’s games exclusive to its own cloud game streaming service, the company will now have to give consumers a licence to stream anywhere they like. 

Why is it relevant? This contrasts with last month’s decision by the UK’s Competition and Markets Authority (CMA) to block the acquisition over concerns that the merger would negatively affect the cloud gaming industry. This decision will be confirmed or rejected on appeal. In the USA, the Federal Trade Commission’s case is scheduled for a hearing on 2 August. 

Flow chart shows the remedies designed to prevent anti-competitive actions by Microsoft with Activision games: Microsoft commits to no harm to the distribution of Activision Blizzard console games, and to easing access limitations by offering free licence access to all Activision games for cloud game streaming providers and users – creating opportunities for innovation and preventing barriers for competitors.

// TIKTOK //

Austria blocks TikTok from government phones

The Austrian government has joined other countries in banning Chinese-owned TikTok from being used on federal government officials’ official phones.

The announcement was made by Austria’s Federal Chancellor, together with his vice-chancellor (and minister of culture and arts), and the ministers for finance and home affairs. Citing the ban by the European Commission in February,  Austria is concerned with three issues:

  1. Foreign authorities (read: China) potentially having technical access to official devices through the app’s functions and exploiting any vulnerabilities to access sensitive information.
  2. The potential for data protection and security breaches through the collection of a large amount of personal information and sensor data.
  3. The risk of influencing the opinion-forming process of public officials, such as through the manipulation of search results.

Why is it relevant? This adds momentum to the ongoing anti-TikTok wave in Europe and the USA and adds pressure on the company to prove its trustworthiness and security measures to avoid being blocked.


// ANTITRUST //

Italy investigating Apple for alleged abuse of dominant position in app market

The Italian competition authority (the Autorità Garante della Concorrenza e del Mercato – AGCM) has launched an investigation into Apple’s uneven application of its own app tracking policies, which the agency says is a potential abuse of the company’s dominant position in the online app market. 

What’s the issue about? If you’re an iPhone or iPad user, you might have noticed a privacy pop-up (such as the one below) when installing third-party apps that try to track you. That’s a feature of Apple’s App Tracking Transparency (ATT) policy introduced two years ago. The problem is that the same interruption doesn’t apply to Apple itself when its own apps try to track you, so users are more likely to think twice before allowing a third-party app to track their activity. In addition, the advertising data passed on to third-party developers is inferior to the data that Apple possesses, putting third-party developers at a disadvantage.

Screenshot of a phone shows the message: Allow 'PalAbout' to track your activity across other companies' apps and websites? Your data will be used to deliver personalised ads to you, followed by two options, 1. Ask App Not to Track, and 2. Allow.
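
For readers who build apps, here’s roughly what that pop-up corresponds to in code – a minimal sketch in Swift, assuming iOS 14 or later and Apple’s AppTrackingTransparency framework (the ‘PalAbout’ app in the screenshot is just a hypothetical example):

```swift
import AppTrackingTransparency
import AdSupport

// Minimal sketch: before a third-party app can read the advertising
// identifier (IDFA), it must trigger the ATT pop-up shown above.
// The pop-up's explanatory text comes from the app's Info.plist
// (NSUserTrackingUsageDescription).
func requestTrackingConsent() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // User tapped 'Allow': the IDFA is available for tracking.
            let idfa = ASIdentifierManager.shared().advertisingIdentifier
            print("Tracking allowed, IDFA: \(idfa)")
        case .denied, .restricted, .notDetermined:
            // User tapped 'Ask App Not to Track' (or hasn't decided yet):
            // the IDFA is returned as all zeros.
            print("Tracking not permitted")
        @unknown default:
            print("Unknown authorisation status")
        }
    }
}
```

Apple’s own apps are governed by a separate personalised-ads setting rather than this prompt – the asymmetry at the heart of the AGCM case.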

Why is it relevant? Not only are Apple’s App Store practices being probed by the EU’s competition authority in at least three separate cases (there’s a fourth concerning mobile app payments, which continued last week), but the ATT policy itself is being investigated elsewhere, including the UK, Germany, and California.


// DATA PROTECTION //

GSMA gets GDPR fine for use of facial recognition during annual event 

The Spanish data protection authority has confirmed that GSMA, the organiser of the annual Mobile World Congress (MWC), violated the GDPR. The fine of EUR 200,000 (USD 218,000) was also confirmed on appeal. 

The authority found that GSMA failed to conduct the necessary impact assessments before deciding to collect biometric data on almost 20,000 participants for the MWC’s 2021 edition. Worse, providing sensitive biometric information was a mandatory step of the registration procedure, with no possibility to opt out.

Why is it relevant? If you’re an event organiser and you’re thinking of using facial recognition to automate participants’ entry to your event, think again. Despite our tendency to focus mainly on Big Tech’s use of our data, the GDPR covers more than that. Every person or organisation handling personal data, including sensitive details like biometrics, falls under the regulation.


// SHUTDOWNS //

Internet shutdowns amid unrest

Internet access was cut off in several regions of Pakistan last week, while access to Twitter, Facebook, and YouTube has been entirely restricted in the wake of the arrest of Pakistan’s former Prime Minister Imran Khan.

In Sudan, ongoing conflict has led to energy shortages, which, in turn, led to prolonged internet outages.

Both shutdowns were confirmed by NetBlocks, a global internet monitoring service.


Video shot of Trudeau with the closed caption 'still saying that it doesn't want to pay journalists for the work they do'.

Trudeau slams Meta. Canada’s Prime Minister Justin Trudeau rebuked Meta for refusing to compensate publishers for news articles that appear on its platform, calling it ‘deeply irresponsible’. The day before, Google and Meta testified at the Senate’s Standing Committee on Transport and Communications hearing, urging revisions to the proposed online news bill (C-18) to avoid their departure from Canada.

The week ahead (15–21 May)

16 May: OpenAI CEO Sam Altman and IBM Chief Privacy and Trust Officer Christina Montgomery appear at a hearing of the US Senate Judiciary Subcommittee on Privacy, Technology and the Law to discuss AI governance and oversight of rules. Watch live at 10:00 EDT (14:00 UTC).

16 May: The first ministerial meeting of the new EU-India Trade and Technology Council (TTC), launched in February, takes place in Brussels today.

16–17 May: The agenda of heads of government attending the 4th Council of Europe Summit in Reykjavik, Iceland, includes AI governance.

17 May: Happy World Telecommunication and Information Society Day! The celebration marks the signing of the first International Telegraph Convention in 1865 and the creation of the Geneva-based International Telecommunication Union (ITU).

19–21 May: The G7 Summit takes place in Hiroshima, Japan, this week. As Japan’s Prime Minister announced recently, AI will also be on the agenda. (As a refresher, read our coverage in Weekly #108: ChatGPT to debut on multilateral agenda at G7 Summit).

For more events, bookmark the observatory’s calendar of global policy events.

#ReadingCorner
Sophos State of Ransomware 2023

Ransomware attacks: A sobering reality check
The latest edition of Sophos’ annual report, State of Ransomware 2023, confirms that ransomware remains a major threat, especially for target-rich, resource-poor organisations. Although ransomware attacks haven’t increased compared to the previous year, a higher number of attacks now involve encrypting data (and then stealing it). Read the report.

Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation



Digital Watch newsletter – Issue 79 – May 2023

In brief

Pentagon: The Discord leak is more serious than we think

Every so often, intelligence from US agencies and their allies is exposed in serious leaks. April’s disclosure of some 50 top-secret documents on the gaming chat service Discord is one of the most flagrant.

WikiLeaks’ publication of diplomatic cables in 2010, Edward Snowden’s revelations in 2013, and the disclosure of the National Security Agency’s and the CIA’s hacking tools in 2016 and 2017 rank among the biggest leaks of the modern era.

Outrage or a shrug? Diminishing reactions

Each new leak seems to spark less and less outrage worldwide. So when another leak of US intelligence surfaced in April on Discord (a relatively little-known social platform), it drew little interest. While sensationalism can hamper law enforcement efforts, indifference is not very helpful either.

The Discord leak was revealed on 6 April by the New York Times. Jack Teixeira, a 21-year-old airman first class in the Massachusetts Air National Guard, was behind the leak.

It was not hard for the FBI to identify him. He had uploaded the documents to an online Discord chat (a server) that he unofficially administered, and followed the FBI’s investigation into his own leak. He was charged a few days later.

Mistaken for fake news

Before long, the leaked documents were being shared on other social media platforms by users who believed they were fake. The possibility that the documents were top secret did not seem to be considered.
As CNN reported, ‘most [Discord] users shared the files initially thinking they were fake’, one Discord user said. By the time they were declared authentic, they were already on Twitter and other platforms.


Terrible timing

Although there is never a good time, this leak came at a particularly sensitive period in the conflict between Russia and Ukraine.

Even if the data was not as detailed as in previous leaks, this latest breach provided intimate details about the current situation in Ukraine, as well as intelligence on two of the USA’s closest allies: South Korea and Israel.

While Europe was largely spared, the leaked information revealed that Ukraine has European special forces on the ground, and that nearly half the tanks en route to Kyiv come from Poland and Slovenia. The collateral consequences of the leak extend to many countries.

Still in circulation

A few days after the Pentagon’s investigation was announced, the leaked documents were still accessible on Twitter and other platforms, reviving the debate over social media companies’ responsibility in matters of national security. There is no single solution to the problem of filtering content on social media, which makes follow-up harder.

Unfortunately, but predictably, leaks are inevitable, especially when classified intelligence is accessible to so many people. In 2019, 1.25 million US citizens held clearance to access the USA’s most secret information.

One possibility, then, would be for social media platforms to strengthen their content policies when it comes to leaked intelligence. If the former Twitter employee interviewed by CNN is right, ‘posting classified US military documents would probably not violate Twitter’s hacked materials policy’. Another would be for companies to strengthen their content moderation capacities. To avoid placing excessive burdens on startups or smaller platforms, these capacities should be proportionate to a platform’s user base (the framework used by the EU’s Digital Services Act is a good example).

The problem becomes more complex when illegal content is shared on platforms that use end-to-end encryption. As law enforcement agencies have repeatedly pointed out, while encryption undoubtedly plays an important role in protecting privacy, it also hampers their ability to identify, prosecute, and punish offences.

For now, we should focus on the fact that the latest leak was uploaded by a user to a public social media forum, despite the potential damage to the national security of his own country (the USA) and the risk to the citizens of a war-torn country (Ukraine). That is without doubt the greatest concern.


Barometer

Digital policy developments that made global headlines

The digital policy landscape changes daily. Here are the main developments from April. You’ll find more details in each update on the Digital Watch Observatory.

neutral

Global digital governance architecture

The G7 digital ministers will start implementing Japan’s Data Free Flow with Trust (DFFT) plan through a new body, the Institutional Arrangement for Partnership (IAP), led by the Organisation for Economic Co-operation and Development (OECD). They also discussed AI, digital infrastructure, and competition.


neutral

Sustainable development

The UN World Data Forum, held in Hangzhou, China, called for better data governance and greater collaboration between governments to achieve a sustainable future. UN Secretary-General António Guterres said that data remains an essential element of development and progress in the 21st century.


increasing

Security

The Pentagon has opened an investigation into the leak of more than 50 classified documents that ended up on the social media platform Discord (see our story above). A joint international law enforcement operation seized Genesis Market, a dark web marketplace.

The European Commission announced a EUR 1.1 billion (USD 1.2 billion) plan to strengthen the EU’s capacity to fight cyberattacks and to foster better coordination among member states.

TikTok was banned on government devices in Australia; Ireland’s National Cyber Security Centre also recommended that public officials refrain from using TikTok on their devices.
The Internet Watch Foundation’s (IWF) annual report found that images of severe child sexual abuse are on the rise.


neutral

E-commerce and the internet economy

The UK’s Competition and Markets Authority (CMA) blocked Microsoft’s acquisition of Activision Blizzard, fearing it would negatively affect the cloud gaming sector. Microsoft will appeal.

The European Commission designated 19 tech companies as very large online platforms (17) and very large online search engines (2), which will have to comply with stricter rules under the new Digital Services Act.
South Korea’s Fair Trade Commission (FTC) fined Google for unfair business practices. A group of Indian startups asked a local court to suspend Google’s new in-app billing system. In the UK, Google will allow Android developers to use alternative payment options.


neutral

Infrastructure

The EU Council and Parliament reached a provisional political agreement on the Chips Act, which aims to double the EU’s share of global chip production to 20% by 2030.


increasing

Digital rights

Governments around the world have launched investigations into OpenAI’s ChatGPT, mainly on the grounds that the company’s practices violate people’s privacy and data protection rights (see our article below).

The Indian government is considering opening up Aadhaar, the country’s digital identity system, to private entities to authenticate users’ identities.
Members of the European Parliament voted against a proposal to allow transfers of EU citizens’ personal data to the USA under the new EU–US data protection framework.


neutral

Content policy

The Cyberspace Administration of China will run a three-month national campaign to remove fake news about Chinese companies from online circulation. The aim is to allow businesses and entrepreneurs to work in a favourable climate of online public opinion.


neutral

Jurisdiction and legal issues

Brazil’s Supreme Court blocked – then reinstated – the messaging app Telegram for users in the country after the company failed to hand over data linked to a group of neo-Nazi organisations using the platform.
A Los Angeles court dismissed a damages claim brought by a Tesla driver, after the company successfully argued that its partially automated driving software was not a self-piloted system.


increasing

Emerging technologies

In the USA, the Biden administration is studying potential accountability measures for AI systems. The National Telecommunications and Information Administration’s (NTIA) call for comments runs until 10 June. A US Democratic senator introduced a bill to create a task force to review AI policy. The US Department of Homeland Security also announced a new task force to ‘lead the responsible use of AI to secure the homeland’ while defending against malicious uses of AI.

A group of 11 members of the European Parliament is urging the US president and the head of the European Commission to co-organise a high-level global summit on AI governance. The Cyberspace Administration of China (CAC) proposed new measures to regulate generative AI services. The draft is open for public comment until 10 May.
Dozens of advocacy organisations and child safety experts asked Meta to abandon its plan to allow children into its virtual reality world, Horizon Worlds, given the potential risks of harassment and privacy violations for young users.

In brief

Why authorities are investigating ChatGPT: The three main reasons

With its ability to reproduce human-like responses in text-based interactions, OpenAI’s ChatGPT has been hailed as a breakthrough in AI technology. But governments are not quite convinced. What worries them?

Privacy and data protection

First, there is the central issue of allegedly unlawful data collection: the all-too-common practice of gathering personal data without the user’s consent or knowledge.

This is one of the reasons why Italy’s data protection authority, the Garante per la Protezione dei Dati Personali, imposed a temporary ban on ChatGPT. The company has addressed most of the authority’s concerns and the software is available again in Italy, but that does not resolve every issue.

Other data protection authorities are looking into the issue, notably France’s Commission nationale de l’informatique et des libertés (CNIL), which has received at least two complaints, and Spain’s Agencia Española de Protección de Datos (AEPD). The European Data Protection Board (EDPB) has also just set up a task force whose ChatGPT-related work will consist of coordinating the positions of the European authorities.

Data protection concerns are not limited to Europe, however. The complaint filed by the Center for Artificial Intelligence and Digital Policy (CAIDP) with the US Federal Trade Commission (FTC) argues that OpenAI’s practices carry numerous privacy risks. The Office of the Privacy Commissioner of Canada is also conducting an investigation.

Unreliable

Second, there is the problem of inaccurate results. OpenAI’s ChatGPT model has been used by several companies, including Microsoft Bing, to generate text. However, as OpenAI itself confirms, the tool is not always accurate. Reliability was one of the factors behind Italy’s decision to ban ChatGPT, and it features in one of the complaints received by France’s CNIL. In its complaint to the FTC, the CAIDP also claimed that OpenAI’s practices were deceptive, as the tool is ‘highly persuasive’ even when its content is unreliable.


In Italy’s case, OpenAI told the authority it was ‘technically impossible, as of now, to rectify inaccuracies’. That’s of little reassurance, considering that these AI tools can be used in sensitive contexts such as healthcare and education. The only recourse, for now, is to give users better ways to report inaccurate information.

Children’s safety

Thirdly, there’s the issue of children’s safety and the absence of an age verification system. Both Italy and the CAIDP argued that, as things stand, children can be exposed to content that is inappropriate for their age or level of maturity.

Even though OpenAI is back in Italy after introducing an age question on ChatGPT’s sign-up form, the authority’s request for an age-based gated system still stands. OpenAI was to submit its plans by May and implement them by September. This request coincides with the EU’s efforts to improve how platforms confirm their users’ age.

As long as new AI tools keep emerging, we expect AI technologies to face continued scrutiny, particularly over their potential privacy and data protection risks. OpenAI’s response to the various demands and investigations may set a precedent for how AI companies are held accountable for their practices in the future. At the same time, there is a growing need for greater regulation and oversight of AI technologies, particularly around machine learning algorithms.

Geneva

Policy updates from International Geneva

WSIS Action Line C4: Understanding AI-powered learning: Implications for developing countries | 17 April

An event organised by the ITU and the ILO examined the impact of AI technologies on the global education ecosystem.

Focusing mostly on the issues experienced by the Global South, experts discussed how these technologies are being used in areas such as exam monitoring, lecture transcription, student success analysis, teachers’ administrative tasks, and real-time feedback on student questions.

They also discussed the added workload for teachers, who must ensure that they and their learners are proficient with the necessary tools, as well as the use and storage of personal data by the providers of AI technologies and other actors in the education system.

Solutions to these challenges must also address the existing digital skills gap and connectivity issues.

UNECE Commission’s 70th session: Digital and green transformations for sustainable development in the region | 18–19 April

The 70th session of the UN Economic Commission for Europe (UNECE) hosted ministerial-level representatives from UNECE member states for a two-day event that tackled digital and green transformation for sustainable development in Europe, the circular economy, transport, energy, climate change financing, and critical raw materials.

The event allowed participants to exchange experiences and success stories, review progress on the Commission’s activities, and consider issues related to economic integration and cooperation among countries in the region. The session emphasised the need for a green transformation to address pressing challenges related to climate change, biodiversity loss, and environmental pressures, and highlighted the potential of digital technologies for economic development, policy implementation, and natural resource management.

Girls in ICT Day 2023 | 27 April

The International Girls in ICT Day, an annual event that promotes gender equality and diversity in the tech industry, was themed Digital Skills for Life.

The global celebration was held in Zimbabwe as part of the Transform Africa Summit 2023, while other regions held their own events and celebrations.

The event was instituted by the ITU in 2011, and it is now celebrated worldwide. Governments, businesses, academic institutions, UN agencies, and NGOs support the event, giving girls opportunities to learn about ICT, meet role models and mentors, and explore different career paths in the industry.

To date, the event has hosted over 11,400 activities in 171 countries, with more than 377,000 girls and young women participating.

Upcoming

What to watch for: Global digital policy events in May

10–12 May 2023 | Intergovernmental Group of Experts on E-commerce and the Digital Economy (Geneva and online)

UNCTAD’s group of experts on e-commerce and the digital economy meets annually to discuss ways of supporting developing countries to engage in and benefit from the evolving digital economy and to narrow the digital divide. The meeting has two substantive agenda items: how to make data work for the 2030 Agenda for Sustainable Development, and the Working Group on Measuring E-commerce and the Digital Economy.

19–21 May 2023 | G7 Hiroshima Summit 2023 (Hiroshima, Japan)

The leaders of the Group of Seven advanced economies, along with the presidents of the European Council and the European Commission, convene annually to discuss crucial global policy issues. During Japan’s 2023 presidency, Japanese Prime Minister Fumio Kishida identified several priorities for the summit, including the global economy, energy and food security, nuclear disarmament, economic security, climate change, global health, and development. AI tools will also be on the agenda.

24–26 May 2023 | 16th International CPDP conference (Brussels and online)

The upcoming Computers, Privacy and Data Protection (CPDP) conference, themed ‘Ideas That Drive Our Digital World’, will focus on emerging issues such as AI governance and ethics, safeguarding children’s rights in the algorithmic age, and developing a sustainable EU-US data transfer framework. Every year, the conference brings together experts from diverse fields, including academia, law, industry, and civil society, to foster discussion on privacy and data protection.

29–31 May 2023 | GLOBSEC 2023 Bratislava Forum (Bratislava, Slovakia)

The 18th edition of the Bratislava Forum will bring together high-level representatives from various sectors to tackle the challenges shaping the changing global landscape across four main areas: defence and security, geopolitics, democracy and resilience, and economy and business. The three-day forum will feature more than 100 speakers and over 40 sessions.

30 May–2 June 2023 | CyCon 2023 (Tallinn, Estonia)

The NATO Cooperative Cyber Defence Centre of Excellence will host CyCon 2023, an annual conference that tackles pressing cybersecurity issues from legal, technological, strategic, and military perspectives. Themed ‘Meeting Reality’, this year’s event will bring together experts from government, military, and industry to address policy and legal frameworks, game-changing technologies, cyber conflict assumptions, the Russo-Ukrainian conflict, and AI use cases in cybersecurity.

DW Weekly #110 – 8 May 2023


Dear all,

Policymakers were quite busy last week, proposing new laws, new strategies, and holding new consultations on laws and strategies. Actors reacting to some of these developments did not mince their words. But first, updates from the world of generative AI, as has become customary these days.

Let’s get started.
Stephanie and the Digital Watch team


// HIGHLIGHT //

The US White House’s approach to regulating AI:
Innovate, tackle risks, repeat

Generative AI tools, like ChatGPT, have reignited one of the most prominent dilemmas policymakers face: How to regulate this emerging technology without hindering innovation. The world’s three AI hotspots – the USA, the EU, and China – are each known for their distinct approaches to regulation: ranging from a hands-off attitude to strict regulation and enforcement frameworks.

Of the three, the USA has always favoured an innovation-first approach. It was this approach that helped a handful of American companies become the massive tech behemoths they are. But fast forward to 2023, a year hugely shaped by generative AI: today’s ambience differs significantly from the atmosphere that enveloped Big Tech companies when they were just starting up.

US policymakers and top government officials have been sounding alarm bells over these risks in recent weeks. AI experts called for a moratorium on further development of generative AI tools: ‘We do not have the guardrails in place, the laws that we need, the public education, or the expertise in government to manage the consequences of the rapid changes that are now taking place,’ the chair of the research institute Center for AI and Digital Policy told the US Congress recently.

Despite all these warning bells, two developments last week signalled that the White House would continue favouring a guarded innovation-first approach as long as the risks are tackled.

The first was a high-level meeting between US Vice President Kamala Harris and the CEOs of Alphabet/Google, Anthropic, Microsoft, and OpenAI. According to the invitation the CEOs received, the aim was to have ‘a frank discussion of the risks we each see in current and near-term AI development, actions to mitigate those risks, and other ways we can work together’.

And the meeting, at which President Joe Biden also made a brief appearance, was exactly that. Harris told the CEOs that they needed to make sure their products were safe and secure before deploying them to the public, that they needed to mitigate the risks (to privacy, democratic values, and jobs), and that they needed to set an example for others, consistent with the US’ voluntary frameworks on AI. 

In essence, the White House signalled that, for now, it has decided to trust that the companies will act responsibly, leaving it to Congress to figure out how tech companies can be held responsible. During a press call the day before, in reply to whether the administration ‘trust(ed) these companies to do that proactively given the history that we’ve seen in Silicon Valley with other technologies like social media’, the reply by senior administration officials was: 

‘Clearly there will be things that we are doing and will continue to do in government, but we do think that these companies have an important responsibility. And many of them have spoken to their responsibilities. And, you know, part of what we want to do is make sure we have a conversation about how they’re going to fulfil those pledges.’

Tweeted photo from US President Biden shows him visiting an AI meeting led by Vice President Harris. The photo carries the quote, ‘What you’re doing has enormous potential…’

The second was the announcement of new measures ‘to promote responsible AI innovation’: funding to launch new research institutes; upcoming guidance on the use of AI by the federal government; and, interestingly, an endorsement of a red-teaming event at DEFCON 31 that will bring together AI experts, researchers, and students to dissect popular generative AI tools for vulnerabilities.

Why would the White House support a red-teaming event? First, because it’s a practical way of reducing the number of vulnerabilities and, therefore, limiting risks. Hackers will be able to experiment on jailbroken versions of the software, confidentially report vulnerabilities, and the companies will be given time to fix their software.

Second, it opens up the software to the scrutiny of the (albeit limited) public. Unless people know what’s really under the bonnet, they can’t report issues or help fix it.

Third, it’s low-hanging fruit for any approach that favours giving companies a free hand to innovate for now and taking other steps that do not involve heavy-handed regulation.

The question is not whether these steps will be enough. They’re not, as new AI tools will continue to be developed. Rather, it’s whether this guarded trust is misplaced and whether policymakers have learned from the past. As Federal Trade Commission chair Lina Khan wrote, ‘The trajectory of the Web 2.0 era was not inevitable — it was instead shaped by a broad range of policy choices. And we now face another moment of choice.’


Digital policy roundup (1–8 May)
// AI //

UK’s competition authority launches review of AI models

The UK’s Competition and Markets Authority (CMA) has kickstarted an initial review of how AI models impact competition and consumers. The focus is strictly on ensuring that AI models neither harm consumer welfare nor restrict competition in the market.

The CMA has called for public input (till 3 June). Depending on the findings, the CMA may consider regulatory interventions to address any anti-competitive issues arising from the use of AI models.

Why is this relevant? Because it contrasts with the steps the CMA’s counterpart across the pond – the Federal Trade Commission – pledged to take.


// CABLES //

NATO warns of potential Russian threat to undersea pipelines and cables

NATO’s intelligence chief David Cattler has warned that there is a ‘significant risk’ that Russia could attack critical infrastructure in Europe or North America, such as internet cables and gas pipelines, as part of its conflict with the West over Ukraine.

The ability to undermine the security of Western banking, energy, and internet networks is becoming a tremendous strategic advantage for NATO’s opponents, Cattler said.

Why is this relevant? Apart from the warning itself, the comments came a day before NATO’s Secretary General Jens Stoltenberg met with industry leaders to discuss the security of critical undersea infrastructure. Undoubtedly, the security of the Nord Stream pipeline’s surrounding region was on the agenda.


// SENDER-PAYS //

Stakeholders oppose fair share fee

A coalition of stakeholders has come together to publicly caution against the potential introduction of a fair share fee, which content providers would be obliged to pay to telecom companies.

The group, which includes NGOs, cloud associations, and broadband service providers, was reacting to a consultation launched by the European Commission in February. It’s not the consultation itself they’re worried about, but any misleading conclusions the consultation might lead to.

Why is this relevant? First, the signatories to the statement think there’s ‘no evidence that a real problem’ exists. Second, they say the fee would be a potential violation of the net neutrality principle – a principle that the EU has staunchly protected.


// MEDIA //

Google and Meta voice opposition to Canada’s online news bill

The battle over Canada’s proposed online news bill continues. In last week’s hearing by the Senate’s Standing Committee on Transport and Communications, both Google and Meta said that they would have to withdraw from Canada should the proposed bill pass as it stands now.

One of the main issues is that the bill obliges companies to pay news publishers for linking to their sites, ‘making us lose money with every click’, according to Google’s vice-president for news, Richard Gingras.

Why is this relevant? Because Google and Meta have repeated their threat that they’re ready to leave if the bill isn’t revised.


// PRIVACY //

EU’s top court rules on two GDPR cases 

In the first – Case C-487/21 – the Court of Justice of the EU clarified that the right of access under the GDPR entitles individuals to obtain a faithful reproduction of their personal data. That can mean entire documents, if there’s personal data on each page.

In the second – Case C-300/21 – the court confirmed that the right to compensation under the GDPR is subject to three conditions: an infringement of the GDPR, material or non-material damage resulting from the infringement, and a causal link between the damage and the infringement. A violation of the GDPR alone does not automatically entitle the claimant to compensation; the extent of compensation is left to national laws to determine.
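
For readers who like the rule spelled out, the court’s three conditions are cumulative – a simple conjunction. Below is a minimal illustrative sketch (our own naming, not the court’s language, and certainly not legal advice):

```python
def may_claim_compensation(infringement: bool, damage: bool, causal_link: bool) -> bool:
    """Case C-300/21: all three conditions must hold together."""
    return infringement and damage and causal_link

# An infringement alone, with no resulting damage, is not enough.
assert may_claim_compensation(True, True, True)
assert not may_claim_compensation(True, False, False)
```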




// IPR //

EU Commission releases non-binding recommendation to combat online piracy of live events

The European Commission has opted for a non-binding strategy to combat the piracy of live events, generating dissatisfaction among both lawmakers and rightsholders. 

The measure outlines several recommendations for national authorities, rightsholders, and intermediary service providers to tackle the issue of live event piracy more effectively, but its non-binding nature falls short of what is required to address the issue.

Why is this relevant? Because the European Commission went ahead with its plans despite not one but two complaints from a group of parliamentarians calling for a legislative instrument to counter online piracy.


// AUTONOMOUS CARS //

China publishes draft standards for smart vehicles 

China’s Ministry of Industry and Information Technology has published a series of draft technical standards for autonomous vehicles (Chinese) – developed within a National Automobile Standardisation Technical Committee – that address cybersecurity and data protection issues. The public can comment till 5 July.

One of the standards requires that data generated by autonomous vehicles be stored locally. The government wants to ensure that any sensitive data stays within China’s borders.

Another standard will require autonomous vehicles to be equipped with data storage equipment to allow data to be retrieved and analysed in the case of an accident. Reminds us of flight data recorder black boxes.
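
How might such a vehicle ‘black box’ work? Conceptually, it’s a rolling buffer that retains only the most recent telemetry, so the moments before a crash can be retrieved. Here’s a minimal sketch with invented field names – the draft standard itself, which we haven’t seen in English, defines its own data set:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class TelemetryFrame:
    # Hypothetical fields; the actual standard specifies its own.
    timestamp_ms: int
    speed_kmh: float
    steering_angle_deg: float
    braking: bool

class EventDataRecorder:
    """Keeps a fixed window of recent frames, like a flight data recorder."""

    def __init__(self, capacity: int = 3000):  # e.g. five minutes at 10 Hz
        self.frames = deque(maxlen=capacity)

    def record(self, frame: TelemetryFrame) -> None:
        self.frames.append(frame)  # the oldest frame drops off automatically

    def dump_for_investigation(self) -> list:
        # After an accident, investigators retrieve the retained window.
        return list(self.frames)
```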


The week ahead (8–14 May)

9–12 May: The Women in Tech Global Conference, in hybrid format, will bring women active in the technology sector together to discuss their perspectives on tech leadership, gender parity, digital economy, and more.

10 May: Last day for feedback on two open consultations: The European Commission’s single charger draft rules and China’s proposed regulation for generative AI tools (Chinese).

10–12 May: UNCTAD’s Intergovernmental Group of Experts on E-commerce and the Digital Economy meets in Geneva for its sixth session.

11 May: In the European Parliament, the joint committee (IMCO/LIBE) vote on the report on the Artificial Intelligence Act takes place.

For more events, bookmark the observatory’s calendar of global policy events.


#ReadingCorner
AI pioneer Geoffrey Hinton.

AI an urgent threat, says AI pioneer

AI pioneer Geoffrey Hinton, who turned 75 in December and who recently resigned from Google, tells news agency Reuters that AI could pose a ‘more urgent’ threat to humanity than climate change. In another interview, with the Guardian, he says there’s no simple solution.



Starlink arrives in Africa, but South Africa left behind

Starlink, the satellite internet constellation developed by Elon Musk’s SpaceX, has started operating in Nigeria, Mozambique, Rwanda, and Mauritius over the past few months, with 19 more African countries scheduled for launch this year and next. But South Africa is notably missing from the list. Could this be due to South Africa’s foreign ownership rule, which grants licences only to companies with at least 30% South African ownership? A Ventures Africa contributor investigates.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation


DW Weekly #109 – 1 May 2023


Dear readers,

It’s more AI this week, after the G7 wrapped up a weekend-long ministerial meeting, with data flows joining the ranks on the ministers’ agenda. Microsoft is far from impressed by the UK’s decision to block its acquisition of Activision Blizzard, while the European Commission announces the names of the very large platforms and search engines that will face tougher content regulation and consumer protection rules. The European Parliament reached a political deal on the AI Act, but we’ll cover that after the vote in plenary.

Stephanie and the Digital Watch team


// HIGHLIGHT //

G7 countries take on AI, data flows

As we anticipated last week, the G7 digital ministers wrapped up their weekend meeting in Japan with a focus on AI and data flows, setting the stage for new developments in some areas and a bit of a letdown in others. The good parts first.

If AI is queen, data is still king 

The digital ministers of seven of the world’s largest economies – Canada, France, Germany, Italy, Japan, the UK, and the USA – will start implementing Japan’s plan for a Data Free Flow with Trust (DFFT).

In essence, this approach will try to reconcile the need for data to flow freely with the need to keep personal data safe and uphold people’s right to privacy. Although the group of seven have strong economies in common, the USA’s approach to data protection is at odds with stronger European safeguards. Max Schrems can tell us a thing or two about this.

The new plans outlined in the G7 digital ministers’ declaration involve setting up a new entity in the coming months, called an Institutional Arrangement for Partnership (IAP), and choosing the OECD to lead the IAP’s work. Although the OECD has only 38 members, its recent success in getting over 140 countries to agree on a new global digital tax deal shows that it’s capable of navigating complex terrain. Data flows are clearly a politically sensitive, highly charged issue.

The DFFT was first proposed by Japan’s former prime minister Shinzo Abe, debuted at the World Economic Forum annual meeting in Davos, and was later endorsed by the G20. Since Japan is chairing the G7 this year, it will want to see the IAP up and running by the end of 2023. Good news for DFFT supporters: The pressure’s on for the IAP stream. 

Generative AI: G7 digital ministers hedge their bets 

While things will move quickly on the data flows front, the ministerial declaration is somewhat of a letdown when it comes to regulating generative AI tools such as ChatGPT.

The group of seven did acknowledge the popularity that generative AI tools have gained quickly and the need to take stock of benefits and challenges. But the best the ministers could offer was a vague, non-committal plan:   

‘We plan to convene future G7 discussions on generative AI which could include topics such as… (transparency, disinformation)… These discussions should harness expertise and leverage international organisations such as the OECD to consider analysis on the impact of policy developments and GPAI to conduct relevant practical projects.’

What they did commit to is ‘to support interoperable tools for trustworthy AI’ – that is, developing standards and promoting dialogue on interoperability between governance frameworks, so that trustworthy-AI tools can work together across jurisdictions.

Meanwhile, there’s still a possibility that the G7 heads of state, meeting later this month in Hiroshima, will take more concrete steps to tackle the privacy and security concerns of generative AI. 

G7 digital ministers and other delegates during the first day of a two-day meeting in Japan, on 29 April 2023. Credit: Kyodo

Digital policy roundup (24 April–1 May)
// AI //

Italy lifts ban on ChatGPT after OpenAI introduces privacy improvements

The Italian data protection regulator has confirmed it has allowed OpenAI’s ChatGPT to resume operations in Italy after the company implemented several privacy-enhancing changes. 

The Italian Garante per la Protezione dei Dati Personali temporarily blocked the AI software in response to four concerns: a data breach (which the company said was a bug), unlawful data collection, inaccurate results, and the lack of any age verification checks. OpenAI has now fulfilled most of the regulator’s requests. It added information on how users’ data is collected and used and allows users to opt out of data processing that trains the algorithmic model.

What’s next? There are still two requests from the regulator that OpenAI must implement in Italy: It needs to implement an age-based gated system to keep children safe from accessing inappropriate content (this will serve as a testbed for age-verification systems), and it needs to launch a publicity campaign to inform users about their right to opt out of data processing for training the model. 

Why the emphasis on a publicity campaign for users? Because there’s no opt-in for users to consent to data processing for training algorithms (OpenAI will rely on legitimate interest). So should users object, their recourse is to submit an opt-out form to OpenAI. 

Meanwhile, scrutiny by the EU’s ad hoc task force and other data protection watchdogs continues.

USA: A new bill to create a task force to review AI policies

Democratic Senator Michael Bennet has introduced a bill that would create a task force to review AI policy and make recommendations; the task force would terminate its operations after 18 months.

Why is this relevant? However good the idea behind it, it could take longer for the task force to materialise than for it to complete its job.


// ANTITRUST //

UK competition watchdog blocks Microsoft’s purchase of Activision Blizzard

The UK’s Competition and Markets Authority (CMA) has blocked Microsoft’s acquisition of Activision Blizzard, valued at USD68.7 billion (EUR62.5 billion), over concerns that it would negatively affect the cloud gaming industry.

We might have known this would happen: In February, the watchdog said the merger would harm competition and proposed several remedies. Even though Microsoft’s reassurances seemed promising, they did not persuade the watchdog to revise its initial position, and a war of words ensued.

Why is this relevant? It’s relevant because of what happens next. An unsuccessful appeal by Microsoft could influence the decisions of the US Federal Trade Commission and the European Commission. If past experience is anything to go by, a second rejection by the European Commission will convince the FTC to block the merger as well. 

Different Activision Blizzard game characters pose for a group photo.
Some of the characters in Activision Blizzard’s games. (Credit: Activision Blizzard)

// DSA //

Digital Services Act: European Commission identifies 19 very large tech companies

The European Commission has designated 19 tech companies under two categories – very large online platforms (VLOPs) and very large online search engines (VLOSEs) – which will need to comply with stricter rules under the Digital Services Act.

These companies have more than 45 million monthly active users, according to the data the companies themselves had to disclose last February.

What happens next? The companies must comply with the new rules within four months. The rules include no ad targeting based on a user’s sensitive information (such as political opinion), tougher measures to curb the spread of illegal content, and a requirement to carry out their first risk assessment.
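
The designation rule itself is mechanical: a service counts as ‘very large’ once it averages 45 million monthly active users in the EU, roughly 10% of the EU’s population. A minimal sketch of the threshold test, using made-up figures rather than the companies’ actual disclosures:

```python
VLOP_THRESHOLD = 45_000_000  # the DSA's bar for VLOPs and VLOSEs

def is_very_large(monthly_active_eu_users: int) -> bool:
    return monthly_active_eu_users >= VLOP_THRESHOLD

# Hypothetical self-reported figures:
disclosures = {"ExampleTube": 90_000_000, "NicheForum": 2_500_000}
designated = [name for name, users in disclosures.items() if is_very_large(users)]
print(designated)  # ['ExampleTube']
```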


The 17 very large online platforms are: Alibaba AliExpress, Amazon Store, Apple AppStore, Booking.com, Facebook, Google Play, Google Maps, Google Shopping, Instagram, LinkedIn, Pinterest, Snapchat, TikTok, Twitter, Wikipedia, YouTube, and Zalando.

The 2 very large online search engines are: Bing and Google Search.


// CONTENT POLICY //

China to root out false news about Chinese businesses

The Central Cyberspace Administration of China will carry out a three-month nationwide campaign to remove fake news about Chinese businesses from online circulation. The aim is to allow ‘enterprises and entrepreneurs’ to work in ‘a good atmosphere of online public opinion’.

Why is it relevant? There’s nothing new about China’s ‘clean-up cyberspace’ campaigns (known as Qinglang) – these campaigns actually started in 2016. But the fact that the government wants to improve local businesses’ reputation shows its intent to promote its domestic market.

Brazil blocks (and reinstates) Telegram over non-disclosure of personal data 

Brazil’s Supreme Court temporarily suspended access to messaging app Telegram for users in the country after the company failed to comply with an order to provide data linked to a group of neo-Nazi organisations using the platform. 

Telegram CEO Pavel Durov said that the data requested by the court ‘is technologically impossible for us to obtain’, as the users had already left the platform. But the court disagreed. 

The court has lifted its suspension, but retained the non-compliance fine of one million reais (USD198,000 or EUR182,000) per day until the company provides the requested data.

Why is it relevant? Because the same thing happened to Telegram last year and to WhatsApp in previous years. 

Telegram CEO’s message, April 2023



// CHILDREN //

Out of control? Severe child sexual abuse imagery on the rise

The news that no one wants to hear: The number of images depicting child sexual abuse classified as severe has more than doubled since 2020. 

The annual report of the Internet Watch Foundation (IWF), a non-profit that works to eliminate abusive content from the internet, reveals more harrowing trends. For instance, content involving children aged 7-10 increased by 60%, with most victims being girls. Some of the most extreme abuse is committed against very young children, including babies.

Why is it relevant? 

First, it comes out at the same time as the results of a two-year investigation by The Guardian, which found that tech company Meta is struggling to prevent criminals from using its platforms, Facebook and Instagram, for child sexual abuse.

Second, because it strengthens the call, reiterated by law enforcement agencies a fortnight ago (and by the IWF in its report), for tech companies to prioritise child safety over end-to-end encryption. The agencies say that encryption shouldn’t come at the expense of diminishing companies’ abilities to identify abusive content.


The week ahead (1–7 May)

1–4 May: This year’s Web Summit, which gathers leaders and start-ups from the tech and software industries, is taking place in Brazil this week. 

3 May: It’s World Press Freedom Day! To celebrate the 30th anniversary of this international day, UNESCO is holding a special event in New York on 2 May, which will also be livestreamed.

3 May: Last day to provide feedback on the EU’s initiative on virtual worlds: A head start towards the next technological transition

3–4 May: The 6G Global Summit is happening in Bahrain (and online).

3–5 May: This year’s forum on Science, Technology and Innovation for the Sustainable Development Goals (STI Forum), taking place in New York, is about accelerating the post-Covid-19 recovery.

5 May: A stakeholder workshop organised by the EU will discuss how to ensure effective compliance with the data-related rules in the Digital Markets Act. It’s being held in Brussels and online.


Stephanie Borg Psaila
Director of Digital Policy, DiploFoundation


Digital Watch newsletter – Issue 79 – May 2023

Pentagon: The leak on Discord is more significant than we think

From time to time, intelligence from US agencies and their allies is exposed in major leaks. April’s leak of 50-or-so top secret documents on the gaming chat service Discord was one of the most egregious.

The release of diplomatic cables by WikiLeaks in 2010, the 2013 disclosures by Edward Snowden, and the disclosure of the National Security Agency and CIA’s hacking tools in 2016 and 2017 rank among the world’s biggest modern-time leaks.

Outrage or shrug? Diminishing response

Every new leak seems to generate less and less outrage on a global level. So when another US intelligence leak surfaced in April on Discord (a relatively unknown social platform), it hardly caused a blip on the radar. While sensationalism can hinder law enforcement’s efforts, disinterest isn’t exactly helpful either.

The Discord leak was revealed on 6 April by the New York Times. Behind the leak was 21-year-old Jack Teixeira, an airman first class in the Massachusetts Air National Guard.

It wasn’t difficult for the FBI to identify him. He uploaded the documents to an online community on Discord (a server) that he was unofficially administering, and tracked the FBI investigation into his own leak. He was charged a few days later.

Mistaken for fake news

In that short time, the leaked documents were spread to other social media platforms by users who thought the documents were fake. The possibility of the documents being top secret didn’t seem to register.

As CNN reported: ‘Most [Discord] users distributed the files because they thought they were fake at first,’ one Discord user said. ‘By the time they were confirmed as legitimate, they were already all over Twitter and other platforms.’

A Google Trends graph shows how people’s interest in searching for information related to leaks has dwindled over time

Very bad timing

Not that there is ever a good time, but this leak arrived at a particularly sensitive moment in Russia’s ongoing conflict against Ukraine. 

Although the data was not as comprehensive as in previous leaks, this latest breach provided intimate details about the current situation in Ukraine, as well as intelligence on two of the US’s closest allies: South Korea and Israel. 

While Europe was mostly spared, the leaked information revealed that Ukraine has European special forces on the ground and that almost half of the tanks en route to Kyiv are from Poland and Slovenia. The collateral consequences of the leak extend to many countries.

Still out there

Days after the Pentagon announced its investigation, the leaked documents could still be accessed on Twitter and other platforms, prompting a debate about the responsibility of social media companies in cases involving national security. There’s no one solution that can solve the content moderation issues on social media, complicating the follow-up.

Unfortunately, but unsurprisingly, leaks are bound to happen, especially when classified information is accessible to so many people. In 2019, there were 1.25 million US citizens with clearance to access the USA’s top-secret information.

One solution, therefore, is for social media platforms to strengthen their content policies when it concerns leaks of intelligence information. If the former Twitter employee interviewed by CNN is correct, ‘the posting of classified US military documents would likely not be a violation of Twitter’s hacked materials policy’. Another possibility is for companies to strengthen their content moderation capabilities. To avoid imposing impossible burdens on start-up or small platforms, capabilities should be matched to the size of a platform’s user base (the framework used by the EU’s Digital Services Act is a good example).

The issue becomes more complex when illegal material is shared on platforms that use end-to-end encryption. As law enforcement agencies have emphasised time and time again, while there’s no doubt that encryption plays an important role in safeguarding privacy, it also hampers their ability to identify, pursue, and prosecute violations.  

For now, we should focus on the fact that the latest leak was uploaded by a user to a public forum on social media, despite the potential damage to the national security of his own country (the USA) and the risk to citizens of a war-torn country (Ukraine). That is undoubtedly the biggest concern.


Digital policy developments that made global headlines

The digital policy landscape changes daily, so here are all the main developments from April. There’s more detail in each update on the Digital Watch Observatory.        

Global digital governance architecture

same relevance

G7 digital ministers will start implementing Japan’s plan for a Data Free Flow with Trust (DFFT) through a new body, the Institutional Arrangement for Partnership (IAP), led by the Organisation for Economic Co-operation and Development (OECD). They also discussed AI, digital infrastructure, and competition.

Sustainable development

same relevance

The UN World Data Forum, held in Hangzhou, China, called for better data governance and increased collaboration between governments to achieve a sustainable future. UN Secretary-General António Guterres said that data remains a critical component of development and progress in the 21st century.

Security

increasing relevance

The Pentagon started investigating the leak of over 50 classified documents that turned up on the social media platform Discord. See our story above. A joint international law enforcement operation seized the Genesis Market, a dark web market.

The European Commission announced a EUR1.1 billion (USD1.2 billion) plan to strengthen the EU’s capabilities to fend off attacks and support more coordination among member states. 

TikTok was banned on government devices in Australia; the Irish National Cyber Security Centre also recommended that government officials refrain from using TikTok on devices.

The annual report of the Internet Watch Foundation (IWF) revealed that severe child sexual abuse imagery is on the rise.

E-commerce and the internet economy

same relevance

The UK’s Competition and Markets Authority (CMA) blocked Microsoft’s acquisition of Activision Blizzard over concerns that it would negatively affect the cloud gaming industry. Microsoft will appeal.

The European Commission designated 19 tech companies as very large online platforms (VLOPs) (17) and very large online search engines (VLOSEs) (2), which will need to comply with stricter rules under the new Digital Services Act. 

South Korea’s Fair Trade Commission (FTC) fined Google for unfair business practices. A group of Indian start-ups asked a local court to suspend Google’s new in-app billing fee system. In the UK, Google will let Android developers use alternate payment options.

Infrastructure

same relevance

The EU’s Council and Parliament reached a political agreement on the new Chips Act, which aims to double the EU’s share of global chip production to 20% by 2030.

Digital rights

increasing relevance

Governments around the world launched investigations into OpenAI’s ChatGPT, principally over concerns that the company’s practices violated people’s privacy and data protection rights. See our main story.

The Indian government is considering opening up Aadhaar, the country’s digital identity system, to private entities to authenticate users’ identities. 

MEPs voted against a proposal to allow personal data transfers of EU citizens to the USA under the new EU-US Data Privacy Framework.

Content policy

same relevance

The Central Cyberspace Administration of China will carry out a three-month nationwide campaign to remove fake news about Chinese businesses from online circulation. The aim is to allow enterprises and entrepreneurs to work in a good atmosphere of online public opinion.

Jurisdiction and legal issues

same relevance

Brazil’s Supreme Court blocked – and then reinstated – messaging app Telegram for users in the country after the company failed to provide data linked to a group of neo-Nazi organisations using the platform. 

A Los Angeles court dismissed a claim for damages by a Tesla driver, after the company successfully argued that the partially automated driving software was not a self-piloted system.

New technologies

increasing relevance

In the USA, the Biden administration is studying potential accountability measures for AI systems. The National Telecommunications and Information Administration’s (NTIA) call for feedback runs till 10 June. A US Democratic senator has introduced a bill that would create a task force to review AI policy. The US Department of Homeland Security also announced a new task force to ‘lead in the responsible use of AI to secure the homeland’ while defending against malicious use of AI.

A group of 11 members of the European Parliament are urging the US President and European Commission chief to co-organise a high-level global summit on AI governance. 

The Cyberspace Administration of China (CAC) proposed new measures for regulating generative AI services. The draft is open for public comments until 10 May.

Dozens of advocacy organisations and children’s safety experts called on Meta to halt its plans to allow kids into its virtual reality world, Horizon Worlds, due to potential risks of harassment and privacy violations for young users.


Why authorities are investigating ChatGPT: The top 3 reasons

With its ability to replicate human-like responses in text-based interactions, OpenAI’s ChatGPT has been hailed as a breakthrough in AI technology. But governments aren’t entirely sold on it. So what’s worrying them?

Privacy and data protection

Firstly, there’s the central issue of allegedly unlawful data collection, the all-too-common practice of collecting personal data without the user’s consent or knowledge. 

This is one of the reasons why the Italian privacy watchdog, the Garante per la Protezione dei Dati Personali, imposed a temporary ban on ChatGPT. The company addressed most of the authority’s concerns, and the software is now available in Italy again, but that doesn’t solve all the problems.

The same concern is being tackled by other data protection authorities, including France’s Commission nationale de l’informatique et des libertés (CNIL), which received at least two complaints, and Spain’s Agencia Española de Protección de Datos (AEPD). Then there’s the European Data Protection Board (EDPB)’s newly launched task force, whose ChatGPT-related work will involve coordinating the positions of the other European authorities.

Concerns around data protection have not been limited to Europe, however. The complaint by the Center for Artificial Intelligence and Digital Policy (CAIDP) to the US Federal Trade Commission (FTC) argued that OpenAI’s practices contain numerous privacy risks. Canada’s Office of the Privacy Commissioner is also investigating.

Unreliable 

Secondly, there’s the issue of inaccurate results. OpenAI’s ChatGPT model has been used by several companies, including Microsoft Bing, to generate text. However, as OpenAI itself confirms, the tool is not always accurate. Reliability was one of the issues behind Italy’s decision to ban ChatGPT, and features in one of the complaints received by the French CNIL. The CAIDP’s complaint to the FTC also argued that OpenAI’s practices were deceptive since the tool is ‘highly persuasive’, even if the content is unreliable. In Italy’s case, OpenAI told the authority it was ‘technically impossible, as of now, to rectify inaccuracies’. That’s of little reassurance, considering how these AI tools can be used in sensitive contexts such as healthcare and education. The only recourse, for now, is to provide users with better ways to report inaccurate information.


Children’s safety

Thirdly, there’s the issue of children’s safety and the absence of an age verification system. Both Italy and the CAIDP argued that, as things stand, children can be exposed to content that is inappropriate for their age or level of maturity.

Even though OpenAI has returned to Italy after introducing an age question on ChatGPT’s sign-up form, the authority’s request for an age-based gated system still stands. OpenAI must submit its plans by May and implement them by September. This request coincides with efforts by the EU to improve how platforms confirm their users’ age. 

As long as new AI tools keep emerging, we expect to see continued scrutiny of AI technologies, particularly around their potential privacy and data protection risks. OpenAI’s response to the various demands and investigations may set a precedent for how AI companies are held accountable for their practices in the future. At the same time, there is a growing need for greater regulation and oversight of AI technologies, particularly around machine learning algorithms.

Policy updates from International Geneva

WSIS Action Line C4: Understanding AI-powered learning: Implications for developing countries | 17 April

An ITU and the ILO event examined the impact of AI technologies on the global education ecosystem. 

Focusing mostly on the issues experienced by the Global South, experts discussed how these technologies were being used in areas such as exam monitoring, faculty lecture transcriptions, student success analyses, teachers’ administrative tasks, and real-time feedback to student questions. 

They also talked about the added workload for teachers to ensure that they and their learners are proficient with the necessary tools, as well as the use and storage of personal data by the providers of AI technologies and others within the educational system. 

Solutions to these challenges must also address the existing digital skills gap and connectivity issues.


UNECE Commission’s 70th Session: Digital and Green Transformations for Sustainable Development in the Region | 18–19 April

The 70th session of the UN Economic Commission for Europe (UNECE) Commission hosted ministerial-level representatives from UNECE member states for a two-day event that tackled digital and green transformation for sustainable development in Europe, the circular economy, transport, energy, financing for climate change, and critical raw materials. 

The event allowed participants to exchange experiences and success stories, review progress on the Commission’s activities, and consider issues related to economic integration and cooperation among countries in the region. The session emphasised the need for a green transformation to address pressing challenges related to climate change, biodiversity loss, and environmental pressures, and highlighted the potential of digital technologies for economic development, policy implementation, and natural resource management.


Girls in ICT Day 2023 | 27 April

The International Girls in ICT Day, an annual event that promotes gender equality and diversity in the tech industry, was themed Digital Skills for Life. 

The global celebration was held in Zimbabwe as part of the Transform Africa Summit 2023, while other regions conducted their own events and celebrations.

The event was instituted by ITU in 2011, and it is now celebrated worldwide. Governments, businesses, academic institutions, UN agencies, and NGOs support the event, providing girls with opportunities to learn about ICT, meet role models and mentors, and explore different career paths in the industry. 

To date, the event has hosted over 11,400 activities held in 171 countries, with more than 377,000 girls and young women participating.

What to watch for: Global digital policy events in May

10–12 May 2023 | Intergovernmental Group of Experts on E-commerce and the Digital Economy (Geneva and online) 

UNCTAD’s group of experts on e-commerce and the digital economy meets annually to discuss ways of supporting developing countries to engage in and benefit from the evolving digital economy and narrowing the digital divide. The meeting has two substantive agenda items: How to make data work for the 2030 Agenda for Sustainable Development and the Working Group on Measuring E-commerce and the Digital Economy.


19–21 May 2023 | G7 Hiroshima Summit 2023 (Hiroshima, Japan)

The leaders of the Group of Seven advanced economies, along with the presidents of the European Council and the European Commission, convene annually to discuss crucial global policy issues. During Japan’s presidency in 2023, Japanese Prime Minister Fumio Kishida identified several priorities for the summit, including the global economy, energy and food security, nuclear disarmament, economic security, climate change, global health, and development. AI tools will also be on the agenda.


24–26 May 2023 | 16th International CPDP conference (Brussels and online) 

The upcoming Computers, Privacy, and Data Protection (CPDP) conference, themed ‘Ideas That Drive Our Digital World’, will focus on emerging issues such as AI governance and ethics, safeguarding children’s rights in the algorithmic age, and developing a sustainable EU-US data transfer framework. Every year, the conference brings together experts from diverse fields, including academia, law, industry, and civil society, to foster discussion on privacy and data protection.


29–31 May 2023 | GLOBSEC 2023 Bratislava Forum (Bratislava, Slovakia)

The 18th edition of the Bratislava Forum will bring together high-level representatives from various sectors to tackle the challenges shaping the changing global landscape across four main areas: defence and security, geopolitics, democracy and resilience, and economy and business. The three-day forum will feature more than 100 speakers and over 40 sessions.


30 May–2 June 2023 | CyCon 2023 (Tallinn, Estonia)

The NATO Cooperative Cyber Defence Centre of Excellence will host CyCon 2023, an annual conference that tackles pressing cybersecurity issues from legal, technological, strategic, and military perspectives. Themed ‘Meeting Reality’, this year’s event will bring together experts from government, military, and industry to address policy and legal frameworks, game-changing technologies, cyber conflict assumptions, the Russo-Ukrainian conflict, and AI use cases in cybersecurity.

The Digital Watch observatory maintains a live calendar of upcoming and past events.


DW Weekly #108 – 24 April 2023


Dear all,

Policymakers have been particularly busy this week. We cover most of the newly proposed regulations below, together with a trend that’s picking up: calls for multilateral cooperation on AI regulation. Plus: Cybersecurity (and, to a certain extent, AI) is dominating this week’s discussions.

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

ChatGPT to debut on multilateral agenda at G7 Summit

Governments’ concerns about how to regulate AI tools like ChatGPT are taking on a multilateral dimension.

Japan, this year’s chair of the G7 (which gathers seven of the world’s largest economies), will include generative AI on the agenda of the G7 Summit in Hiroshima, scheduled to take place 19–21 May. Although AI is not (yet?) on the official list of topics to be discussed at the summit (Russia and China top the list), Japan’s Prime Minister Fumio Kishida confirmed the plan last week.

Japan has taken a keen interest in ChatGPT. Earlier in April, the CEO of OpenAI (the company behind ChatGPT), Sam Altman, made Japan the destination for his maiden overseas visit, where he met Prime Minister Kishida to discuss opening an office in the country. While authorities around the world have been reluctant to use ChatGPT for official business over privacy and security concerns, the Japanese city of Yokosuka became the first city to use ChatGPT in its municipal offices. Media reports announced that financial groups in Japan would also employ ChatGPT for internal use.

#Factbox: In March 2023, the Japanese ranked among the top three populations worldwide visiting the ChatGPT website, according to data from SimilarWeb. The data indicates that Japan has the third-highest share of users, after the USA and India.


That’s not to say Japan is unconcerned about growing privacy and security risks. On the home front, a government taskforce led by the country’s cabinet will tackle the pros and cons of generative AI. At the G7, Prime Minister Kishida said that AI discussions during the Hiroshima Summit would tackle the creation of international rules around the use of AI.

In the lead-up to the summit, the trio of Japanese digital ministers hosting the G7 Digital and Tech Ministers’ Meeting this week will call for accelerated research into generative AI due to concerns over their impact on society. An action plan on AI governance is also in the pipeline. Digital Minister Taro Kono said he wants ‘the G7 to send out a unified message’ on the issue; Communications Minister Takeaki Matsumoto said Japan would like to lead multilateral efforts in advancing and regulating AI.

As a country with growing ambitions in generative AI tools, Japan is planning to show G7 countries and the rest of the world that it can lead the way for multilateral action that tackles ongoing concerns while leveraging AI’s potential. Fellow G7 countries Canada, France, Germany, Italy, the UK, and to a certain extent, the USA – all of whom are investigating OpenAI’s ChatGPT – will probably welcome any initiative that tackles privacy and security concerns with open arms.


Digital policy roundup (17–24 April)
// AI //

European lawmakers urge US President to hold global summit on AI

A group of EU parliamentarians are urging US President Joe Biden and European Commission chief Ursula von der Leyen to come together for a high-level global summit on AI, to set preliminary governing principles for developing, controlling, and deploying powerful AI. 

The statement, which was signed by 11 members of the EU Parliament, also calls on the Trade and Technology Council to facilitate an agenda, and on other countries to get involved in setting rules of the road for ‘very powerful AI’.

Why is this relevant? The lawmakers’ call adds to the growing momentum for international cooperation on AI. The statement calls on Biden and von der Leyen to lead a global effort, which is, practically speaking, in line with the work of the G7 group of countries (See also: Last week’s coverage on the regulatory efforts of China, the EU, and the USA over general purpose AI).

US Homeland Security creating AI task force

Meanwhile, the chief of the US Department of Homeland Security, Alejandro Mayorkas, announced a new task force to ‘lead in the responsible use of AI to secure the homeland’, while also defending ‘against the malicious use of this transformational technology’.


// CYBERSECURITY //

EU announces mega plan to strengthen cybersecurity capabilities 

The European Commission has announced a EUR1.1 billion (USD1.2 billion) plan to strengthen the EU’s capabilities to fend off attacks and support more coordination among member states. The proposed regulation, called the Cyber Solidarity Act, will introduce three things.

The first is a European Cyber Shield, comprised of Security Operations Centres (SOCs) across the EU, whose main task will be detecting cyber threats. The second is a Cyber Emergency Mechanism, to help ensure that EU member states are prepared for and ready to respond to major cyber attacks. The third is a Cybersecurity Incident Review Mechanism, to assess large-scale incidents after they occur. The commission also launched a Cybersecurity Skills Academy to address the ongoing skills shortage in the sector.

What’s next? It will be the usual legislative process: The European Parliament and the EU Council will each start debating the proposed text.



// CHIPS //

EU one step closer to passing Chips Act

The EU’s Council and Parliament have reached a political agreement on the new Chips Act, which aims to double the EU’s share of global chip production to 20% by 2030.

The new framework includes the Chips for Europe Initiative, which is expected to mobilise €43 billion in public and private investments to entice chip makers to build factories in the EU. 

Why is it relevant? The EU is trying to compete with the USA in terms of subsidies. But the EU’s plan falls a little short compared to the US Chips for America Act’s $52 billion.

What’s next? The agreement needs to be endorsed and formally adopted by both institutions.

// COMPETITION //

Google to allow app developers in UK to use alternative payment systems

The UK competition watchdog has announced that Google will allow app developers in the UK to offer different payment systems of their choice. The UK’s Competition and Markets Authority (CMA), which has provisionally agreed with Google’s proposed commitments, is also seeking the industry’s feedback (consultation open till 19 May) to make sure that the company’s commitments ‘are appropriate’ – in other words, whether these promises are enough to appease app developers.

What’s in it for app developers? More options, and therefore, more competition, Google explains. App developers will be able to break away from Google Play’s billing system, which currently accounts for a whopping 90% of native app downloads and takes a cut of up to 30% from every in-app purchase.

What’s in it for Google? At face value, these commitments look like they’re all in favour of app developers. But in practice, Google’s cut will only be reduced to 26-27% – which is still a hefty service fee for app developers who will also be required to pay an additional fee for an alternative service. More than that, if the CMA agrees to these commitments, it will have to drop its investigation into Google’s alleged anti-competitive practices as part of the deal. 
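To put those percentages in perspective, here is a quick, purely illustrative calculation in Python (the GBP 10 purchase price is a made-up figure; the fee rates are the ones cited above):

    def developer_proceeds(price: float, fee_rate: float) -> float:
        """Return what the developer keeps after the store's service fee."""
        return price * (1 - fee_rate)

    PRICE = 10.00  # hypothetical in-app purchase, in GBP

    # Up to 30% when using Google Play's own billing system.
    print(developer_proceeds(PRICE, 0.30))  # 7.00

    # Roughly 26-27% under the proposed commitments, before the developer
    # also pays its alternative billing provider's processing fee.
    print(developer_proceeds(PRICE, 0.27))  # 7.30

In other words, the commitments shave only a few percentage points off Google’s cut, and part of that saving is then absorbed by the alternative provider’s own fees.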

What’s next? For now, the CMA has said that Google’s proposed commitments are sufficient to address the concerns it had at the start of the antitrust investigation. If nothing changes, the CMA will confirm the deal.


// TIKTOK //

Ireland adds itself to list of countries banning TikTok on official devices

One US Congress hearing and several country bans later, TikTok, owned by Chinese company ByteDance, is still regarded as risky for official devices.

This time, the advice is coming from the Irish National Cyber Security Centre, which has recommended that staff in government departments and state agencies refrain from using TikTok on official devices.

Why is this relevant? First, TikTok is under global scrutiny over its data practices. Second, security trumps any other economic interest for Ireland and possibly many other countries. In Irish head of government Leo Varadkar’s words, the company was ‘a big investor in Ireland and employs a lot of people’, but the government has to ‘take the advice of cybersecurity experts’.


// DIGITAL IDs //

India to allow private entities to use Aadhaar

The Indian government is considering opening up Aadhaar, the country’s digital identity system, to private entities for authenticating the identity of users. 

The Ministry of Electronics and IT is proposing a short amendment to the Aadhaar rules, through which any non-government entity can request clearance to link to it. Entities will need to explain why they want to use Aadhaar; the ministry will then assess whether the proposal succeeds in ‘promoting ease of living of residents and enabling better access to services for them’. A public consultation runs till 5 May.


// AUTONOMOUS VEHICLES //

Tesla emerges unscathed in Autopilot car crash trial

The Los Angeles Superior Court has dismissed a USD3 million (EUR2.7 million) claim for damages by a Tesla driver, Justine Hsu, over alleged defects in the car’s Autopilot system. Tesla successfully argued that the partially automated driving software was not a self-piloted system.


The week ahead (24–30 April)

24 April: Data protection is the theme of the next thematic deep dive in preparation for the intergovernmental negotiations on the Global Digital Compact (GDC). These in-depth discussions are organised by Rwanda and Sweden as co-facilitators. Refer to the guiding questions before registering. (Learn more about the GDC process on the Digital Watch Observatory’s dedicated space).

24 April: The UN marks the International Day of Multilateralism and Diplomacy for Peace.

24–27 April: The UN World Data Forum 2023 will look at data and statistics, focusing on how to strengthen the use of data for sustainable development. Expect UN Secretary-General Antonio Guterres to address the forum, and an outcome document charting the progress of discussions. It’s in hybrid format: in situ in Hangzhou, Zhejiang Province, China, and online.

24–27 April: The RSA Conference, hosted annually in San Francisco, USA, discusses issues ‘covering the entire spectrum of cybersecurity’. Expect keynotes from some of the world’s industry leaders. Livestreams will be available.

25–26 April: The European Cyber Agora has cybersecurity and cyber diplomacy at its core. Now that the EU has launched a plan to strengthen its cyber capabilities, there will be lots to talk about. The event is facilitated by Microsoft, the German Marshall Fund of the United States, and EU Cyber Direct, and it’s in hybrid format (Brussels and online).

26 April: The Global Forum on Cyber Expertise (GFCE) is holding its European meeting back-to-back with the Cyber Agora. Also in hybrid format (Brussels and online), the focus is cyber capacity building in Europe and Africa.

26 April: WIPO celebrates World Intellectual Property Day. This year’s theme is Women and IP: Accelerating innovation and creativity.

26–27 April: POLITICO Live’s 6th Europe Tech Summit will talk policy and regulation. Focusing on Europe, this two-day hybrid event will bring top EU executives to discuss cyber threats, emerging tech, standards, and everything in between.

29–30 April: The G7 Digital and Tech Ministers’ Meeting in Takasaki, Gunma, will tackle AI tools such as ChatGPT. Also on the agenda: a framework for the so-called Data Free Flow with Trust (DFFT: a concept aimed at fostering cross-border data flows through harmonised approaches to promote openness and trust in data flows) championed by Japan in 2019. The G7 meeting will be hosted by Japan’s economy minister, internal affairs and communications minister, and minister for digital transformation.


Stephanie Borg Psaila
Director of Digital Policy, DiploFoundation

Digital Watch newsletter – Issue 78 – April 2023

Barometer

The digital policy developments that made headlines in the global press

The digital policy landscape changes daily. Here are the main developments from March; you’ll find more details in each update on the Digital Watch Observatory.

Global digital governance architecture

neutral

The co-facilitators of the Global Digital Compact (GDC) held an in-depth thematic review of digital inclusion and connectivity to prepare for the intergovernmental negotiations on the GDC.


Sustainable development

increasing

UNCTAD’s Technology and Innovation Report 2023 explores the potential benefits of green innovation for developing countries, notably boosting economic growth and strengthening technological capacities.

The European Commission unveiled the Net-Zero Industry Act to boost clean energy technologies in the EU and support the transition to a more sustainable and secure energy system. It also adopted a new proposal to make repairing goods easier and cheaper for consumers. Finally, it presented a new act to strengthen the resilience and security of supply chains for critical raw materials in the EU by reducing dependence on imports from third countries.

The digital alliance between the EU, Latin America, and the Caribbean was established. It focuses on building digital infrastructure, and on promoting connectivity and innovation.


Security

increasing

A series of leaked documents, the ‘Vulkan files’, reveals Russia’s cyberwarfare tactics against adversaries such as Ukraine, the USA, the UK, and New Zealand. Ukraine’s Computer Emergency Response Team (CERT-UA) has recorded a surge in cyberattacks against Ukraine since the start of the year.

A new Europol report sounds the alarm over the potential misuse of large language models (such as ChatGPT, Bard, etc.). International law enforcement agencies seized the dark web’s Genesis Market, popular for selling digital products to cybercriminals. The UK’s National Cyber Force (NCF) unveiled details of its approach to responsible cyber operations.


Infrastructure

neutral

State-owned Chinese telecom companies are investing USD 500 million to build their own submarine fibre-optic internet cable network to rival a similar US-backed project, part of the tech war the two countries are waging.

ICANN, the organisation responsible for managing the internet’s address registry, is preparing to launch a new round of gTLDs.

E-commerce and the internet economy

neutral

A high-level group was created to provide the European Commission with advice and expertise on the implementation and enforcement of the Digital Markets Act (DMA).

Brazil will impose new tax measures to combat unfair competition from Asian e-commerce giants and limit the tax benefits granted to these companies.


Digital rights

increasing

The Organisation of Ibero-American States (OEI) adopted the Ibero-American Charter of Principles and Rights in Digital Environments to guarantee inclusion in information societies through the exercise of fundamental human rights.

A UK watchdog fined TikTok USD 16 million for collecting children’s data without parental consent. A Portuguese NGO sued TikTok for letting children under 13 sign up without parental authorisation and without adequate protection.


Content policy

increasing

Google will no longer block news content in Canada, which it had been doing temporarily in response to draft rules that would require internet platforms to pay Canadian media companies for making news content available. Meanwhile, Meta announced it would end access to news content for Canadian users if the rules are introduced in their current form.

The prime ministers of Moldova, Czechia, Slovakia, Estonia, Latvia, Lithuania, Poland, and Ukraine signed an open letter calling on tech companies to help fight the spread of false information.


Jurisdiction and legal issues

neutral

A US judge ruled that the Internet Archive’s e-book lending programme violates copyright, potentially setting a legal precedent for future online libraries.

China’s State Council Information Office (SCIO) published a white paper summarising the country’s internet laws and regulations.
UK regulators revised their position on Microsoft’s acquisition of Activision Blizzard, having previously feared the deal would harm competition in console gaming.


Emerging technologies

increasing

Italy imposed a (temporary) limitation on the use of ChatGPT, the AI-based chatbot.

UNESCO called on governments to immediately implement its Recommendation on the Ethics of Artificial Intelligence.

French lawmakers adopted a bill to use AI-based surveillance technology to keep the 2024 Paris Olympic Games secure.

Japan announced new restrictions on exports of chip-making equipment to countries that pose security risks.

In brief

Putting up guardrails for data

TikTok has become the target of criticism from several countries over data privacy and national security concerns. The heart of the problem appears to lie in TikTok’s ownership by the Chinese company ByteDance: China’s 2017 National Intelligence Law requires companies to assist the work of state intelligence services, raising fears that user data could be transferred to China. Beyond that, some fear the Chinese government could use the platform for espionage or other malicious purposes. Several countries have taken TikTok to court for exposing children to harmful content and other practices liable to violate their privacy.

TikTok has tried to allay the fears of two global leaders in tech regulation: the USA and the EU. The company has committed to transferring US data to the USA under Project Texas. The security of European data would be ensured by Project Clover, which includes security gateways that will determine data access and data transfers outside Europe, an external audit of data processes by a third-party European security company, and new privacy-enhancing technologies.

Last month, Belgium, Norway, the Netherlands, the UK, France, New Zealand, and Australia issued guidelines banning the installation and use of TikTok on government devices. The ban Japan is considering is broader: lawmakers will propose banning social media platforms if they are used for disinformation campaigns.

The high-profile testimony of TikTok CEO Shou Chew before the US Congress did not win the company much legal favour in the USA: lawmakers remain unconvinced that TikTok is independent of China. It appears the USA will adopt a law (most likely the RESTRICT Act) aimed at banning the app. The battle is likely to be fierce: critics argue that banning TikTok could violate First Amendment rights and would set a dangerous precedent by limiting the right to free expression online. Another option is divestment, whereby ByteDance would sell TikTok’s US operations to a US-owned entity.

What does China think?

In early March, China lashed out at the USA. Chinese foreign ministry spokesperson Mao Ning said: ‘We call on the relevant US institutions and individuals to cast aside their ideological bias and zero-sum Cold War mentality, view China and China-US relations in an objective and rational light, stop smearing China as a threat by citing false information, stop denigrating the Chinese Communist Party, and stop trying to score political points at the expense of China-US relations.’ Mao added: ‘How unsure of itself can the world’s number one superpower be to fear a young people’s favourite app to this extent?’

Mao also criticised the EU over its TikTok restriction, noting that the Union should ‘respect the market economy and fair competition, stop overstretching and abusing the concept of national security, and provide an open, fair, transparent, and non-discriminatory business environment for all companies’. Similar remarks were repeated in mid-March by foreign ministry spokesperson Wang Wenbin.

While reports that the USA is demanding divestment were confirmed by a TikTok representative, Wang also remarked that ‘the USA has yet to prove with evidence that TikTok threatens its national security’ and that it ‘should stop spreading false information on data security’.

China’s Ministry of Commerce has drawn a line in the sand: the Chinese government would oppose any sale or divestment of TikTok, in line with China’s 2020 export rules. These remarks were made on the very day Chew testified before Congress, casting further doubt on TikTok’s independence from the Chinese government.

China has also lodged ‘solemn démarches’ with Australia over the Australian ban on TikTok on government devices.

What’s ahead for TikTok?

Staying optimistic and hoping the app won’t be banned may not be enough. In the USA, TikTok’s fate will likely be decided by the courts. Chances are the other countries mentioned in this article will do the same.

GPT-4: Pushing boundaries, raising concerns

The AI world saw a wealth of exciting developments in March. While the arrival of GPT-4 promises to take natural language processing and image recognition to new heights, the concerns raised by ‘Pause Giant AI Experiments’, an open letter on the ethical implications of large-scale AI experiments, cannot be ignored.

OpenAI announced the development of GPT-4, a large multimodal model that can process both text and images as inputs. The announcement marks an important milestone in the evolution of GPT models, as GPT-3 and GPT-3.5 were limited to processing text. GPT-4’s ability to handle multiple modalities will broaden the possibilities of natural language processing and image recognition, opening new prospects for AI applications. The development is bound to generate much interest and anticipation as the AI community awaits details of GPT-4’s capabilities and its potential impact on the field.

With the ability to process 32,000 tokens of text, compared to GPT-3’s limit of 4,000 tokens, GPT-4 offers greater scope for long-form content creation, document analysis, and extended conversations. (Tokenisation is a way of splitting a piece of text into smaller units called tokens; tokens can be words, characters, or subwords.) GPT-4 can process and generate long passages of text, and it achieved impressive results on a series of academic and professional certification tests, such as the LSAT, the GRE, the SAT, AP exams, and a simulated bar exam.
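To see tokenisation in practice, here is a minimal sketch in Python using OpenAI’s tiktoken library (our assumptions: the library is installed, and cl100k_base, the encoding used by recent OpenAI chat models, is a fair stand-in):

    import tiktoken  # pip install tiktoken

    # Load the encoding used by recent OpenAI chat models.
    enc = tiktoken.get_encoding("cl100k_base")

    text = "Tokenisation splits text into smaller units."
    tokens = enc.encode(text)

    print(len(tokens))                        # the token count, not the word count
    print([enc.decode([t]) for t in tokens])  # the individual token strings

    # A model's context window caps how many tokens it can process at once:
    # roughly 4,000 for GPT-3 versus 32,000 for the largest GPT-4 variant.
    MAX_TOKENS = 32_000
    print(len(tokens) <= MAX_TOKENS)          # True: this text fits comfortably

Note that the token count usually differs from the word count: common words map to single tokens, while rarer words are split into several subword pieces.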

What sparked considerable controversy among the public is that the model’s parameter count and information about its training data were not made public, that the research paper released by the developers offers little detail, and that even the announced features are not yet available. Moreover, access to GPT-4 is limited to those who sign up for the waitlist or subscribe to the premium ChatGPT Plus service.

The buzz was apparently the last straw for many. Shortly afterwards, a group of AI researchers and tech figures, including Elon Musk and Steve Wozniak, signed ‘Pause Giant AI Experiments’, an open letter urging AI labs to hit the brakes. The letter calls for a worldwide pause on training AI systems more powerful than GPT-4. It worries about the risk of AI becoming a ‘threat to the existence of human civilisation’. It stresses that AI could be used to create autonomous weapons and ‘outgrow human control’. The letter goes on to suggest that AI could eventually become so powerful that it could create a superintelligence surpassing human beings.

The signatories are not alone in their fears. Stephen Hawking warned that AI could eventually ‘spell the end of the human race’. Even Bill Gates has said that certain risks exist. However, Gates also argued (unsurprisingly, since OpenAI is backed by Microsoft) that a pause in AI development would not solve the problems, and that such a pause would be difficult to enforce.

The open letter has revived debate within the scientific and tech community on the importance of responsible AI development, notably by addressing concerns over bias, transparency, job displacement, privacy, and the risk of AI weaponisation. Governments and tech companies have an important role to play in regulating AI, including by setting ethical guidelines, investing in safety research, and training those working in the field.

This article is brought to you by Diplo’s AI and Data Lab. The lab closely follows developments in AI, runs experiments such as ‘Can AI beat human intuition?’, and builds applications such as this report.

At Diplo, we are also discussing AI’s impact on our future through a series of webinars. Join us on 2 May to discuss AI ethics and governance from a non-Western perspective.

What’s new in cybersecurity negotiations?

The UN Open-Ended Working Group (OEWG) on cybersecurity held its fourth substantive session. Here are the highlights.

Existing and potential threats. Supply chain risks, the use of AI-powered tools, ransomware, and the spillover effects on European infrastructure of Russian cyberattacks against Ukraine were among the threats mentioned during the session. Kenya proposed creating a UN repository of common threats. The EU proposed formulating a common position on ransomware, and Czechia proposed a more detailed discussion on responsible state behaviour in the development of new technologies.

Rules, norms, and principles. Russia and Syria argued that the existing non-binding rules do not effectively regulate the use of ICTs to prevent inter-state conflict, and proposed drafting a legally binding treaty. Other countries (such as Sri Lanka and Canada) criticised the proposal. Egypt argued that developing new norms does not conflict with the existing normative framework.

International law (IL). Most states reaffirmed the applicability of international law to cyberspace, but some (Cuba, India, Jordan, Nicaragua, Pakistan, Russia, Syria, and others) argued that automatic applicability is premature and backed a proposal for a legally binding treaty. Russia presented an updated concept for a ‘UN Convention on Ensuring International Information Security’, with Belarus and Nicaragua as co-sponsors. Most states do not favour developing a new legally binding instrument.

On international humanitarian law (IHL), the EU and Switzerland affirmed its applicability; Russia and Belarus, however, rejected the automatic application of IHL in cyberspace, citing the lack of consensus on what constitutes an armed attack.

The principles of the UN Charter and states’ compliance with their obligations were also discussed, for the first time as far as we can tell. Most states also supported the Canadian-Swiss proposal to include these topics, the peaceful settlement of disputes, IHL, and state responsibility in the OEWG’s 2023 programme of work.

Confidence-building measures (CBMs). Some delegations called for regional organisations to take part more actively and share their experiences within the OEWG. There was also broad agreement on establishing a points of contact (POC) directory, although states continued to debate who should be named as POCs (agencies or specific individuals), what functions they should have, and so on.

Capacity building. Some countries stressed that the Programme of Action to advance responsible state behaviour will be the main instrument for structuring capacity-building initiatives. Iran suggested that the ITU could serve as a standing coordination forum in this regard; Cuba supported the idea.

States also discussed the content of India’s proposal for a global cybersecurity cooperation portal. Singapore and the Netherlands, however, pointed to existing cooperation portals, such as the cyber portals of UNIDIR and the GFCE.

Regular institutional dialogue. Supporters of the Programme of Action stressed the complementarity of the OEWG and the Programme of Action. Some states raised the possibility of discussing additional cyber norms under the Programme of Action, if necessary, and asked that the OEWG dedicate a session to it. China remarked that the states that backed the resolution on the Programme of Action are undermining the OEWG’s status. Russia, Belarus, and Nicaragua proposed a standing body with review mechanisms as an alternative to the Programme of Action. Some states warned, however, that parallel discussion tracks would require more resources.

Next steps. The chair plans to hold an informal virtual meeting at the end of April for regional POC directories to share their experiences, after which the second revised non-paper on the POC directory is expected. An intersessional meeting on international law and regular institutional dialogue will be held towards the end of May. The zero draft of the annual progress report is expected in early June; states will consider the report at the fifth substantive session, to be held on 24–28 July 2023. Read our detailed report from the session.

Geneva

Policy updates from International Geneva

WSIS Forum 2023 | 13–17 March

The 2023 edition of the World Summit on the Information Society (WSIS) Forum featured more than 250 sessions exploring a wide range of issues related to ICT for development and the implementation of the WSIS action lines agreed in 2003. The forum also included a high-level track that underlined, among other things, the urgency of advancing internet access, availability, and affordability as drivers of digitalisation, and the importance of fostering trust in digital technologies. The event was hosted by the ITU and co-organised with UNESCO, UNCTAD, and the UN Development Programme (UNDP). Further outcomes of the forum will be published by the ITU on the dedicated page.

On the forum’s final day, Diplo and the Geneva Internet Platform (GIP), together with the permanent missions of Djibouti, Kenya, and Namibia, held a session on strengthening African voices in global digital processes. The session underlined the need for reinforced cooperation, within and beyond Africa, to implement the continent’s digital transformation strategies and to ensure that African interests are properly represented and taken into account in international digital governance processes. Strengthening and developing individual and institutional capacities, coordinating common positions on issues of mutual interest, drawing on the expertise of actors across stakeholder groups, and ensuring effective and efficient communication between missions and capitals were among the measures suggested to guarantee that African voices are fully and meaningfully represented on the international stage. Read the session takeaways.

First session of the GGE on LAWS (Lethal Autonomous Weapons Systems) | 6–10 March

The Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons systems (LAWS) held its first session of the year in March. During the five-day meeting, the group focused on the following aspects of emerging technologies in the area of LAWS: the characterisation of LAWS (definitions and scope); the application of IHL (possible prohibitions and regulations); human-machine interaction, meaningful human control, human judgement, and ethical considerations; responsibility and accountability; legal reviews; and risk mitigation and confidence-building measures.

26th session of the Commission on Science and Technology for Development (CSTD) | 27–31 March

The 26th session of the CSTD addressed (a) technology and innovation for cleaner, more productive, and more competitive production, and (b) ensuring safe water and sanitation for all: a solution through science, technology, and innovation.

At the opening ceremony, UNCTAD Secretary-General Rebeca Grynspan delivered a statement stressing that humanity is at a decisive turning point, caught between global challenges and technological possibilities. She underlined the worrying decline in overall human progress over the past two years, which puts our goals for a sustainable future at risk. Solving these major economic, social, and environmental problems requires coordinated global action.

The session was also an occasion to present the Technology and Innovation Report 2023, which identifies crucial opportunities and green-innovation solutions for developing countries to harness innovation for sustainable growth.

Upcoming

What to watch:
global digital policy events in April

11–21 April, Ad Hoc Committee on Cybercrime (Vienna, Austria)

The Partner2Connect (P2C) Digital Coalition is a multistakeholder alliance to mobilise resources, partnerships, and commitments to achieve universal, meaningful connectivity. Created in 2021 by the ITU, in cooperation with the UN Secretary-General’s Roadmap for Digital Cooperation and the Envoy on Technology, the coalition reached important milestones in 2022. Its annual meeting, to be held at ITU headquarters in Geneva, will review its successes and challenges to date, as well as projects to connect the unconnected worldwide.

13 April, The GDC in depth: internet governance (online)

The co-facilitators of the Global Digital Compact (GDC) are organising a series of thematic deep dives to prepare for the intergovernmental negotiations on the GDC. The 13 April discussion will focus on internet governance. As these detailed discussions unfold, the GIP will examine how the main themes have been addressed in key policy documents. Visit our dedicated page on the Digital Watch Observatory to learn more about how internet governance issues have been treated in these documents.

24–27 April, UN World Data Forum (Hangzhou, China)

The UN World Data Forum advances data innovation, fosters cooperation, generates political and financial support for data initiatives, and drives progress towards better data for sustainable development. The forum focuses on the following thematic areas: innovation and partnerships for better and more inclusive data; maximising the use and value of data for better decision-making; building trust and ethics in data; and emerging trends and partnerships to develop the data ecosystem.

24–27 April, RSA Conference (San Francisco, USA)

RSA Conference 2023 will run under the theme ‘Stronger Together’ and will feature seminars, workshops, training sessions, an expo, keynotes, and interactive activities.

29–30 April, G7 Digital and Tech Ministers’ Meeting 2023 (Takasaki, Japan)

The G7 Digital and Tech Ministers’ Meeting will address various issues related to digitalisation, including emerging concerns and shifts in the global digital business environment. Ministers will discuss a framework to operationalise Data Free Flow with Trust (DFFT), in cooperation with G7 and other countries, while respecting domestic regulations, improving transparency, ensuring interoperability, and promoting public-private partnerships. Operationalising DFFT is expected to help SMEs and other actors use data from around the world safely, enabling them to develop cross-border activities.

DW Weekly #107 – 17 April 2023

Dear readers,

As authorities grapple with ChatGPT and similar AI tools, the first regulatory initiatives are now in sight. OpenAI, the company behind ChatGPT, is in for a troubled period as new investigations pile up.

Meanwhile, tech companies are under pressure over many other issues – from hosting content, which is deemed a national security concern, to new fines and probes for (alleged) anticompetitive practices. 

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

Governments vs ChatGPT: A whirlwind of regulations in sight

ChatGPT, the AI-powered tool that allows you to chat and get answers to almost any question (we’re not guaranteeing it’s the right answer), has taken the world by storm. 

Things are progressing fast. In the space of just two days, we learned that Google is creating a new AI-based search engine (unrelated to the chatbot Bard, which it launched last month), that Elon Musk has created a new company, X.AI, probably linked to his effort to build an everything app called X, and that China’s e-commerce giant Alibaba has launched its own ChatGPT-style AI model.

Now, governments around the world are starting to take notice of the potential of these tools. They are launching investigations into ChatGPT (in April, we covered Italy’s temporary ban, and the investigations that privacy regulators in France, Ireland, Switzerland, Germany, and Canada are considering) and are now ramping up efforts to introduce new rules. 

The new developments from last week are coming from the three main AI hotspots: the EU, China, and the USA.

1. The latest from Europe

Known for its tough rules on data protection and digital services/markets, the EU is inching closer to seeing its AI Act – proposed by the European Commission two years ago – materialise. While the Council of the EU has adopted its position, the draft is currently being debated by the European Parliament (it will then need to be negotiated between the three EU institutions in the so-called trilogues). Progress is slow, but sure.

As policymakers debate the text, a group of experts argue that general-purpose AI systems carry serious risks and must not be exempt under the new EU legislation. Under the proposed rules, certain accountability requirements apply only to high-risk systems. The experts argue that software such as ChatGPT needs to be assessed for its potential to cause harm and must also have commensurate safety measures in place. The rules must also look at the entire life cycle of a product.

What does this mean? If the rules are updated to cover, for instance, the development phase of a product, regulators wouldn’t just check after the fact whether an AI model was trained on copyrighted material or on private data. Rather, a product would be audited before its launch. This is quite similar to what China is proposing (see below) and what the USA will be looking into soon (details further down).

The draft rules on general-purpose AI are still up for debate at the European Parliament, so things might still change. 

Meanwhile, prompted by Italy’s ban and Spain’s request to look into privacy concerns surrounding ChatGPT, the EU’s data protection watchdog has launched a task force to coordinate the work of European data protection authorities.

There’s little information about the European Data Protection Board’s (EDPB) new task force other than a decision to tackle ChatGPT-related action during the EDPB’s next plenary (scheduled for 26 April).  

2. The latest from China

China has also taken a no-nonsense approach to regulating tech companies in recent years. The Cyberspace Administration of China (CAC) has wasted no time in proposing new measures for regulating generative AI services, which are open for public comments until 10 May. 

The rules. Providers need to ensure that content reflects the country’s core values, and shouldn’t include anything that might disrupt the economic and social order. No discrimination, false information, or intellectual property infringements are allowed. Tools must undergo a security assessment before being launched.

Who they apply to. The onus of responsibility falls on organisations and individuals that use these tools to generate text, images, and sounds for public consumption. They are also responsible for making sure that pre-trained data is lawfully sourced.

The industry is also calling for prudence. The Payment & Clearing Association of China has advised its industry members to avoid uploading confidential information to ChatGPT and similar AI tools, over risks of cross-border data leaks.

3. The latest from the USA

Well-known for its laissez-faire approach to regulating technological innovation, the USA is taking (baby) steps towards new AI rules.

The Biden administration is studying potential accountability measures for AI systems, such as ChatGPT. In its request for public feedback (which runs until 10 June), the National Telecommunications and Information Administration (NTIA) of the Department of Commerce is looking into new policies for AI audits and assessments that tackle bias, discrimination, data protection, privacy, and transparency. 

What this exercise covers. Everything and anything that falls under the definition of ‘AI system’ and ‘automated systems’, including technology that can ‘generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments’. 

What’s next? There’s already growing interest in AI accountability mechanisms, the NTIA writes, so this exercise will help it advise the White House on how to develop an ecosystem of accountability rules.

Separately, there are also indications that Senate Democrats are working on new legislation spearheaded by Majority Leader Chuck Schumer. A draft is in circulation, but don’t expect anything tangible soon unless the initiative secures bipartisan support.

And in a bid to fend off intellectual property infringements, music company Universal Music Group has asked streaming platforms, including Spotify and Apple, to block AI services from scraping melodies and lyrics from copyrighted songs, according to the Financial Times. The company fears that AI systems are being trained on its artists’ intellectual property. IPR lawsuits are looming.


Digital policy roundup (10–17 April)
// OPENAI //

Italy tells OpenAI: Comply or face ban

The Italian Data Protection Authority (GPDP), which was the first to open an investigation into OpenAI’s ChatGPT, has given the company a list of demands it must comply with by 30 April before the authority may lift its temporary ban.

The Italian authority wants OpenAI to let people know how personal data will be used to train the tool, and to rely on users’ consent (or on legitimate interest) as the legal basis for processing their personal data.

But a more challenging request is for the company to implement an age-gating system for underage users and to introduce measures for identifying accounts used by children (the latter must be in place by 30 September).

Why is this relevant? The age-verification request coincides with efforts by the EU to improve how platforms verify their users’ age. The new eID proposal, for instance, will introduce a much-needed framework of certification and interoperability for age-verification measures. The way OpenAI tackles this issue will be a testbed for new measures. 
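To illustrate why regulators consider today’s self-declared age gates weak, here is a minimal sketch in Python of the kind of check most services currently rely on (the age threshold is an assumption for illustration only, not taken from the Garante’s order):

    from datetime import date
    from typing import Optional

    MINIMUM_AGE = 13  # illustrative threshold, not prescribed by the Garante

    def is_old_enough(dob: date, today: Optional[date] = None) -> bool:
        """Naive age gate based on a self-declared date of birth."""
        today = today or date.today()
        age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
        return age >= MINIMUM_AGE

    # Anyone can simply type a false date of birth, which is why the EU's
    # eID certification work aims at verifiable age checks instead.
    print(is_old_enough(date(2015, 6, 1)))  # False
    print(is_old_enough(date(2001, 6, 1)))  # True

A check like this is trivial to bypass, which is precisely the gap that certified, interoperable age-verification measures are meant to close.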

More European countries launch probes into ChatGPT

France’s data protection regulator (CNIL) has opened a formal investigation into ChatGPT after receiving five complaints, including from member of parliament Eric Bothorel, lawyer Zoé Vilain, and developer David Libeau.

Germany’s data protection conference (DSK), the body of independent German data protection supervisory authorities of the federal and state governments, has opened an investigation into ChatGPT. The announcement was made by the North Rhine-Westphalia watchdog (the DSK itself has been mum about it).

Spain’s data protection agency (AEPD) announced an independent investigation in parallel to the work being carried out by the EDPB.


// CYBERSECURITY //

Classified Pentagon documents leaked on social media

The Pentagon is investigating the leak of over 50 classified documents that turned up on the social media platform Discord. Jack Teixeira, a 21-year-old US air national guardsman, suspected of leaking the documents, was charged in Boston, USA, on Friday under the Espionage Act.

Days after the Pentagon announced its investigation, the leaked documents could still be accessed on Twitter and other platforms, prompting a debate on the responsibility of social media companies in cases involving national security.

Russia accuses Pentagon, NATO of masterminding Ukraine attacks against Russia

The press office of Russia’s Federal Security Service (FSB) has accused the Pentagon and NATO countries of being behind massive cyberattacks from Ukraine against Russia’s critical infrastructure.

The FSB claims that over 5,000 hacker attacks on Russian critical infrastructure have been recorded since the beginning of 2022 and that cyberattack units of Western countries are using Ukrainian territory to carry out these attacks.

USA-Russia cyber impasse on hold?

Meanwhile, Russia’s official news agency TASS reported that the USA has maintained contact with Russia on cybersecurity issues. 

US Department of State’s Ambassador-at-Large for Cyberspace and Digital Policy Nathaniel Fick told TASS that channels of communication remain open. ‘Yes, I’m across the table from Russian counterparts with some frequency, and with Chinese as well,’ he said.


// ANTITRUST //

South Korea fines Google for abusing global market dominance

Google is in trouble in South Korea after the country’s Fair Trade Commission (FTC) fined the company USD 31.9 million (EUR 29.2 million) for unfair business practices.

The FTC found that Google entered into agreements with Korean mobile game companies between June 2016 and April 2018, which banned them from releasing their content on One Store, a local marketplace that rivals Google Play.

Indian start-ups seek court order to block Google’s in-app billing system

Google could also be in trouble in India after a group of start-ups, led by the Alliance of Digital India Foundation (ADIF), asked an Indian court to suspend the company’s new in-app billing fee system until the antitrust authority investigates Google’s alleged failure to comply with an October 2022 order, which required Google to allow third-party billing services for in-app payments.

The Alliance of Digital India Foundation (ADIF) comments on the South Korean decision to fine Google. Source: @adif_India

// DATA FLOWS //

MEPs vote against proposal to greenlight westward data transfers

The European Parliament has rejected the proposal to allow transfers of EU citizens’ personal data to the USA under the new EU-US Data Privacy Framework.

Parliamentarians expressed concerns about the adequacy of US data protection laws and called for enhanced safeguards to protect the personal data of European citizens. The proposed framework does not provide sufficient safeguards, according to the members of parliament.

While the European Parliament’s position is not legally binding, it adds pressure on the European Commission to reconsider its approach to data transfers with the US and prioritise more robust data protection measures.


// METAVERSE //

Meta urged to keep kids out of the metaverse

Dozens of advocacy organisations and children’s safety experts are calling on Meta to halt its plans to allow kids into its virtual reality world, Horizon Worlds. In a letter addressed to Meta CEO Mark Zuckerberg, the groups and experts expressed concerns about the potential risks of harassment and privacy violations for young users in the metaverse app. 

Given Meta’s track record of addressing damaging design only after harm has occurred, the experts are asking Meta not to allow kids into the metaverse until it can ensure their safety and privacy with robust measures in place.

Meta says metaverse can transform education

Was it a coincidence that, two days before, Meta’s Global Affairs Chief Nick Clegg penned an article lauding the metaverse’s potential for education?

In any case, Clegg explains how the metaverse can enable access to educational resources and opportunities for learners across geographical and economic barriers and how virtual learning classrooms, simulations, and collaborative environments can enhance learning outcomes.

Clegg also acknowledges a need for responsible and inclusive design of metaverse educational experiences, with a focus on privacy, safety, and accessibility. 


The week ahead (17–23 April)

16–19 April: The American Registry for Internet Numbers (ARIN) 51st Public Policy and Members Meeting in Florida is discussing internet number resources, regional policy development, and the overall advancement of the internet.

21–23 April: The closing session of the European Commission citizens’ panel on the metaverse and other virtual worlds will ask participants to turn their ideas into concrete recommendations. They’ll be asked to suggest policy measures to help shape the evolution of virtual worlds.


#ReadingCorner

Tech diplomacy in the Bay Area

In 2018, Diplo’s techplomacy mapping exercise explored how different diplomatic representations interact with the San Francisco Bay Area ecosystem. Since then, a lot has changed, which prompted Diplo to update its research. The 2023 report, ‘Tech Diplomacy Practice in the San Francisco Bay Area’, launched last week, makes some important observations.

Tech diplomacy has matured, moving from informal engagements to more structured, formal ones. Government representations in the San Francisco Bay Area, and the structures within tech companies that act as their counterparts in the conversation, have become more diverse and more complex, making it harder for the two sides to reach one another. San Francisco is also seeing more and more collaborations between international diplomatic representations and tech companies to achieve common goals. Read the full text.


Stephanie Borg Psaila
Director of Digital Policy, DiploFoundation
