
DW Weekly #113 – 29 May 2023


Dear all,

OpenAI CEO Sam Altman was in the news again, not only because of the European tour he’s embarked on, but over things he said and wrote last week. In other news, Microsoft president Brad Smith joined in the private sector’s call for regulating AI. Meta was hit with a historic fine over data mishandling, while the Five Eyes have attributed a recent spate of cyberattacks to China.

Let’s get started.
Stephanie and the Digital Watch team


// HIGHLIGHT //

OpenAI’s Sam Altman says forget about existing AI, it’s future AI that should worry us

There were two reasons why OpenAI CEO Sam Altman made headlines last week. The first concerns a threat he made to European lawmakers (which he then took back) about regulating AI. That’s about regulating existing AI.

The second is his warning about the existential risks that AI could pose to humanity. That’s about regulating future AI. Let’s start with this one.

Regulating future AI… now   

Doomsday theories abound these days. We just don’t know if we’ll see AI take over the world in our lifetime, or that of our children – or if it will ever even come to that at all. 

Sam Altman, the man behind OpenAI’s ChatGPT, which took the world by storm in the space of a few weeks, is probably one of the few people who could predict what AI might be capable of in ten years’ time within an acceptable margin of error. (That’s also why he’s in our Highlight section again this week.)

In case he felt he wasn’t vocal enough during the recent hearing before the US Senate Judiciary Committee, he’s now written about it again: ‘Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains… superintelligence will be more powerful than other technologies humanity has had to contend with in the past.’ That would give us ten years before it could all go awry. Considering the time it takes for an EU regulation to see the light of day, ten years is not a long time.

So how should we regulate future AI? Altman sees a three-pronged approach to what he calls superintelligence. The first is a government-backed project where companies agree to safety guardrails based on the rate of growth in AI capability (however this will be measured). This reminds us of what economist Samuel Hammond wrote recently on the need for a Manhattan Project for AI Safety.

The second is to form an international authority, similar to the International Atomic Energy Agency, with authority over AI above a certain capability, to ‘inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc’. 

The third is more research on safety and alignment issues, but we won’t go into this for now.

What is interesting here is the emphasis on regulation based on capabilities. It’s along the same lines as what he argued before US lawmakers a week earlier: in his view, the stronger or more powerful the algorithm or resource, the stricter the rules should be. By comparison, the EU’s upcoming AI Act takes a risk-based approach: the higher the risk, the stricter the rules.

By this reasoning, models that fall below Altman’s proposed capability threshold would not be included under this (or any?) regulation. Why? He thinks that (a) today’s models are not as risky as future versions will be, and (b) companies and open-source projects shouldn’t have to face burdensome mechanisms like licences or audits. ‘By contrast’, he writes, ‘the systems we are concerned about will have power beyond any technology yet created, and we should be careful not to water down the focus on them by applying similar standards to technology far below this bar.’

(Over-) regulating existing AI 

He does have a point. If more powerful models are misused, their power to cause harm is significantly higher. The rules that would apply to riskier models should therefore be more onerous. And he’s probably right that a moratorium wouldn’t stop the advancements from continuing in secret.

But there are also major flaws with Altman’s logic. First, it’s not an either/or scenario, as he suggests. Shifting the focus to tomorrow’s AI, just because it will be more powerful, won’t make today’s issues go away. Today’s issues still need to be tackled, and soon.

This logic explains why he felt compelled to criticise the EU’s upcoming AI Act as a case of over-regulation. Licences and regulations, to him, are an unnecessary burden on companies whose systems carry more or less the same risks as other internet technologies (and whose risks, he probably thinks, are insignificant compared to those that more powerful AI systems will pose in the next ten years).

Second, existing models are the basis for more powerful ones (unless he knows something that we don’t). Hence, the project and authority that Altman envisions should start addressing the issues we see today, based on the capabilities we have today. Guardrails need to be in place today. 

And yet, it’s not Altman’s criticism that angered European Commissioner Thierry Breton, but rather his threat to pull OpenAI out of Europe over the proposed rules. If there’s one thing such a threat could trigger, it’s the immediate implementation of guardrails.

Tweet from Thierry Breton, 25 May: ‘There is no point in attempting blackmail -- claiming that by crafting a clear framework, Europe is holding up the rollout of generative #AI. To the contrary! With the “AI Pact” I proposed, we aim to assist companies in their preparations for the EU AI Act.’ The tweet is accompanied by a reaction image captioned ‘Is that a threat?’

Digital policy roundup (22–29 May)
// AI GOVERNANCE //

Microsoft proposes five-point blueprint for AI regulation

Microsoft has published a blueprint for governing AI, which includes placing tight rules (or safety brakes) on high-risk AI systems deployed to control critical infrastructure, and creating new rules to govern highly capable AI foundation models. In his foreword to the blueprint, Microsoft president Brad Smith also called for a new government agency to implement these new rules.

A five-point blueprint for governing AI

  1. Implement and build upon new government-led AI safety frameworks
  2. Require effective safety brakes for AI systems that control critical infrastructure
  3. Develop a broader legal and regulatory framework based on the technology architecture for AI
  4. Promote transparency and ensure academic and public access to AI
  5. Pursue new public-private partnerships to use AI as an effective tool to address the inevitable societal challenges that come with new technology

Source: Microsoft

Why is it relevant? First, some of the proposals in Microsoft’s blueprint are similar to those of OpenAI CEO Sam Altman. For instance:

  • Microsoft proposes ways of governing highly capable AI foundation models – more or less what Altman describes as superintelligent systems. These powerful new AI models are at the frontier of research and development and are developed in advanced data centres using internet-scale datasets.
  • Like Altman, Smith is not thinking about ‘the rich ecosystem of AI models that exists today’, but rather the small class of cutting-edge AI models that are redefining the frontier.
  • And, again just like Altman, Smith believes in a framework consisting of rules, licensing requirements, and testing.

Second, Microsoft’s blueprint goes a step further (and is closer to the EU’s risk-based approach) in calling for safety brakes on AI systems used within critical infrastructure sectors. Not all AI systems used in these sectors are high-risk, but those that manage or control infrastructure systems for electricity grids or water systems, for instance, require tighter controls.


EU, Google to develop voluntary AI pact ahead of new AI rules

Thierry Breton, the commissioner in charge of the EU’s digital affairs, and Google chief executive Sundar Pichai agreed last week to work on a voluntary AI pact ahead of new regulations. The agreement will help companies develop and implement responsible AI practices. 

Why is it relevant? First, Breton said companies can’t afford to wait until the AI regulation is in place to start complying with the rules. Second, the commissioner used his meeting with Pichai to call out other companies that pick and choose which regulations they’ll implement. We’re assuming he’s referring to OpenAI and Twitter.


// DATA PROTECTION //

Meta’s record fine puts pressure on EU, USA to conclude data transfer framework  

Meta has been fined a record-breaking EUR1.2 billion (USD1.29 billion) and given six months to stop transferring the data of European Facebook users from the EU to the USA.

The fine was imposed by the Irish Data Protection Commission (DPC) after the company continued to transfer data despite the EU Court of Justice’s 2020 ruling invalidating the EU-USA Privacy Shield framework. The data protection regulator concluded that the legal basis Meta used to continue transferring data did not afford European citizens adequate protection of their rights.

Why is it relevant? The company will appeal, so there’s still a long way to go before the fine is confirmed. But the pressure’s on for EU and US officials negotiating the new data protection framework. The new Trans-Atlantic Data Privacy Framework, announced in March 2022, has not yet been finalised.




// CYBERCRIME //

Five Eyes attribute cyberattacks to China

The intelligence agencies of the USA, Australia, Canada, New Zealand, and the UK – known as the Five Eyes – have attributed recent cyberattacks on US critical infrastructure to the Chinese state-sponsored hacking group Volt Typhoon.

Responding to the joint cybersecurity advisory issued by the intelligence agencies, China’s foreign ministry spokesperson Mao Ning dismissed the advisory as disinformation. ‘No matter how the tactics change, it does not change the fact that the US is the empire of hacking,’ she said.


// GDC //

Public institutions ‘ill-equipped to assess and respond to digital challenges’ – UN Secretary-General

Most governments do not have sufficient skills to respond to digital challenges, a result of decades of underinvestment in state capacities. The UN Secretary-General’s latest policy brief says that government capacities should therefore be a priority for cooperation on digital issues. 

In case you’re not following the process already: The Global Digital Compact is an initiative of the Secretary-General for promoting international digital cooperation. UN member states are expected to agree on the principles forming part of the Global Digital Compact during next year’s Summit of the Future. 

If you’ve already read the brief: We wouldn’t blame you for thinking that the brief proposes quite a few mechanisms at a time when there are already hundreds of them in place. After all, the Secretary-General’s initiative followed a report which recommended that we ‘make existing intergovernmental forums and mechanisms fit for the digital age rather than rush to create new mechanisms’.

If you haven’t done so already: Consider contributing to the informal consultations. The next two deep dives are in two weeks’ time.


The week ahead (29 May–4 June)

30 May: The G7 AI working group holds its first meeting, effectively kickstarting the Hiroshima AI process.

30–31 May: The EU-US Trade and Technology Council (TTC) meets in Sweden. No, they won’t tackle data flows. Yes, they will tackle a host of other issues – from AI to green tech.

31 May–2 June: The Council of Europe’s Committee on AI will hold its 6th meeting in Strasbourg, under the chairmanship of Ambassador Thomas Schneider, who was re-elected during the 5th meeting.

31 May–2 June: The 15th International Conference on Cyber Conflict (CyCon), organised by the NATO Cooperative Cyber Defence Centre of Excellence, takes place in Tallinn, Estonia.

31 May: Join us online or in Geneva for our conference on Building Trust in Digital Identities. As more governments around the world are exploring and implementing digital identity (or e-ID) solutions, we look at safety, security, and interoperability issues.

For more events, bookmark the DW observatory’s calendar of global policy events.


#ReadingCorner

How do we avoid becoming knowledge slaves? By developing bottom-up AI

‘If you’ve ever used these [ChatGPT or similar] tools, you might have realised that you’re revealing your thoughts (and possibly emotions) through your questions and interactions with the AI platforms,’ writes Diplo’s Executive Director Dr Jovan Kurbalija. ‘You can therefore imagine the huge amount of data these AI tools are gathering and the patterns that they’re able to extract from the way we think.’ 

The consequence is that we ‘risk becoming victims of “knowledge slavery” where corporate and/or government AI monopolies control our access to our knowledge.’ There’s a solution: developing bottom-up AI. Read the full article.

Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation
