DW Weekly #134 – 30 October 2023


Dear readers,

The stage is set for some major AI-related developments this week. Biden’s executive order on AI, and the G7’s guiding principles and code of conduct, are out. On Wednesday and Thursday, the UK will host the much-anticipated AI Safety Summit, where political leaders and CEOs will focus squarely on AI risks. In other news, the landscape for children’s online safety is changing, while antitrust lawsuits and investigations show no signs of easing up.

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

Biden issues AI executive order; G7 adopts AI principles and code of conduct

You can tell how much AI is on governments’ minds by how many developments take place in a week – or in this case, one day.

Today’s double bill – Biden’s new executive order on AI, and the G7’s guiding principles on AI and code of conduct for developers – was highly anticipated. The White House first announced plans for the executive order in July; more recently, Biden mentioned it again during a tech advisors’ meeting. As for the G7, Japan’s Prime Minister Fumio Kishida has been providing regular updates on the Hiroshima AI Process for months.

Executive order targets federal agencies’ deployment of AI

Biden’s executive order represents the government’s most substantial effort thus far to regulate AI, providing actionable directives where it can, and calling for bipartisan legislation where needed (such as data privacy). There are three things that stand out:

AI safety and security. The order places heavy emphasis on safety and security by requiring, for instance, that developers of the most powerful AI systems share their safety test results and other critical information with the US government. It also requires that AI systems used in critical infrastructure sectors be subjected to rigorous safety standards.

Sectoral approach. Apart from certain aspects that apply to all federal agencies, the order employs a somewhat sectoral approach to federal agencies’ use of AI (in contrast with other emerging laws such as the EU’s AI Act). For instance, the order directs the US Department of Health and Human Services to advance the responsible use of AI in healthcare, the Department of Commerce to develop guidelines for content authentication and watermarking to clearly label AI-generated content, and the Department of Justice to address algorithmic discrimination. 

Skills and research. The order directs authorities to make it easier for highly skilled workers to study and work in the country, an attempt to boost the USA’s technological edge. It will also heavily promote AI research through funding, access to AI resources and data, and new research structures.

G7’s principles place risk-based responsibility on developers

The G7 has adopted two texts: The first is a list of 11 guiding principles for advanced AI. The second – a code of conduct for organisations developing advanced AI – repeats the principles but expands on some of them with details on how to implement them. Our three main highlights:

Risk-based. One notable similarity with the EU’s AI Act is the risk-based element, which places responsibility on developers of AI to adequately assess and manage the risks associated with their systems. The EU promptly welcomed the texts, saying they will ‘complement, at an international level, the legally binding rules that the EU co-legislators are currently finalising under the EU AI Act’.

A step further. The texts build on the existing OECD AI Principles, but in some instances they go a few steps further. For instance, they encourage developers to deploy reliable content authentication and provenance mechanisms, where technically feasible – such as watermarking or other techniques – to enable users to identify AI-generated content.

(Much) softer approach. Differing viewpoints on AI regulation exist among the G7 countries, ranging from strict enforcement to more innovation-friendly guidelines. The documents allow jurisdictions to adopt the code in ways that align with their individual approaches. But despite this flexibility, a few provisions are overly vague. Take the provision on privacy and copyright, for instance: ‘Organisations are encouraged to implement appropriate safeguards, to respect rights related to privacy and intellectual property, including copyright-protected content.’ That’s probably not specific enough to provoke change.

Amid mounting concerns about the risks associated with AI, today’s double bill raises the question: Will these developments succeed in changing the security landscape for AI? Biden’s executive order is the stronger of the two: Although it lacks enforcement teeth, it carries the constitutional weight to direct federal agencies. The G7 texts, by contrast, must bridge perspectives that vary so greatly across jurisdictions that their influence will be limited. And yet, today’s developments are only the start of this week’s agenda.


Digital policy roundup (23–30 October)

// MIDDLE EAST //

Musk’s Starlink to provide internet access to Gaza for humanitarian purposes

Elon Musk confirmed on Saturday that SpaceX’s Starlink will provide internet connectivity to ‘internationally recognised aid organisations’ in Gaza. This prompted Israel’s communications minister, Shlomo Karhi, to voice strong opposition, citing the risk of Starlink’s exploitation by Hamas.

Responding to Karhi’s tweet, Musk replied: ‘We are not so naive. Per my post, no Starlink terminal has attempted to connect from Gaza. If one does, we will take extraordinary measures to confirm that it is used *only* for purely humanitarian reasons. Moreover, we will do a security check with both the US and Israeli governments before turning on even a single terminal.’

A telephone and internet blackout isolated people in the Gaza Strip on Saturday, which added to Israel’s weeks-long suspension of electricity and fuel to Gaza.  

Why is it relevant? First, it shows how internet connectivity is increasingly being weaponised during conflicts. Second, the world half-expected Starlink to intervene, given the role it played during the Ukraine conflict and in countries affected by natural disasters. But its (public) promise to get a go-ahead from both governments could expose the company to new dimensions of responsibility and risk, and could prove counterproductive for the aid organisations that so desperately need access to coordinate their relief efforts.

Screenshot of exchange on X

// KIDS ONLINE //

Meta sued by 33 US states over children’s mental health

Meta, Instagram and Facebook’s parent company, is facing a new legal battle from 33 US states, which allege that the company engaged in deceptive practices and contributed to a mental health crisis among young users of its social media platforms.

The lawsuit claims that Meta intentionally and knowingly used addictive features while concealing the potential risks of social media use, violating consumer protection laws, and breaching privacy regulations concerning children under 13. 

Why is it relevant? The concerns raised in this lawsuit have been simmering for quite some time. Two years ago, Meta’s former employee Frances Haugen catapulted them into the public consciousness after leaking thousands of internal documents to the press and testifying to the US Senate about the company’s practices. Since then, the issue has even shown up on US President Joe Biden’s radar: Earlier this year, Biden called for tighter regulation ‘to stop Big Tech from collecting personal data on kids and teenagers online’.

Case details: People of the State of California v. Meta Platforms, Inc. et al., District Court, Northern District of California, 4:23-cv-05448


UK implements Online Safety Act, imposing child safety obligations on companies

The UK’s Online Safety Act, which imposes new responsibilities on social media companies, came into effect last week after the law received royal assent. 

Among other obligations, social media platforms will be required to swiftly remove illegal content, ensure that harmful content (such as adult pornography) is inaccessible to children, enforce age limits and verification measures, provide transparent information about risks to children, and offer easily accessible reporting options for users facing online difficulties. As is to be expected, there are harsh fines – up to GBP 18 million (USD 21.8 million) or 10% of global annual revenues – in store for non-compliance.

Why is it relevant? For many years, the UK relied on companies’ self-regulation to keep children safe from harmful content. The industry’s initially well-intentioned efforts gradually gave way to choices that prioritised financial interests – the self-regulation experiment is now over, as one child safety expert put it.




A robotic arm with an articulated hand hovers over a keyboard as though ready to type.

// CYBERWARFARE //

US official: North Korea and other states using AI in cyberwarfare

US Deputy National Security Advisor Anne Neuberger has confirmed that North Korea is using AI to escalate its cyber capabilities. In a recent press briefing (held on the sidelines of Singapore International Cyber Week), Neuberger explained: ‘We have observed some North Korean and other nation-state and criminal actors try to use AI models to help accelerate writing malicious software and finding systems to exploit.’ Although experts have often spoken about the risks of AI in cyberwarfare, it’s the first time there’s been an open acknowledgement of its use in offensive cyberattacks. There will be lots to talk about in London this week.


// ANTITRUST //

Google paid billions of dollars to be default search engine

Alphabet’s Google paid USD 26.3 billion (EUR 24.8 billion) to other companies in 2021 to ensure its search engine was the default on web browsers and mobile phones. This was revealed by a company executive testifying during the US Department of Justice’s (DOJ) antitrust trial and in a court record, which the presiding judge refused to redact.

The case, filed in 2020, concerns Google’s search business and practices that the DOJ and state attorneys-general consider ‘anticompetitive and exclusionary’, sustaining Google’s monopoly in online search and search advertising.

Why is it relevant? First, the original complaint had already indicated that ‘Google pays billions of dollars each year to distributors… to secure default status for its general search engine’. The exact figures have now been made known. Second, this will make it even more difficult for Google to argue against the implications of its exclusionary agreements with other companies.

Case details: USA v. Google LLC, District Court, District of Columbia, 1:20-cv-03010


Japan’s competition authority investigating Google’s practices

The Japan Fair Trade Commission (JFTC) is seeking information on Google’s suspected anti-competitive behaviour in the Japanese market, as part of an investigation still in its early stages.

The commission will determine whether Google excluded or restricted the activities of its competitors by entering into exclusionary agreements with other companies.

Why is this relevant? If this all sounds familiar, that’s because the Japanese case is very similar to the US DOJ’s ongoing case against Google.


The week ahead (30 October–6 November)

1–2 November: The UK will host its much-anticipated AI Safety Summit at the historic Bletchley Park in Milton Keynes. British Prime Minister Rishi Sunak will welcome CEOs of leading companies and political leaders, including US Vice President Kamala Harris, European Commission President Ursula von der Leyen, and UN Secretary-General Antonio Guterres. In addition to discussing AI capabilities, risks, and cross-cutting challenges, the UK government is expected to announce an AI Safety Institute, which ‘will advance the world’s knowledge of AI safety and it will carefully examine, evaluate and test new types of AI’, the Prime Minister said. Here’s the discussion paper and the two-day programme.

1–2 November: The Global Cybersecurity Forum gathers in Riyadh, Saudi Arabia, for its annual event, which will this year be dedicated to ‘charting shared priorities in cyberspace’.

3–4 November: The 4th AI Policy Summit takes place in Zurich, Switzerland (at the ETH Zurich campus) and online. Diplo (publisher of this newsletter) is a strategic partner.

4–10 November: The Internet Engineering Task Force (IETF) is gathering in Prague, Czechia, and online for its 118th meeting.

6 November: Deadline for very large online platforms and search engines to publish their first transparency reports under the EU’s Digital Services Act. A handful of platforms have already published theirs: Amazon, LinkedIn, Pinterest, Snapchat, Zalando, Bing, and yes, TikTok.


#ReadingCorner
Image of human head made up of wired connections

Exploring the state of AI in 2023

The topic of AI safety, which appears for the first time in the annual State of AI report, has gained widespread attention and spurred governments and regulators worldwide into action, the 2023 report explains. Yet, beneath this flurry of activity lie significant divisions within the AI community and a lack of substantial progress towards achieving global governance, with governments pursuing conflicting approaches. Read the report.


How to manage AI risks

A group of AI experts has summed up the risks of upcoming, advanced AI systems in a seven-page open letter that urges prompt action, including regulations and safety measures by AI companies. ‘Large-scale social harms and malicious uses, as well as an irreversible loss of human control over autonomous AI systems are looming’, they warn. 


AI and social media: Driving us down the rabbit hole

Harvard professor Lawrence Lessig holds a critical stance on the impact of AI and social media, and an even more critical perspective on the human capacity for critical thinking. ‘People have a naïve view: They open up their X feed or their Facebook feed, and [they think] they’re just getting stuff that’s given to them in some kind of neutral way, not recognizing that behind what’s given to them is the most extraordinary intelligence that we have ever created in AI that is extremely good at figuring out how to tweak the attitudes or emotions of the people they’re engaging with to drive them down rabbit holes of engagement.’ Read the interview.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior Editor Digital Policy, DiploFoundation

DW Weekly #133 – 23 October 2023


Dear all,

The spread of illegal content and fake news linked to the Middle East conflict has been worrying EU and US policymakers, who are putting more pressure on social media companies to step up their efforts. The USA-China trade war is escalating with tighter restrictions on US chip exports to China and retaliation by China. As other updates confirm, it’s been anything but blue skies as of late. But let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

China unveils Global AI Governance Initiative as part of Belt and Road

In a significant stride towards shaping the trajectory of AI on a global scale, China’s President Xi Jinping announced the Global AI Governance Initiative (GAIGI) during the opening speech of last week’s Third Belt and Road Forum. 

The initiative is expected to bring together all 155 countries that make up the Belt and Road Initiative. This will make it one of the largest global AI governance forums.

Key tenets. Releasing additional details, the Foreign Ministry’s spokesperson said the strategic initiative will focus on five aspects. It will ensure that AI development remains synonymous with human progress, which is quite a noble aim. It will promote mutual benefit, and ‘oppose drawing ideological lines or forming exclusive groups to obstruct other countries from developing AI’ – a clear dig at Western allies. It will establish a testing and assessment system to evaluate and mitigate AI-related risks, which reminds us of the risk-based approach the EU is taking in its upcoming AI Act. It will also support efforts to develop consensus-based frameworks, ‘with full respect for policies and practices among countries’, and provide vital support to developing nations to build their AI capacities.

Chinese President Xi Jinping stands behind a wide podium covered in flowers.

First-mover advantage. In recent months, China has been moving swiftly to regulate its homegrown AI industry. Its interim measures on generative AI, effective since August, were a world first, and it has also introduced rules for the ethical application of science and tech (including AI). China is now looking at basic security requirements for generative AI. Few acknowledge that despite its deeply ideological approach, China was the first to regulate generative AI, giving itself significant mileage in the race to influence global standards – so much so that even US experts are now suggesting that the USA and its allies should engage with China ‘to learn from its experience and explore whether any kind of global consensus on AI regulation is possible’.

China’s approach. Interestingly, the interim measures are a watered-down version of the initial draft – or at least, a less robust one – a signal that China favours a more industry-friendly approach. A few weeks after the measures came into effect, eight major Chinese tech companies obtained approval from the Cyberspace Administration of China (CAC) to deploy their conversational AI services. Between the USA’s underwhelming progress on AI regulation and the EU’s strict approach, China’s middle path could easily gain appeal on the international stage.

Quasi-global. The international audience watching that stage is very large. With over 150 countries forming part of the Belt and Road Initiative, China’s Global AI Governance Initiative will be one of the largest AI governance forums. But the coalition’s size is not the only reason why the initiative will be highly influential. As the Belt and Road Initiative celebrates its 10th anniversary, China is extolling its success in stimulating nearly USD 1 trillion in investment, forming more than 3,000 cooperative projects, creating 420,000 jobs, and lifting 40 million people out of poverty. All of this gives China geopolitical clout and leverage.

Showtime. China’s Global AI Governance Initiative will undoubtedly influence other processes. Of the coalitions that have launched their own vision or process for regulating AI, the most recent is the Association of Southeast Asian Nations (ASEAN), which is working on a draft guide to AI ethics. The unveiling of China’s initiative also comes a few weeks before the UK’s AI Safety Summit (see programme), which China is set to attend (though it’s still unclear who will represent it – a decision that will indicate the significance China attaches to the UK process).

Xi’s speech conveys a willingness to engage: ‘We stand ready to increase exchanges and dialogue with other countries and jointly promote the sound, orderly and secure AI development in the world’. But as China’s Global Times writes, ‘China is already a very important force in global AI development… there is no way the USA and its Western allies can set up a system of AI management and regulation while squeezing China out.’


Digital policy roundup (16–23 October)

// DISINFORMATION //

EU formally asks Meta, TikTok for details on anti-disinformation measures

As the Middle East conflict unfolds, ‘the widespread dissemination of illegal content and disinformation linked to these events carries a clear risk of stigmatising certain communities and destabilising our democratic structures’, to quote European Commissioner Thierry Breton.

Last week, we wrote how Breton personally reached out to X’s Elon Musk, TikTok’s Shou Zi Chew, Alphabet’s Sundar Pichai, and Meta’s Mark Zuckerberg, urging them to promptly remove illegal content from their platforms. Two days later, X received a formal request for information.

Now, the European Commission has sent formal requests for information about the measures they have taken to curb the spread of illegal content and disinformation to Meta and TikTok (Alphabet has been spared so far, it seems). Meta has been documenting the measures publicly.

Deadlines. The companies must provide the commission with information on crisis response measures by 25 October and measures to protect the integrity of elections by 8 November (plus in TikTok’s case, how it’s protecting kids online). As we mentioned previously, we don’t think this exchange will stop with just a few polite letters.


DSA not yet fully operational? Honour it just the same

The European Commission is pressing EU member states to apply parts of the DSA months ahead of its full implementation on 17 February 2024. The ongoing wars and instability have led to an ‘unprecedented increase in illegal and harmful content being disseminated online’, it said.

The commission is appealing to the countries’ ‘spirit of sincere cooperation’ to form, ahead of schedule, the informal network planned for when the DSA applies fully, to take coordinated action, and to assist it with enforcing the DSA.

Why is it relevant? It shows the commission’s (or rather, Breton’s) eagerness to see the DSA applied. It’s the kind of pressure that one can hardly choose to ignore.


US senator urges social media platforms to curb deceptive news

Disinformation is not just a concern for European policymakers. US Senator Michael Bennet has also written to the CEOs of Meta, Google, TikTok, and X, urging prompt action against ‘deceptive and misleading content about the Israel-Hamas conflict’, which he says is ‘spreading like wildfire’.

Bennet’s letter was quite critical: ‘In many cases, your platforms’ algorithms have amplified this content, contributing to a dangerous cycle of outrage, engagement, and redistribution… Your platforms have made particular design decisions that hamper your ability to identify and remove illegal and dangerous content.’

Why is it relevant? First, it shows that concerns about the spread of disinformation and illegal content in the context of the Middle East conflict are not limited to European policymakers (although the two sides’ approaches haven’t been quite the same). Second, Bennet is drawing attention to the platforms’ algorithms (something the EU did not mention), which have arguably played a significant role in inadvertently promoting misleading content and creating filter bubbles.

Screenshot of a post by Michael Bennet: ‘Because of social media companies’ practices, deceptive and misleading content about the Israel-Hamas conflict is spreading like wildfire. We need an independent agency able to write rules to prevent foreign disinformation and increase transparency.’ The post is accompanied by a blurb from The Hill: ‘Senate Democrat questions tech giants on efforts to stop false Israel-Hamas conflict content.’ It links to his intervention in the US Senate: https://trib.al/PnNXOBl



// CHIPS //

USA tightens restrictions on semiconductor exports to China

The US Department of Commerce’s (DOC) Bureau of Industry and Security (BIS) has tightened export restrictions on advanced semiconductors to China and other countries that are subject to an arms embargo. In practice, this means that China will be unable to obtain high-end chips that are used to train powerful AI models and equipment that can enable the production of tiny chips that are used for AI.

China reacted strongly to the BIS decision, calling the measures ‘unilateral bullying’ and an abuse of export controls. The measures are an expansion of the semiconductor export restrictions implemented last year.

Why is it relevant? This latest tit-for-tat is meant to close loopholes from the 2022 measures. US Secretary of Commerce Gina Raimondo says that the objective remains unchanged: to restrict China from advancements in AI that are vital for its military applications. But the Washington-based Semiconductor Industry Association cautions that export controls ‘could potentially harm the US semiconductor ecosystem instead of advancing national security’.


The heads of US, UK, Australian, Canadian and New Zealand security agencies meeting publicly for the first time, on a stage at Stanford University. Credit: FBI

// CYBERSECURITY //

Five Eyes warn of China’s ‘innovation theft’ campaign

The heads of the Five Eyes security agencies – composed of the USA, UK, Australia, Canada and New Zealand – have warned of a sizeable Chinese espionage campaign to steal commercial secrets. The agency heads met publicly for the first time during a security summit held in Silicon Valley. Over 20,000 people in the UK have been approached online by Chinese spies, the head of the UK’s MI5 told the BBC.


// NET NEUTRALITY //

US FCC vote kicks off process to restore net neutrality rules

The US Federal Communications Commission (FCC) has voted in favour of starting the process to restore net neutrality rules in the USA. The rules were originally adopted under the Obama administration in 2015, but repealed a few years later under the Trump administration.

The steps ahead. Although net neutrality proponents will have breathed a collective sigh of relief at this revival, the process involves multiple steps, including a period for public comment.

Why is it relevant? We won’t state the obvious about net neutrality, or how the FCC will broaden its reach. Rather, we’ll highlight what chairwoman Jessica Rosenworcel said last week: There are already several state-led open internet policies that providers are abiding by right now; it’s time for a national one.


// COMPETITION //

South Africa investigating competition in local news media and adtech market

South Africa’s Competition Commission has launched an investigation into the distribution of media content and the advertising technology (adtech) markets that link buyers and sellers of digital advertising. 

The investigation will also determine whether digital platforms such as Meta and Google are engaging in unfair competition with local news publishers by using their content to generate advertising revenue.

Why is it relevant? First, it shows how global investigations – most notably in Australia and Canada – are drawing attention to Big Tech’s behaviour in other markets, and are influencing the measures taken by other regulators. Second, it reflects rising concerns about the shift from print advertising to digital content and advertising – a trend that is not sparing anyone.


// DIGITAL EURO //

ECB launches prep phase for digital euro

The European Central Bank (ECB) has announced a two-year prep phase for the digital euro, which will work on its regulatory framework and the technical setup. The phase starts on 1 November, and comes after a two-year research phase. 

The ECB made it clear that the launch doesn’t mean that the digital euro is a certainty. But if there’s eventually a green light, the digital euro will function similarly to online wallets or bank accounts, and will be guaranteed by the ECB. It will only be available to EU residents.

Why is it relevant? Digital currencies issued by central banks – known as central bank digital currencies (CBDCs) – are developing rapidly worldwide. Last year, a report by the Bank for International Settlements found that two-thirds of the world’s central banks are considering introducing a CBDC in the near future. Even though only a few countries – such as China, Sweden, and a handful of Caribbean nations – have launched digital currencies or pilot projects, the EU is treading slowly but surely, expecting the digital euro to coexist alongside physical cash and introducing measures to safeguard its existing commercial banking sector.


The week ahead (23–30 October)

21–26 October: ICANN78, the organisation’s 25th annual general meeting, is ongoing in Hamburg, Germany and online.

24–26 October: The CEOs of some of the world’s leading telecoms operators are meeting in Paris for the 5G World Summit this week. 

25–26 October: The European Commission’s Global Gateway Forum – dubbed the European response to China’s Belt and Road Forum – is taking place in Brussels. 

25–27 October: Nashville, Tennessee, will host the 13th (ISC)2 Security Congress, convening the cybersecurity community in person and online.


#ReadingCorner

Online abuse of kids ‘escalating’

Child sexual exploitation and abuse online is escalating worldwide, in both scale and methods, the latest threat assessment by the WeProtect Global Alliance warns. To put this into numerical perspective, the 32 million reports of abuse material made in the USA in 2022 dwarf the figures from 2019. It gets worse: ‘The true scale of child sexual exploitation and abuse online is likely greater than this as a lot of harm is not reported.’ Read the report, including its recommendations.

File photo of a child using a digital device.

If abuse is on the rise, why isn’t the tech industry doing more?

As the eSafety Commissioner of Australia noted last week, some of the biggest tech companies just aren’t living up to their responsibilities to halt the spread of online child sexual abuse content and livestreaming. 

‘Within online businesses much of the child safety and wider consumer agenda is marked as an overhead cost not a profit centre …’, writes John Carr, a leading UK expert in child internet safety. ‘Companies will obey clearly stated laws. But the unvarnished truth is many are also willing to exploit any and all available wiggle room or ambiguity to minimise or delay the extent of their engagement with anything which does not contribute directly to the bottom line. If it makes them money they need no further encouragement. If it doesn’t, they do.’ Read the blog post.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior Editor Digital Policy, DiploFoundation

IGF 2023 – Final Report

Kyoto, 8 – 12 October 2023

This year’s IGF came at a time of heightened global tension. As the Middle East conflict unfolded, aspects related to internet fragmentation, cybersecurity during times of war, and mis- and disinformation entered prominently into the IGF 2023 debates.

During the discussions at this year’s record-breaking IGF (with 300 sessions, 15 days of video content, and 1,240 speakers), participants also debated other topics at length – from the Global Digital Compact (GDC) and other processes to AI policy (such as the Hiroshima AI Process – more further down), data governance dilemmas, and narrowing the digital divide.

The following 10 questions are derived from detailed reports from hundreds of workshops and sessions at the IGF 2023.

10 questions debated at IGF 2023


1. How can AI be governed?


There seems to be some form of general consensus among stakeholders – both public and private – that we need to govern AI if we are to leverage it for the benefit of humanity. But what exactly to govern, and, even more importantly, how to do so, remains open for debate.

And so it is no surprise that the IGF featured quite a few such debates, as sessions explored national and international AI governance options, highlighted the need for transparency in both the technical development of AI systems and in the governance processes themselves, and questioned whether to regulate AI applications/uses or capabilities.

Highlights 

Just as was the case with the internet, AI is set to impact the entire world, albeit in different ways and at different speeds. And so, setting up some form of international governance mechanism to guide the development and deployment of human-centric, safe, and trustworthy AI is essential. The jury is still out on whether this should take the shape of international guiding principles, stronger regulations, new agencies, or some combination of these.

But there is already a body of work to build upon, from the OECD’s AI principles and the UNESCO recommendation on AI ethics to the G7 Hiroshima AI Process and the EU’s approach to developing voluntary AI guardrails ahead of the AI Act coming into force. Japan’s Prime Minister announced at the start of the IGF that a draft set of guiding principles and a code of conduct for developers of advanced AI are to be put on the table for approval at the upcoming G7 Summit. The texts form part of the Hiroshima AI Process, kickstarted during last May’s G7 Summit.

If the world is to move ahead with some form of global AI governance approach, then this approach needs to be defined in an inclusive manner. There is a tendency for countries and regional blocs with more robust regulatory frameworks to shape governance practices globally, but the voices and interests of smaller and developing countries must be more meaningfully represented and considered.

Take Latin America and Africa, for example: They provide significant raw materials, resources, data, and labour for AI development, but their participation in global processes does not strongly reflect this. Moreover, the discussion on AI harms is still predominantly framed through the Global North lens. To ensure an inclusive and fair AI governance process, reducing regional disparities, strengthening democratic institutions, and promoting transparency and capacity development are essential.

The Brussels effect – where EU regulations made in Brussels become influential worldwide – featured in some discussions. The EU’s AI Act will likely influence regulatory approaches in other jurisdictions globally. However, countries must consider their unique local contexts when designing their regulations and policies to ensure they respond to and reflect local needs and realities. And, of course, this so-called AI localism should also apply when integrating local knowledge systems into AI models. By incorporating this local knowledge, AI models can better address distinct local and regional challenges.

Multistakeholder cooperation in shaping AI governance mechanisms was highlighted as essential. With the private sector driving AI innovation, its involvement in AI governance is inevitable and indispensable. Such an involvement also needs to be transparent, open, and trustworthy. 

But it is not all about laws and regulations. Technical standards also have a role to play in advancing trustworthy AI. Different technical standards are necessary within the AI ecosystem at different levels, encompassing certifications for evaluating quality management systems and ensuring product-level adherence to specific benchmarks and requirements. These standards aim to maintain efficient operations, promote reliability, and foster trust in AI products and services.

It was argued that a balanced mix of voluntary standards and legal frameworks could be the way forward. Here, too, there is a need for actors in developing countries to actively engage in shaping AI standards rather than merely adapting to standards set by external entities.

While we wait for new international regulations to be developed, a wide range of actors could adopt or adapt new or existing voluntary standards for AI. For instance, the Institute of Electrical and Electronics Engineers (IEEE) developed a value-based design approach that UNICEF uses. The implementation of AI also requires a deep understanding of established ethical guidelines. To this end, UNESCO has published the first-ever global Guidance on Generative AI in Education and Research, which aims to support countries in implementing immediate actions and planning long-term policies to properly use generative AI tools. 

Aside from laws, regulations, and technical standards, what else could help achieve a human-centric and inclusive approach to AI? Forums and initiatives such as the Global Partnership on AI (GPAI), the Frontier Model Forum, the Partnership on AI, and the MLCommons have a role to play. They can promote the secure and ethical advancement of cutting-edge AI models – by establishing common definitions and understandings of AI system life cycles, creating best practices and standards, and fostering information sharing between policymakers and industry. And states should look into allocating resources to the development of publicly accessible AI technology as a way to ensure wider access to AI technology and its benefits.


2. What will be the future of the IGF in the context of the Global Digital Compact (GDC) and the WSIS+20 Review Process?


From 2003 to 2005, the World Summit on the Information Society (WSIS) produced several outcome documents meant, among other goals, to advance a more inclusive information society and establish key principles for what was back then a fresh new term: internet governance. The IGF itself was an outcome of WSIS.

In 2025, a WSIS+20 review process will look at the progress made in implementing WSIS outcomes and will, most likely, decide on the future of the IGF (as its current mandate expires in 2025). In parallel with preparing for WSIS+20, UN member states will also have to negotiate the Global Digital Compact, expected to be adopted in 2024 as a pact for an ‘open, free, and secure digital future for all’.

So, the next two years are set to be intensive. New forums are under consideration. Some existing structures may be strengthened. International organisations are gearing up for an ‘AI mandate race’ that will shape their future and, in some cases, question their very existence.

The IGF’s future will be significantly influenced by the rapidly changing policy environment, as discussed in Kyoto.  

Highlights 

The Global Digital Compact (GDC) sparked a lot of interest – in official sessions (including one main session), bilateral meetings, and corridor chats – with two underlying issues:

IGF input into GDC drafting: The IGF community would like to see more multistakeholder participation throughout the GDC drafting process. Mimicking the IGF mode of operation is unrealistic, as the GDC will be negotiated under UN General Assembly rules. However, while following the UNGA rules of procedure, the GDC should continue to make every effort to include all stakeholders’ perspectives, as it has in the past. Stakeholders were also encouraged to communicate with their national representatives in order to contribute more to the GDC process. (Bookmark our GDC tracker)

Participation of the IGF in GDC implementation: Several speakers stressed that the IGF should play a prominent role in implementing the GDC. The IGF Leadership Panel, for example, argued that the IGF should play a central role in GDC follow-up processes. The relationship between the IGF and a potential Digital Cooperation Forum, as suggested in the UN Secretary-General’s policy brief, was the ‘elephant in the room’ during the IGF in Kyoto.

Inclusion in governance was in focus during the session on the participation of Small Island Developing States (SIDS) in digital governance processes. The debate brought up an interesting paradox. Although SIDS have the formal possibility of participating in the IGF, they often lack the resources to do so effectively. Other small groups from civil society, business, and academia encounter a similar participatory paradox.

Changes in the global architecture may have a two-fold impact on SIDS. Firstly, the proliferation of digital forums could further strain their already stretched participation capacity. Secondly, the GDC may propose new forms of participation reflecting the specificities of small actors with limited resources. For any future digital governance architecture to work, it will be important for SIDS and other small actors, from businesses to civil society, to be able to have stronger voices.


The IGF debates indicated the renewed relevance of the WSIS process ahead of review in 2025. The G77 is particularly keen to base GDC negotiations on the WSIS Tunis Agenda and the Geneva Declaration of Principles, as stated in the recently adopted G77 Havana Declaration. The G77 argued for a triangulation of digital governance structures among Agenda 2030, WSIS, and the GDC. 

Whatever policy outcomes are reflected in the GDC and the WSIS+20 review, the IGF should be refined, improved, and adapted to the rapidly changing landscape of AI and broader digital developments. More attention should also be given to involving missing communities in IGF debates. The IGF Plus approach was mentioned in discussions in Kyoto.

In Kyoto, international organisations fuelled the race for AI mandates to secure a place in the developing frameworks for handling AI. According to Diplo’s analysis of AI in IOs, almost every UN organisation has some AI initiative in place.

In the emerging AI era, many organisations are faced with existential questions about their future and how to manage new policy issues. The primary task facing the UN system and its member states in the upcoming years will be managing the race to put an AI mechanism in place. Duplication of effort, overlapping mandates, and the inevitable confusion when addressing the impact of AI could impede effective multilateralism.


3. How to use IGF’s wealth of data for an AI-supported, human-centred future?


The immense amount of data accumulated through the IGF over the past 18 years is a public good that belongs to all stakeholders. It presents an opportunity for valuable insights when mined and analysed effectively, with AI applications serving as useful tools in this process.

Highlights 

The IGF has accumulated a vast repository of knowledge generated by the discussions at the annual forum and its communities over the years (e.g. session recordings and reports; documents submitted for public consultation; IGF messages and annual reports; outputs of youth and parliamentary tracks, best practice forums, policy networks, and dynamic coalitions; summaries of MAG meetings; reports from national, regional and youth IGF initiatives). But this is an underutilised resource that could be used to build a sustainable, inclusive, and human-centric digital future.

Diplo and the GIP supported the IGF Secretariat in organising a side session to discuss how to unlock the IGF’s knowledge to gain AI-driven insights for our digital future.

Jovan Kurbalija smiles as he sets down the microphone on a panel at IGF2023.

AI can increase the effectiveness of disseminating and utilising the knowledge generated by the IGF. It can also help identify underrepresented and marginalised groups and disciplines in the IGF processes, allowing the IGF to increase its focus on involving them. 

Moreover, AI can assist in managing the busy schedule of IGF sessions by linking them to similar discussions from previous years, aiding in coordinating related themes over time. It can visually represent hours of discussions and extensive content as a knowledge graph, as demonstrated by Diplo’s experiment with AI-enhanced reporting at IGF2023.

An intricate multicoloured lace network of lines and nexuses representing a knowledge graph of Day 0 of IGF2023.
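The knowledge-graph idea can be loosely sketched in a few lines of Python. This is an illustration only, not Diplo’s actual pipeline: the topic list and session summaries below are invented placeholders, and a real system would use far richer text analysis.

```python
# Illustrative sketch: build a weighted topic co-occurrence graph
# from session summaries. Topics and summaries are placeholders.
from collections import Counter
from itertools import combinations

TOPICS = {"ai", "data", "fragmentation", "cybersecurity", "inclusion"}

def extract_topics(summary: str) -> set[str]:
    """Naive topic spotting: keep known topics mentioned in the text."""
    words = {w.strip(".,").lower() for w in summary.split()}
    return TOPICS & words

def build_graph(summaries: list[str]) -> Counter:
    """Edge weights count how often two topics co-occur in a session."""
    edges = Counter()
    for summary in summaries:
        for pair in combinations(sorted(extract_topics(summary)), 2):
            edges[pair] += 1
    return edges

sessions = [
    "AI governance and data flows dominated the main session.",
    "Fragmentation risks touch cybersecurity and data sovereignty.",
    "Inclusion of small states in AI policy debates.",
]
graph = build_graph(sessions)
print(graph[("ai", "data")])  # → 1 (one session links AI and data)
```

Each edge weight counts how many sessions mention both topics – the kind of structure that can then be laid out visually as a graph of themes and their connections.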

Importantly, preserving the IGF’s knowledge and modus operandi can show the relevance and power of respectful engagement with different opinions and views. Since this approach is not automatic in our time, the IGF’s impact could extend beyond internet governance and have a more profound effect on the methodology of global meetings.


4. How can risks of internet fragmentation be mitigated?


The escalating threat of fragmentation challenges the internet’s global nature. Geopolitical tensions, misinformation, and digital protectionism reshape internet governance, potentially compromising its openness. A multidimensional approach is crucial to understanding and mitigating fragmentation. Inclusive dialogue and international norms play a vital role in reducing these risks. 

Highlights 

Internet fragmentation would pose significant challenges to the global and interconnected nature of the internet. It would hinder communication, stifle innovation, and undermine the intended functioning of the internet. Throughout the week, different sessions tackled these issues and how to reduce the risks of internet fragmentation.

The internet, as we know it, cannot be taken for granted any more. Geopolitical tensions, the weaponisation of the internet, dis- and misinformation, and the pursuit of digital sovereignty through protectionism could potentially fracture the open nature of the internet. The same can be said for restrictions on access to certain services, internet shutdowns, and censorship.

One way of examining the risks is to look at the different dimensions of fragmentation – fragmentation of the user experience, fragmentation of the internet’s technical layer, and fragmentation of internet governance and coordination (explained in detail in this background paper) – and the consequences each of them carries.

Policymakers can also use this approach to create a cohesive and comprehensive regulatory approach that does not lead to internet fragmentation (for instance, a layered approach to sanctions can help prevent unintended consequences like hampering internet access). In fact, state control over the public core of the internet and its application layer is a major concern. Different technologies operate at several layers of the internet, and different entities manage those distinct layers.

Disruptions in the application layer could lead to disruptions in the entire internet. Therefore, governance of the public core calls for careful consideration, a clear understanding of these distinctions, and deep technical knowledge. 

International norms are critical to reducing the risk of fragmentation. International dialogue in forums like the IGF is invaluable for inclusive discussions and contributions from diverse stakeholders, including different perspectives about fragmentation between the Global North and Global South.

Countries pursue their policies at the national level, but they also need to be mindful of harmonising them with regulatory frameworks that have extraterritorial reach. In developing national and regional regulatory frameworks, it is indispensable to elicit multistakeholder input, particularly considering the perspectives of marginalised and vulnerable communities. Public policy functions cannot be entrusted entirely to private corporations (or even governments). The involvement of technical stakeholders in public policy processes is essential for sound, logical, informed decision-making and improved governance that protects the technical infrastructure.


5. What challenges arise from the negotiations on the UN treaty on cybercrime?


As negotiations on the new UN cybercrime treaty entered the last mile, the treaty was a highly prominent topic at IGF2023. The broad scope of the current draft of the UN Cybercrime Treaty, the lack of adequate human rights safeguards, the absence of a commonly agreed-upon definition of cybercrime, and the uncertain role of the private sector in combating cybercrime were some of the crucial challenges addressed during the sessions.

Highlights

As the Main Session: Cybersecurity, Trust & Safety Online, and the session Risks and Opportunities of a new UN Cybercrime Treaty noted, provisions to ensure human rights protection seem blurred. The wide discretion left to states in adopting the provisions related to online content, among others, could leave plenty of wiggle room for authoritarian regimes to target and arbitrarily prosecute activists, journalists, and political opponents. Additionally, retaining personal data from individuals accused of an alleged cybercrime offence could open the door for the misuse and infringement of their right to privacy.

Provisions regarding cybercrime offences need to be clarified, too, as there is no commonly agreed-upon definition of cybercrime. For now, it is clear that we need to separate cyber-dependent serious crimes (like terrorist attacks using autonomous cyberweapons) from cyber-enabled actions (like online speech) that help commit crimes and violate human rights. Additionally, there is a need to overcome cybercrime impunity, especially in cases where states are unwilling or unable to combat it.

International cooperation between states and the private sector is yet another aspect that members have to agree on. Essentially, there is a need to ensure more robust and comprehensive provisions to address capacity development and technical assistance. It was noted that these provisions should facilitate cooperation across different legal jurisdictions and promote relationships with law enforcement agencies.

The role of the private sector is another stumbling block in the negotiations. The proposed provisions put the private sector in a rather challenging position, as companies would have to comply with the laws of different jurisdictions. This means that conflicts of laws, including with existing international instruments such as the Budapest Convention, would be inevitable and would somehow need to be harmonised.

What if states cannot agree on an international treaty? Well, there are still ways to strengthen the fight against cybercrime. Options include establishing a database of cybersecurity experts for knowledge sharing, pooling knowledge for capacity development, expanding the role of organisations like INTERPOL, and encouraging states and businesses to allocate more resources to strengthen their cybersecurity posture.

Has the UN Cybercrime Treaty draft opened Pandora’s box? That depends on one’s perspective. What is clear from the sessions is that many challenges need to be addressed as the ‘deadline’ for the UN Cybercrime Treaty approaches.


6. Will the new global tax rules be as effective as everyone is hoping for?


Over the years, the growth of the digital economy – and how to tax it – has led to major concerns over the adequacy of tax rules. The IGF discussion focused on the necessity for clear and open dialogues on digital taxation, and for a just and equitable distribution of tax revenue. There are hurdles to implementing effective taxation measures. The involvement of a wider range of stakeholders could be pivotal in shaping workable solutions for taxing the businesses of tech titans.

Highlights

Global tax rules could ameliorate the unfair consequences of tax havens, provide consistent approaches to allocating profits, and reduce uncertainty for multinational companies. The OECD/G20 made significant steps in this direction: In 2021, over 130 countries came together to support a new two-pillar solution. This will introduce a 15% effective minimum tax rate in most jurisdictions and will oblige multinationals to pay tax in countries where their users are located (rather than where they have a physical presence). In parallel, the UN Tax Committee revised its UN Model Convention to include a new article on taxing income from digital services.
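In rough terms, the 15% minimum works as a ‘top-up’: if a multinational’s effective tax rate in a jurisdiction falls below the floor, the difference is collected on its profits there. The sketch below is a heavily simplified illustration of that logic – real Pillar Two rules operate jurisdiction by jurisdiction and include a substance-based carve-out, both omitted here, and the figures are invented.

```python
# Simplified sketch of the minimum-tax ('top-up') logic behind the
# 15% floor. Not the actual Pillar Two computation: the per-jurisdiction
# mechanics and substance-based carve-out are ignored.
MIN_RATE = 0.15  # the agreed 15% effective minimum

def top_up_tax(profit: float, taxes_paid: float) -> float:
    """Extra tax owed when the effective rate falls below the floor."""
    if profit <= 0:
        return 0.0
    effective_rate = taxes_paid / profit
    top_up_rate = max(0.0, MIN_RATE - effective_rate)
    return round(top_up_rate * profit, 2)

# A multinational booking 100m of profit in a 5% jurisdiction
# (effective rate 5%) would owe a further 10m under the floor.
print(top_up_tax(100.0, 5.0))   # 10.0
print(top_up_tax(100.0, 20.0))  # 0.0 – already above 15%
```

The point of the floor is visible in the second call: once the effective rate clears 15%, no top-up applies, which is why the measure blunts (but does not eliminate) the appeal of tax havens.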

For these models to be effective, they need to fully counter the scenarios that have, in the past, allowed multinationals to reduce their tax bills. First, multinational corporations have traditionally shifted profits to low-tax jurisdictions, which has deprived countries in the Global South of their fair share of tax revenue. Second, neither of the two frameworks addresses the issue of tax havens directly (although the minimum tax will help mitigate this issue). Third, the OECD and UN models do not fully take into account the power dynamics between countries in the Global North (which has historically been in the lead in international tax policymaking) and the Global South. 

Until recently, countries in the Global South felt these measures alone were insufficient to ensure tax justice. They, therefore, opted to adopt various strategies to tax digital services, including the introduction of digital services taxes (DSTs) that target income from digital services.

Despite the OECD’s recent efforts to accommodate the interests of developing nations, experts from the Global South remain cautious, opining that these countries should carefully consider all implications before signing international tax treaties and perhaps even sign these treaties only after they see their effects play out.


7. How to address misinformation and protection of digital communication during times of war?


In the midst of ongoing conflicts, new concerns about the impact of misinformation have arisen. The primary concern is how this impacts civilians residing in volatile regions. Misinformation adds to confusion, division, and physical and psychological distress, especially for civilians caught in the middle. 

Digital communication also has a decidedly operational role in conflict situations, completely different from any military use. It should provide secure communication to reach and inform those in need. The security and robustness of digital networks therefore become critical in ensuring humanitarian assistance. 

Highlights

The old wisdom that the truth is the first victim of war has been amplified by digital technology. The session Safeguarding the free flow of information amidst conflict explained how disseminating harmful information can exacerbate pre-existing social tensions and grievances, leading to increased violence and violations of humanitarian law. 

The spread of misinformation can cause distress and psychological burdens among individuals living in conflict-affected areas. Misinformation hampers their ability to access potentially life-saving information during emergencies. The distortion of facts and the influence on beliefs and behaviours as a consequence of disseminating harmful information also raise tensions in conflict zones.

In times of peace, experts advocate for a multi-faceted approach to addressing misinformation in conflict zones. In times of war, the immediate concerns focus primarily on ensuring the safety and well-being of civilians. If communication channels are disrupted, the spread of misinformation can be even more dangerous.

In these situations, humanitarian organisations and tech companies must work together to establish secure channels and provide accurate information to those in need. Additionally, efforts should be made to counter cyber threats and protect critical infrastructure. In fact, with the growing reliance on a shared digital infrastructure, civilian entities are more likely to be inadvertently targeted through error or tech failure. The interconnectedness of digital systems means that an attack on one part of the infrastructure can have far-reaching consequences, potentially affecting civilians who are not directly involved in the conflict zone. 

The involvement of international organisations and governments is essential in coordinating these efforts and ensuring that humanitarian principles are upheld. Special consideration should also be given to the safety and protection of those working in the digital infrastructure sector during times of conflict.


8. How can data governance be strengthened?


Organised, transparent data governance is crucial in today’s digital landscape. It requires clear standards for coherence and consistency; an enabling environment built on effort, trust, and adaptability across all sectors; and public-private partnerships to address critical issues. Intermediaries play a key role in bridging gaps. The Data Free Flow with Trust (DFFT) concept, introduced by Japan in 2019, also promises to strengthen data governance by enabling global data flows while ensuring security and privacy.

Highlights

Data governance plays a critical role in ensuring the effective and responsible use of data, especially in today’s digital age. Discussions during an open forum on public-private partnerships served to identify important measures that can help improve or expand upon existing data governance approaches.

First, clear standards and operating procedures can promote coherence and consistency in data governance. The lack of coherence is one of the main reasons for underwhelming private sector contributions. By defining and implementing robust standards, both the public and private sectors could have a common framework to work upon, facilitating collaboration and maximising the potential for data-driven initiatives.

Second, an enabling environment is essential for effective data governance. This environment requires time, effort, proof-of-concept, trust, and adaptability. Creating such an environment necessitates the involvement of all sectors – public, private, and civil society. 

Third, public-private initiatives are crucial to helping bridge data gaps related to critical issues like climate change, poverty, and inequality. Collaboration between the public and private sectors allows for the pooling of resources, expertise, and knowledge, enabling a more holistic approach to addressing these challenges.

Successful public-private partnerships require investment, time, and trust-building efforts. Parties involved must dedicate time to cultivating relationships and fostering mutual understanding. This may include the participation of dedicated individuals from both the private sector and governmental organisations. Their active presence can facilitate effective communication, coordination, and alignment of goals, leading to fruitful collaborations.

Related to public-private initiatives is the role that intermediaries or brokers play in helping bridge the skills and capacity gaps between sectors, combining their expertise and resources to drive collaboration and support the achievement of the sustainable development goals.

The sustainability of public-private partnerships also depends on the size and global reach of the involved entities. For instance, large firms with global reach are well-positioned to enable such partnerships. They possess the necessary resources, capabilities, and networks to maintain and nourish relationships, ensuring long-term viability and impact in driving sustainable development. 

Much was also said about Data Free Flow with Trust (DFFT) – a concept first championed by Japan during the G20 summit in 2019 – which aims to strengthen data governance by facilitating the smooth flow of data worldwide while ensuring data security and privacy for users.

Speakers in High-Level Leaders Session I: Understanding Data Free Flow with Trust (DFFT) emphasised how the DFFT concept can help strengthen data governance in additional ways. It provides a framework for harmonising and aligning different national or regional perspectives, encourages public-private data partnerships, and promotes using regulatory and operational sandboxes as practical solutions to foster good governance among stakeholders.


9. How can the digital divide be bridged?


Although discussions on bridging the digital divide might seem repetitive, the persistence of this topic is warranted by the stark reality revealed in the latest data from the International Telecommunication Union (ITU): approximately 5.4 billion people are using the internet. That leaves 2.6 billion people offline and still in need of access.
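The two ITU figures are consistent with a world population of roughly 8 billion, which a couple of lines of arithmetic make explicit:

```python
# Quick arithmetic behind the ITU figures: 5.4 billion online plus
# 2.6 billion offline implies a total population of ~8 billion.
online_bn = 5.4
offline_bn = 2.6
total_bn = online_bn + offline_bn  # 8.0 billion

print(f"online share: {online_bn / total_bn:.1%}")   # 67.5%
print(f"offline share: {offline_bn / total_bn:.1%}")  # 32.5%
```

In other words, roughly one person in three remains offline – the gap the discussions below set out to close.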

Highlights

In the pursuit of universal and meaningful connectivity, precise data tracking emerges as a cornerstone for informed decision-making. Data tracking equips stakeholders with the insights needed to identify areas requiring attention and improvement. Through a blend of quantitative indicators (numerical data and statistical analysis) and a qualitative approach (subjective assessments, such as in-depth case studies), a comprehensive connectivity assessment is achieved, facilitating effective individual country evaluations. 


What needs to be improved? While the efforts of international organisations, especially the ITU and UNESCO, in data collection are complementary, they are often not perfectly coordinated. Other areas for improvement include the lack of quality data on how communities use the internet; the lack of reliable indicators for safety, security, and speed; and the reality that rural regions may not be fully reflected in the data collected.

There are several solutions, from regional collaboration and initiatives to the utilisation of emerging technologies. 

One proposed approach to expanding internet access involves utilising Low Earth Orbit (LEO) satellites. LEO satellites offer the potential to deliver real-time and reliable internet connectivity to remote or hard-to-reach regions worldwide. Nevertheless, several concerns have surfaced, primarily concerning the cost of accessing such services, their environmental impact, and the technical challenges associated with large-scale LEO satellite deployment.

For LEO satellites to be deployed and used effectively, countries need to review their laws, ensure they are in line with international space law, and get involved in international decision-making bodies like the ITU and COPUOS to help shape supportive policies and rules.

To bridge the digital divide, it is essential to address various factors and develop comprehensive strategies that go beyond connectivity. There is a need for digital solutions customised to fit specific local environments. These strategies must address issues regarding the affordability and availability of devices and technologies and the availability of content and digital skills, as these deficiencies still pose barriers to full internet access.

In the broader context of the digital divide, AI and large language models (LLMs) were highlighted as having the potential to redefine and expand digital skills and literacy. Moreover, including native languages in these models can enable digital interactions, particularly for individuals with lower literacy skills.  

The goal of bridging the digital divide can only be achieved through partnerships and collaborations embodied in regional initiatives. Thus, Regional Internet Registries (RIRs) have an important role, particularly in regions that are underserved or have limited access to internet resources.

RIRs often go beyond their narrow mandates in the allocation and registration of internet number resources within a specific region of the world. RIRs have facilitated collaboration and knowledge sharing by adopting a multistakeholder and regional approach, leading to a more connected and equitable internet landscape.

One of the RIRs’ main strengths is building community trust. This trust has been established through their work on regional and local issues such as connectivity and support for community networks and Internet Exchange Points (IXPs). 


The EU’s initiative, the Global Gateway, was identified as a good example of a collaborative effort to bridge the digital divide. Notable efforts under the project involve forging alliances with countries in Latin America and the Caribbean, implementing the Building the Europe Link to Latin America (BELLA) program for fibre optic cables, establishing regional cybersecurity hubs and strengthening the overall digital ecosystem.


10. How do digital technologies impact the environment?


We’ve broken too many environmental records this year. June, July, and August 2023 were the hottest three months ever documented; September 2023 was the hottest September ever recorded; and 2023 is firmly set to be the warmest year on record. Global temperatures will likely surge to record levels in the next five years. The discussion of the overall impact of digital technologies on the environment at the IGF was therefore particularly critical.

Highlights

Data show that digital technologies contribute 1% to 5% of greenhouse gas emissions and consume 5% to 10% of global energy.

Internet use comes with a hefty energy bill – even seemingly small actions like sending texts consume data and power. In fact, the internet’s carbon footprint amounts to 3.7% of global emissions.

AI, the coolest kid on the block, leaves a significant carbon footprint too: for instance, the training of GPT-3 resulted in 552 metric tons of carbon emissions, equivalent to driving a passenger vehicle over 2 million kilometres. ChatGPT ‘drinks’ a 500ml bottle of fresh water for every simple conversation of about 20 to 50 questions.

The staggering number of devices in use globally (over 6.2 billion) means frequent charging and significant energy consumption. Some of these devices also perform demanding computational tasks requiring substantial power, pushing consumption higher still. Moreover, the rapid pace of electronic device advancement and devices’ increasingly shorter lifespans have exacerbated the e-waste problem. 

In contrast, digital technologies also have the potential to cut emissions by 20% by 2050 in the three highest-emitting sectors – energy, mobility, and materials. 2050 is a bit far away, though, and immediate actions are critically needed to hit the 2030 Agenda targets.
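The driving equivalence in the highlights above can be sanity-checked with basic arithmetic, assuming an average passenger-vehicle emission factor of roughly 0.25 kg of CO2 per km (a ballpark assumption, not a figure from the source):

```python
# Sanity-check: how many km of driving match GPT-3's training emissions?
TRAINING_EMISSIONS_KG = 552 * 1000   # 552 metric tons, expressed in kg
CAR_EMISSIONS_KG_PER_KM = 0.25       # assumed average passenger vehicle

km_equivalent = TRAINING_EMISSIONS_KG / CAR_EMISSIONS_KG_PER_KM
print(f"≈ {km_equivalent / 1e6:.1f} million km")  # in the ballpark of 2 million km
```

With that assumed emission factor, the result lands at around 2.2 million km, consistent with the ‘over 2 million kilometres’ quoted above.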

What can we do? To harness the potential benefits of digitalisation and minimise its environmental footprint, we need to raise awareness of available sustainable resources and establish standards for their use. If policies are crafted and implemented right from the inception of a new technological direction, innovators and start-up stakeholders become aware of its carbon footprint, encouraging environmentally conscious design.

Initiatives from organisations such as the Institute of Electrical and Electronics Engineers (IEEE) in setting technology standards and promoting ethical practices, particularly around AI and its environmental impact, and collaboration among organisations like GIZ, the World Bank, and ITU in developing standards for green data centres, show that global cooperation is imperative for sustainable practices. 

We can also harmonise measurement standards to track the environmental impacts of digital technologies. This will enable policymakers and stakeholders to develop more effective strategies for mitigating the negative impacts.

We can use satellites and high-altitude connectivity devices to make the internet more sustainable, and take the internet to far-off places using renewable energy sources, like solar power. 

We can also leverage digital technologies to generate positive impacts. For instance, AI can be used to optimise electricity supply and demand, reduce energy waste and greenhouse gas emissions, and revolutionise the generation and management of renewable energy.



Data analysis of IGF 2023

To analyse the discussions at IGF, we first recorded them. The total footage runs to almost 15 days: 14 days, 21 hours, 22 minutes, and 30 seconds, to be precise. Talk about a packed programme!

Then we used DiploAI to transcribe IGF2023 discussions verbatim and counted 3,242,715 words spoken. That is nearly three times the length of the longest book in the world – Marcel Proust’s À la recherche du temps perdu. If an IGF 2023 book of transcripts were published, an average reader, at 218 words per minute, would need around 248 hours – more than ten days! – to read it cover to cover.
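The reading-time estimate is simple arithmetic; a minimal recomputation from the figures above (3,242,715 words at 218 words per minute):

```python
# Estimate how long it would take to read the full IGF 2023 transcript.
WORDS_SPOKEN = 3_242_715   # words counted in the IGF 2023 transcripts
READING_SPEED = 218        # words per minute for an average reader

minutes = WORDS_SPOKEN / READING_SPEED
hours = minutes / 60
days = hours / 24

print(f"{hours:.0f} hours, or about {days:.1f} days of non-stop reading")
```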

Using DiploAI, we analysed this text corpus and extracted key points, totalling 288,364 words. DiploAI then extracted the essence of discussions and the most important words spoken. The 10 most mentioned words were: AI, internet, data, support, government, importance, technology, issue, regulation, and global. It is interesting to note that the 11th most mentioned word was digital. 
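DiploAI’s pipeline isn’t public, but the frequency count behind a ‘most mentioned words’ list can be sketched with Python’s standard library (the stop-word list here is an illustrative assumption, not the one Diplo used):

```python
import re
from collections import Counter

# Illustrative stop words only; a production list would be far longer.
STOP_WORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "that", "for"}

def top_words(text: str, n: int = 10) -> list[tuple[str, int]]:
    """Return the n most frequent non-stop-words in a text corpus."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return counts.most_common(n)

sample = "AI and data: AI governance needs data, and the internet needs AI."
print(top_words(sample, 3))  # 'ai' dominates this toy sample
```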

Word cloud shows the relative frequency of words in the IGF 2023 text corpus. AI stands out clearly as the most prominent term, with internet, data, support, government, importance, technology, issue, regulation, and global following.

Prefix monitor

The prefixes followed a similar pattern to the previous three years. 

Digital was still the most used prefix, with a total of 8,661 references. This is a 62% increase in frequency compared to IGF 2022, when it was referenced 5,346 times.

Online and cyber took 2nd and 3rd places, respectively, with 3,682 and 3,532 mentions. While cyber remained in third place, its mentions nearly doubled – a 97% increase on last year’s 1,789. 

The word tech came in 4th place, as it did last year – a notable drop from 2021, when it held the 2nd spot.

Finally, virtual remained in 5th place, accounting for 2.5% of the analysed prefixes.
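The year-on-year changes quoted above are straightforward percentage calculations; a small helper reproduces them, using the mention counts from the text:

```python
def pct_change(old: int, new: int) -> float:
    """Percentage increase from old to new."""
    return (new - old) / old * 100

# Mention counts taken from the prefix monitor above.
print(f"digital: {pct_change(5_346, 8_661):.0f}% up on IGF 2022")
print(f"cyber:   {pct_change(1_789, 3_532):.0f}% up on IGF 2022")
```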


Diplo and GIP at IGF 2023

Reporting from the IGF: AI and human expertise combined

With 300+ sessions and 15 days’ worth of video footage featuring 1,240 speakers and 16,000 key points, IGF2023 was the largest and most dynamic IGF gathering so far. For the 9th consecutive year, the GIP and Diplo provided just-in-time reports and analyses from the discussions. This year, we added our new AI reporting tool to the mix. Diplo’s human experts and AI tool work together in this hybrid system to deliver a more comprehensive reporting experience.

This hybrid approach consists of several stages:

  1. Online real-time recording of IGF sessions. First, our recording team set up an online recording system that captured all sessions at the IGF. 
  2. Uploading recordings for transcription. Once the sessions were recorded, they were uploaded to our transcribing application. The recordings served as the raw material for our transcription team, which helped the AI application split transcripts by speaker. Identifying who said what is essential for analysing the multitude of perspectives presented at the forum – from government bodies to civil society organisations. This granularity enabled more nuanced interpretation during the analysis phase.
  3. AI-generated IGF reports. With the speaker-specific transcripts in hand (or on-screen), we utilised advanced AI algorithms to generate preliminary reports. These AI-driven reports identified key arguments, topics, and emerging trends in discussions. To provide a multi-dimensional view, we created comprehensive knowledge graphs for each session and individual speakers. These graphical representations mapped the intricate connections between speakers’ arguments and the corresponding topics, serving as an invaluable tool for analysis.
  4. Writing dailies. Our team of analysts used AI-generated reports to craft comprehensive daily analyses. 
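In outline, these stages form a linear pipeline. The sketch below is purely illustrative: the function names and the naive ‘Name: utterance’ splitting are assumptions for demonstration, not DiploAI’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    speaker: str
    text: str

def split_by_speaker(transcript: str) -> list[Segment]:
    """Stage 2 (simplified): split a 'Name: utterance' transcript by speaker."""
    segments = []
    for line in transcript.splitlines():
        if ":" in line:
            speaker, text = line.split(":", 1)
            segments.append(Segment(speaker.strip(), text.strip()))
    return segments

def report(segments: list[Segment]) -> dict[str, int]:
    """Stage 3 (simplified): a 'report' here is just words spoken per speaker."""
    totals: dict[str, int] = {}
    for seg in segments:
        totals[seg.speaker] = totals.get(seg.speaker, 0) + len(seg.text.split())
    return totals

raw = "Moderator: Welcome to the session.\nPanellist: Thank you for having me."
print(report(split_by_speaker(raw)))
```

In the real pipeline, the ‘report’ stage is of course AI-driven summarisation rather than word counting, and the output feeds the human analysts writing the dailies.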

You can see the results of that approach – session reports and dailies – on our IGF2023 Report page.

You are presently reading the culmination of our efforts: the top highlights from the discussions at IGF2023. These debates are presented in a Q&A format, tackling the Global Digital Compact (GDC), AI, concerns about internet fragmentation, negotiations on cybercrime, digital taxation, misinformation, data governance, the digital divide, and climate change.


Diplo crew in Kyoto

Diplo and the GIP were actively engaged at IGF2023, organising and participating in various sessions.


8–12 October

Diplo and GIP booth at IGF 2023 village


IGF panellists Sorina Teleanu and Jovan Kurbalija in front of a screen projecting the same view of the panel.

Sunday, 8 October

Bottom-up AI and the right to be humanly imperfect (organised by Diplo) | Read more


Pavlina Ittleson sits on an IGF panel in front of a table with a laptop and monitor.

Tuesday, 10 October

How to enhance participation and cooperation of CSOs in/with multistakeholder IG forums (co-organised by Diplo) | Read more


Anastasiya Kazakova speaks into a microphone at an IGF session.

Wednesday, 11 October

Ethical principles for the use of AI in cybersecurity (participation by Anastasiya Kazakova) | Read more


Sorina Teleanu speaks into a microphone on an IGF panel.

Thursday, 12 October

IGF to GDC – An Equitable Framework for Developing Countries (participation by Sorina Teleanu) | Read more


A panel moderator watches Vladimir Radunović on a projection screen as he speaks remotely at the session.

Thursday, 12 October

ICT vulnerabilities: Who is responsible for minimising risks? (co-hosted by Diplo) | Read more


Next Steps?

Line drawing depicts a busy street with cars and pedestrians. Many signposts and billboards congest the view with announcements for different IGF meetings.

Start preparing for IGF 2024 by following Digital Watch coverage of governance topics, actors, and processes.

DW Weekly #132 – 16 October 2023


Dear all,

As the conflict in the Middle East unfolds, and the world watches closely, those relying on social media for updates are left confused over what’s real and what’s not. This may be just the beginning of an age dominated by mis- and disinformation. In other news, there are new AI guidelines in the pipeline, while the EU has unveiled plans for a Digital Networks Act (which we’ll cover when things solidify a bit more).

Let’s get started.

Stephanie and the Digital Watch team

PS. Due to a technical glitch, this issue has been published a bit later than usual. Our apologies.


// HIGHLIGHT //

How the Middle East crisis is being (mis-)reported online

In recent days, as people have been grappling with the violence unfolding in Israel and Gaza, social media platforms have become inundated with graphic images and videos of the conflict. Without diminishing the gravity of what’s happening in the Middle East and the need to make it known, the problem with such social media content is that some of it is fake.

What’s fake, exactly? There’s a distinction between reporting something that didn’t happen and repurposing visuals from other conflicts for stronger impact. From a production point of view, there’s something sinister and malicious in fabricating a lie; during wartime, this is meant to raise alarm and stir up animosity. Reporting the truth but attaching a fake image is theoretically less sinister – although it is still a lie, and can fuel confusion, hostility, and public safety risks, and harm civil discourse among those who consume it.


An added complexity. In some cases, the issue is thornier still. Perpetrators go to the trouble of creating fake accounts and circulating uncaptioned imagery, leaving it to readers to draw their own conclusions. In this way, they can tap into biases and powerful emotions, such as fear, without having to take responsibility for the truthfulness of the content.

The worst part. Most parts of the world have taken sides. Polarisation has reached unprecedented heights. When individuals decide to condone a violent action (or not) based on whether an image really originates from their adversaries rather than their favoured faction, that brings out the worst in people. We won’t go into the gory details: Killing innocent children is an atrocity, regardless of who’s behind it – or whether a report has attached the correct image to it. 

Where it’s happening. Misinformation is as old as humanity and decades old in its current recognisable form, but social media has amplified its speed and scale. To say that online misinformation spreads like wildfire is an understatement. The challenge is compounded when shared by people with large followings. This could also happen if the press falls victim to the misinformation that’s flowing into newsrooms at a staggering scale.

Deprioritised. Earlier this year, Meta, Amazon, Alphabet, and Twitter laid off many of the team members focusing on misinformation and hate speech, as part of a post-COVID-19 restructuring aimed at improving financial efficiency. 

The EU takes action: X, Google, Meta, TikTok ordered to remove fake content   

It didn’t take long for European Commissioner Thierry Breton to request that X, YouTube (Google), Facebook (Meta), and TikTok take down fake content.

In letters sent to X’s Elon Musk and to TikTok’s Shou Zi Chew, Breton wrote how their platforms had been used to disseminate fake content related to ‘the terrorist attacks carried out by Hamas against Israel’ (in his letters to Google’s Sundar Pichai and to Meta’s Mark Zuckerberg, Breton simply wrote about a surge in such content ‘via certain platforms’).

Each letter reminded the platforms of their obligations under the new Digital Services Act (DSA), including prompt responses to take-down requests by law enforcement. In the case of X, Facebook, and TikTok, the Commissioner gave the platforms 24 hours to respond.

The case of X. In TikTok’s and Meta’s cases, things went more or less quiet. In X’s case, CEO Linda Yaccarino responded to the complaints, confirming the removal of hundreds of Hamas-linked accounts and the removal or flagging of thousands of pieces of content. Whether because the response was unsatisfactory or because this was always the predictable course of events, the European Commission, just a day later, sent X a formal request for information. Breton described the development as ‘a first step in our investigation to determine compliance with the DSA’, hinting that resolution will require more than a handful of exchanged letters. 

Elections in sight. The immediate worry may well be the Middle East conflict, but the longer-term worry is the string of elections in 2024 – from the European Parliament and national elections across Europe to the US presidential election. It’s a concern that affects many countries.

For platforms that laid off their disinformation teams to save money, the restructuring may prove costlier than expected.


Digital policy roundup (9–16 October)

// AI GOVERNANCE //

G7 to agree on AI guidelines by year’s end, Japan PM confirms

Japan confirmed that, by the end of the year, G7 leaders will agree on international guidelines for users of AI systems, as well as non-binding rules and a code of conduct for developers. This was announced by Prime Minister Fumio Kishida during last week’s Internet Governance Forum (IGF) in Kyoto. 

The texts form part of the Hiroshima AI Process, which was kickstarted during May’s G7 summit, held in Hiroshima. The upcoming summit will take place online.

Why is it relevant? There has been a lot of anticipation for the G7 rules on AI, even though they are non-binding. Japan, the current G7 president, will want to see its plans through by the end of the year, before it passes the baton to Italy.


ASEAN eyeing business-friendly AI rules

Southeast Asian countries are taking a business-friendly approach to AI regulation, according to a leaked draft text. The Association of Southeast Asian Nations (ASEAN) draft guide to AI ethics and governance asks companies to consider cultural differences and doesn’t prescribe categories of unacceptable risk. 

The guide is voluntary and meant to guide domestic regulations. ASEAN’s hands-off approach is seen as more business-friendly, as it limits the compliance burden and allows for more innovation. 

Why is it relevant? The EU has been discussing AI rules with countries in the region in a bid to convince them to follow its approach. But ASEAN’s approach clearly goes against the EU’s push for globally harmonised binding rules and is more aligned with other business-friendly frameworks.


// ANTITRUST //

Done deal: Microsoft’s acquisition of Activision Blizzard is approved

Microsoft has completed its USD68.7 billion acquisition of video games producer Activision Blizzard after the UK’s Competition and Markets Authority (CMA) approved the deal. The approval was granted after Microsoft presented the CMA with a restructured agreement under which it would transfer cloud streaming licensing rights to Ubisoft – a proposed offer the CMA had already said addressed its previous concerns.

The EU had already given the green light to the merger in May, but media reports said the European Commission was deciding whether it would look further into the restructured deal. Now it seems the European Commission won’t pursue this after all. However, the US Federal Trade Commission (FTC) intends to look into the licensing agreement Microsoft signed with Ubisoft.

Why is this relevant? Microsoft’s acquisition of Activision has been controversial. It’s Big Tech’s most expensive acquisition yet, and regulators feared its sheer scale could hurt competition and give Microsoft too much power in the gaming market. European regulators are satisfied, but will these approvals solve the bigger problem of Big Tech accumulating ever more power? 



// TAXATION //

IRS audit: Microsoft faces potential USD28.9 billion tax bill

The US Internal Revenue Service (IRS) has notified Microsoft that it owes USD28.9 billion in back taxes, penalties, and interest, covering the period 2004–2013 (nowhere near the USD160 million (GBP136 million) the company just paid the UK’s tax authority). The audit, which has been ongoing for over a decade, focuses on a deal in which Microsoft transferred intellectual property to a factory in Puerto Rico for more favourable tax treatment. 

Microsoft says the taxes it has already paid could decrease the final tax owed under the audit by up to USD10 billion. The company plans to appeal the IRS’ conclusions, and the case is expected to continue for several more years.

Why is it relevant?

First, it’s the largest audit in US history. The IRS may see the Microsoft case as a chance to prove its effectiveness in taking a more aggressive stance against corporations with endless resources. 

Second, it’s yet another example of Big Tech shifting income to low-tax jurisdictions specifically to lower their tax bill. 

Third, it coincides with the OECD’s latest step in its overhaul of global tax rules: the OECD has just published the text of a multilateral convention to implement the so-called Amount A of Pillar One. In simpler terms, this part of the new global rules will oblige some of the world’s largest tech companies to pay tax where their users are located, rather than where their corporate offices are based. 


The week ahead (16–23 October)

16–17 October: This year’s International Regulators Forum is being hosted in Cologne, Germany. The Small Nations Regulators Forum takes place tomorrow.

16–20 October: UNCTAD’s 8th World Investment Forum has returned as an in-person event hosted in Abu Dhabi, UAE.  

18 October: The US Federal Communications Commission meets on Wednesday to decide whether to kickstart the rulemaking process to restore the net neutrality rules it introduced in 2015 (reversed in 2017).

18–19 October: The Organization of American States (OAS) Cyber Symposium 2023 takes place in The Bahamas. It’s organised in partnership with the National CIRT of The Bahamas.

20 October: The 27th EU-US Summit, in Washington DC, will bring together US President Joe Biden, European Council President Charles Michel, and European Commission President Ursula von der Leyen to talk about cooperation in areas including AI and digital infrastructure. 

21–26 October: ICANN78 takes place in Hamburg, Germany, starting Saturday. It will be the organisation’s 25th Annual General Meeting.


#ReadingCorner
Little boy with a mobile phone on the street, a child and gadgets

Google proposes framework for protecting kids online 
‘Appropriate safeguards can empower young people and help them learn, connect, grow, and prepare for the future.’ This is how Google introduces its new framework for child safety, which tells policymakers how the company views existing and proposed rules concerning, for instance, age verification, parental consent, and personalised content. Read the blog and framework, published earlier today.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior Editor, Digital Policy, DiploFoundation

Digital Watch newsletter – Issue 83 – October 2023

Cover image of the Digital Watch newsletter, titled ‘Digital at UNGA 78’, with a drawing of the speakers at the UNGA and an artificial brain, representing AI, writing a report

Observatory

At a glance: What are the new trends in digital policy?

Geopolitics

The European Commission has published a preliminary list of four technology areas considered high-risk because of their potential misuse by autocratic regimes and in human rights violations; experts believe the list targets China. Across the Atlantic, while Washington weighs additional restrictions on chip exports, US companies will continue to sell chips to China – just not the most advanced ones. China’s trade council has called on the USA to reconsider rules limiting US investments in the Chinese tech sector, arguing that the restrictions are vague and fail to distinguish between military and civilian applications.

AI governance

G7 countries have agreed to create an international code of conduct for AI that would establish principles for the oversight and control of advanced forms of AI. In the same vein, Japan (which currently holds the G7 presidency) and Canada have published voluntary codes of conduct for companies developing AI. 

The initiative fits the recent trend of relying on voluntary guidelines until binding regulations are adopted.

The UK’s competition regulator, the Competition and Markets Authority (CMA), has proposed seven principles to guide the development and deployment of AI foundation models (technology trained on vast amounts of data to perform a wide range of tasks and operations). Finally, the USA announced that it will soon present the UN with a proposal for global norms on the use of military AI.

Security

The International Committee of the Red Cross (ICRC) has issued eight rules of engagement for hackers taking part in conflicts, warning them that their actions can endanger lives. Among other things, the rules prohibit cyberattacks against civilians, hospitals, and humanitarian facilities, as well as the use of malware or similar tools that could harm military and civilian targets alike.

Infrastructure

The US Federal Communications Commission (FCC) plans to reinstate the net neutrality rules that were repealed in 2017. FCC Chairwoman Jessica Rosenworcel announced that the FCC proposes reclassifying broadband under Title II of the US Communications Act. This would give the FCC greater authority to regulate internet service providers, including the power to stop operators from slowing down or speeding up internet traffic to certain websites.

Chinese tech giant Huawei has filed a lawsuit before a Lisbon court against a resolution of Portugal’s Cybersecurity Council (CSSC) that bans operators from using its equipment in high-speed 5G mobile networks.

Internet economy

Following a 45-day review process, the European Commission has designated six companies – namely Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft – as gatekeepers under the Digital Markets Act (DMA). The designation covers a total of 22 core platform services provided by these companies.

Elsewhere, Amazon has won a temporary victory in a case concerning its classification as a very large online platform (VLOP). In response to Amazon’s request, the General Court of the Court of Justice of the European Union (CJEU), in Luxembourg, granted interim measures, postponing some of the company’s obligations under the Digital Services Act (DSA). The decision comes as strict measures under the EU’s DSA take effect, affecting 19 major online platforms and search engines.

The (alleged) anticompetitive practices of big companies were in the spotlight last month. The US Federal Trade Commission (FTC) and 17 state attorneys general sued Amazon for alleged anticompetitive behaviour. The US Department of Justice’s lawsuit against Google, one of the biggest antitrust cases in decades, opened on 12 September 2023. The trial focuses on Google’s search business, alleged to be ‘anticompetitive and discriminatory’, allowing the company to maintain a monopoly in the digital advertising market. In another case also involving Google, the company announced a tentative settlement in the USA over monopoly allegations concerning the Play Store app platform.

The European Commission has been informally gathering views on Nvidia’s potentially abusive practices, Bloomberg revealed. This comes after the French competition authority carried out a ‘surprise inspection […] in the graphics cards sector’, which was later revealed to involve Nvidia.

Digital rights

Ireland’s Data Protection Commissioner has confirmed a EUR345 million (USD370 million) fine against TikTok for breaching European privacy laws in the processing of children’s personal data. The US National Telecommunications and Information Administration (NTIA) is seeking public input on the online risks children face and ways to mitigate them.

The Data Governance Act, a cornerstone of the European data strategy, entered into force on 24 September 2023. Its primary objective is to facilitate the secure exchange of data across sectors and EU member states, notably by improving the use of non-public-sector data.

Reporters Without Borders (RSF) has called on the public to help draft its AI Charter, which aims to clarify the journalism community’s position on the extensive use of AI technologies in the field.

Norway’s data protection authority hopes to extend its daily fines of NOK1 million (USD93,000) against Meta for privacy violations across the EU and the European Economic Area (EEA). It is now up to the European Data Protection Board (EDPB) to assess the situation.

Content policy

A US federal appeals court has extended the limits on the Biden administration’s communication with social media platforms to the US Cybersecurity and Infrastructure Security Agency (CISA). The ruling significantly curtails the ability of the White House and government agencies to engage with social media platforms on content moderation issues.

The EU has warned major social media platforms against failing to comply with the recently adopted Digital Services Act (DSA) in the fight against disinformation.

Development

The EU has published its Digital Decade report, which recommends measures for reaching the Digital Decade targets by 2030.

New ITU data shows that global internet access improved in 2023, with more than 100 million new users worldwide.

The G77 summit adopted the Havana Declaration, which emphasises science, technology, and innovation, and outlines the G77’s future actions.

THE TALK OF THE TOWN – GENEVA

At the 54th session of the UN Human Rights Council (HRC), a panel discussed cyberbullying against children, examining the role of states, the private sector, and other stakeholders in combating cyberbullying and empowering children in the digital sphere. The Council also presented a summary report, from its 53rd session, on the role of digital, media, and information literacy in the promotion and enjoyment of the right to freedom of opinion and expression, and it examined a report on the impact of new technologies intended for climate protection.

The WTO Public Forum 2023 focused on the role of trade in promoting a greener future, notably under the theme ‘Digitalisation as a tool for greening supply chains’. More than 20 sessions were dedicated to digital tools and their impacts.

The 8th session of the WIPO Conversation examined generative AI and intellectual property. Over two days, six panels covered generative AI use cases, the regulatory landscape, ethical concerns around training data, authorship, ownership of creative work, and strategies for navigating intellectual property in generative AI.


In brief

Digital at UNGA 78

The general debate of the UN General Assembly (UNGA) is a global platform where world leaders gather to address some of the most pressing issues facing humanity. One such crucial topic is the impact of digital technologies.

During the UNGA 2023 general debate, 94 speakers, including the UN Secretary-General and representatives of the Holy See and the EU, addressed digital themes.

This figure (94) represents a significant increase over our first analysis in 2017, when 47 countries spoke on digital topics. Six years later, that number has doubled to 94. The sharp rise underscores the growing recognition of the paramount importance of digital technologies at the highest levels of diplomatic discourse.

In a broader context, discussions related to digital technology accounted for 2.51% of the entire text corpus produced during the UNGA 2023 speeches.

Overall number of speakers mentioning digital issues

The 2023 general debate saw a substantial increase in mentions of AI in national statements. Of the 467,130 words spoken during the debate, 6,279 concerned AI, confirming its position as the most frequently addressed digital topic. This surge in interest can be attributed, in part, to the widespread attention generated by the launch of ChatGPT.
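As a quick sanity check on the figures above (a minimal sketch, not part of the original analysis), the reported shares can be recomputed directly from the published counts:

```python
# Word counts reported from the UNGA 78 general debate corpus.
total_words = 467_130   # all words spoken during the debate
ai_words = 6_279        # words concerning AI

# AI's share of the corpus; digital topics as a whole accounted for 2.51%.
ai_share = ai_words / total_words * 100
print(f"AI share of corpus: {ai_share:.2f}%")  # 1.34%, within the 2.51% digital total

# Speakers addressing digital issues: 47 in 2017 vs 94 in 2023.
growth = 94 / 47
print(f"Growth factor since 2017: {growth:.0f}x")  # 2x, i.e. doubled
```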

AI was the subject of 39 speeches at UNGA 78, testifying to its growing importance. However, leaders also explored other digital topics, including digital development (44), cybersecurity (23), content policy (7), economic considerations (4), and human rights (6).

AI. The rapid evolution of AI has raised concerns about its potential risks, from job displacement to cyberthreats. While some speakers highlighted AI’s transformative potential in health and education, many stressed the need for ethical governance and international cooperation. A consensus emerged on the urgency of regulating AI, addressing its military applications, and establishing global standards. The UN’s role in facilitating these discussions and promoting the responsible use of AI was a recurring theme, with calls for a Global Digital Compact and the creation of an international AI agency.

Digital development. Leaders stressed the need to bridge the digital divide, reduce inequalities, and ensure inclusive digital development. Many nations advocated international cooperation through initiatives such as the Global Digital Compact to tackle these challenges collectively. The importance of digital technologies for achieving the sustainable development goals and fostering global solidarity was a theme common to many leaders.

Cybersecurity. The evolving landscape of non-traditional security threats, particularly cybersecurity and cybercrime, was debated. Leaders stressed the need for international cooperation and governance frameworks to tackle cross-border cyberthreats, protect critical infrastructure, and combat cybercrime.

Content policy. Leaders addressed the worrying spread of disinformation and misinformation, amplified by AI and social media platforms. They highlighted the risks to democracy, as well as the rise in real-world violence and conflict driven by online hate speech and disinformation. Efforts to counter disinformation include proposals for a digital bill of rights and a code of conduct for information integrity on digital platforms.

Economy. The importance of embracing digital technology and fostering innovation to strengthen economies was accentuated. Efforts to reduce trade barriers, pursue free trade agreements, and transition to digital and green economies were highlighted.

Human rights. Leaders voiced concerns about online surveillance, data collection, and human rights violations. They called for human-centric, human-rights-based approaches to the development and deployment of technologies.


Should we let AI hallucinate?

This year, Diplo’s human experts were joined by DiploAI in analysing the speeches. They distilled key points and spotted patterns across the speeches, including instances where AI hallucinated, created false information, or distorted reality. Diplo’s Jovan Kurbalija suggests that perhaps we should let it, in his latest blog post, ‘Diplomatic and AI hallucinations: thinking outside the box to solve global problems?’

The world map highlights the countries that addressed digital issues at UNGA 78.

The EU’s digital and AI vision in 2023: von der Leyen’s address

In her 2023 State of the Union address, European Commission President Ursula von der Leyen set out her vision for Europe’s digital future, with particular emphasis on the role of AI. The address highlighted Europe’s achievements in the digital field, as well as the steps taken to meet the challenges and seize the opportunities presented by AI and digital technologies.

Von der Leyen delivering her address. Source: European Commission

Europe’s investment in digital transformation

President von der Leyen began by underlining the importance of digital technology in simplifying business activities and daily life. She noted that Europe had exceeded its target for investment in digital projects under NextGenerationEU, with member states using this funding to digitalise key sectors such as healthcare, justice, and transport.

Managing digital risks and protecting fundamental rights

However, the president also acknowledged the challenges posed by the digital world, including disinformation, harmful content, and risks to privacy. She noted that these problems erode trust and violate fundamental rights. To counter these threats, Europe has taken the lead in protecting citizens’ rights through legislative frameworks such as the DSA and the DMA, which aim to create a safer digital space and hold tech giants accountable.

The role of AI

President von der Leyen stressed AI’s potential to revolutionise healthcare, boost productivity, and combat climate change. But she also warned against underestimating the real threats posed by AI. Citing the concerns of leading AI developers and experts, she underlined the importance of mitigating AI risks globally.

The three pillars of a responsible AI framework

The president set out three key pillars for Europe’s leadership in shaping a global AI framework: guardrails, governance, and guiding innovation.

  1. Guardrails: ensuring that AI development remains human-centric, transparent, and accountable. The AI Act, a comprehensive, innovation-friendly law on AI, was presented as a blueprint for the whole world. The task now is to adopt the rules quickly and move on to implementation.
  2. Governance: establishing a single governance system in Europe and working with international partners to create a global panel of experts for AI, similar to the Intergovernmental Panel on Climate Change (IPCC). This body would provide insights into AI’s impact on society and ensure globally coordinated responses.
  3. Guiding innovation: leveraging Europe’s leading role in supercomputing by giving AI start-ups access to high-performance computers to train their models. In addition, it is essential to foster an open dialogue with AI developers and companies, along the lines of the voluntary safety, security, and trust rules adopted by major tech companies in the USA.


Ad Hoc Committee on Cybercrime: key takeaways from the 6th session

The 6th session of the Ad Hoc Committee on Cybercrime has concluded its work, but many questions remain open. With the final round scheduled for February 2024, states have still not agreed on whether the convention should use the term ‘cybercrime’ or ‘the malicious use of ICTs’.

The latest draft (updated on 1 September 2023) also sparked debate among states over the convention’s scope, with China and Russia concerned that the evolving landscape of information and communication technologies (ICTs) had not been adequately taken into account. On the criminalisation of offences, Russia stressed the need to penalise the use of ICTs for extremist and terrorist purposes and, together with Namibia and Malaysia, among other countries, supported including digital assets in the laundering of proceeds of crime. Meanwhile, some countries, including the UK and Australia, opposed their inclusion, arguing that they fall outside the convention’s scope.

The human rights provisions raised concerns not only among states, but also among stakeholders. Microsoft notably stated that the provisions in the latest draft could be ‘disastrous for human rights’. On data protection measures, South Africa, the USA, and Russia proposed the collection of traffic data and the interception of content data. Meanwhile, Singapore and Switzerland opposed the proposal, with the EU stressing that such measures pose a threat to human rights and fundamental freedoms.

Negotiations on international cooperation also ran into difficulties, with Russia recalling the importance of distinguishing between the location where data is retained and the locations where it is processed, stored, and transmitted, notably in cloud computing. To address the problem of the ‘loss of location’ of data, Russia proposed referring to the Second Protocol to the Budapest Convention. By contrast, countries such as Pakistan, Iran, China, and Mauritania proposed including article 47 bis on cooperation between national authorities and service providers. In essence, this cooperation would cover the reporting of cybercrime offences as established by the convention, the sharing of expertise, training, the preservation of electronic evidence, and ensuring the confidentiality of requests received from law enforcement authorities.

An interesting proposal came from Costa Rica and Paraguay to include the word ‘sustainability’ in articles 52 and 56, in order to provide effective assistance and address the societal impact of cybercrime.

So the question remains open: is there a treaty yet? Have states agreed on the provisions? No. Will states hold their final round in February 2024? Yes. What happens if there is no consensus? The Bureau of the UN Office on Drugs and Crime (UNODC) will step in and confirm that decisions will be taken by a two-thirds majority of the representatives present and voting.



Coming up: IGF 2023

The 2023 edition of the Internet Governance Forum (IGF) will take place in Kyoto, Japan, from 8 to 12 October, under the theme ‘The Internet We Want – Empowering All People’.

The programme is built around eight sub-themes:

  • AI and emerging technologies;
  • avoiding internet fragmentation;
  • cybersecurity, cybercrime, and online safety;
  • data governance and trust;
  • digital divides and inclusion;
  • global digital governance and cooperation;
  • human rights and freedoms;
  • sustainability and the environment.

The Forum will feature some 300 sessions, with a plethora of formats, including high-level sessions, main sessions, workshops, open forums, lightning talks, launches and awards, networking sessions, Day 0 events, Dynamic Coalition sessions, and National and Regional Initiatives (NRIs) sessions.

In addition, the IGF Village, where 76 exhibitors will present their work, will be open to visitors.

Stay informed with the GIP’s reports!

The Geneva Internet Platform will be actively involved in IGF 2023, reporting from IGF sessions for the 9th consecutive year. This year, our human experts will be joined by DiploAI, which will generate reports on all IGF sessions.

We will also publish daily IGF reports throughout the week, with a final report released at the end of the IGF.

Bookmark our dedicated IGF 2023 page on the Digital Watch Observatory, or download the app to receive the reports. Subscribe for access to daily newsletters.


If you are attending the IGF in Kyoto, drop by the Diplo and GIP booth. If you are participating online, visit our space in the virtual village.


News from the Francophonie


The OIF organises a Francophone Digital Café on ‘Discoverability and cultural and linguistic diversity in the digital space’ for francophone delegations to the UN in New York


Following the publication of its Contribution to the Global Digital Compact (GDC) in April 2023, delivered to the UN Tech Envoy on 3 May 2023, the Organisation internationale de la Francophonie (OIF) has launched the ‘Francophone Digital Cafés’. The aim of this twice-monthly gathering is to raise awareness among francophone diplomats and digital experts at the permanent missions to the UN of the diplomatic implications of digital developments, to take regular stock of ongoing processes, to encourage francophone consultation in New York and, ultimately, to foster better coordination of positions. This awareness-raising also falls under the programme ‘D-CLIC, formez-vous au numérique’ and its third component on digital governance awareness.

The second ‘Francophone Digital Café’ thus covers the theme of ‘Discoverability and cultural and linguistic diversity in the digital space’ and will take place on 26 October 2023. The theme is no accident: it is one of the proposals made by the OIF in its Contribution to the GDC, complementing the themes developed by the UN: ‘Promoting cultural and linguistic diversity in the digital sphere’. In it, the OIF champions the defence of cultural and linguistic diversity in the digital space through strong advocacy for the ‘discoverability’ of online content.

Indeed, the digital environment does not sufficiently address the challenges of multilingualism, and the risk that the ‘platformisation’ of content consumption and distribution will exclude a large share of cultural expressions must be taken into account. This risk must be mitigated by measures to ensure the discoverability of all content on the web. The digital universe must therefore reflect this diversity by creating an ecosystem conducive to the affirmation and promotion of cultural and linguistic pluralism, excluding any monopoly of thought or form of cultural hegemony. This is all the more timely given the rise of artificial intelligence (AI) and the way algorithms generate content in different languages, which affects the visibility and discoverability of francophone content online. At stake is the governance of algorithms in the service of diversity and discoverability in cyberspace. The promotion of tomorrow’s cultural richness and diversity will take place largely on the internet, and it is essential to build now the environment that will safeguard them. These issues will be explored under the guidance of Mr Destiny Tchehouali, Professor of International Communication in the Department of Social and Public Communication at the Université du Québec à Montréal (UQAM) and co-holder of the UNESCO Chair in Communication and Technologies for Development.

Beyond awareness-raising and capacity-building on digital development topics, this dialogue will help foster convergence and common positions among francophone diplomats and delegations in various forums in New York, notably during the intergovernmental negotiations on the GDC opening in December 2023.

The Executive Working Group on Digital Issues (GTEN) delivers its report and recommendations to strengthen the Francophonie’s action in the digital field

The Secretary-General of the Francophonie, Louise Mushikiwabo, received the report of the executive working group on digital governance from Swiss Ambassador Martin Dahinden.

The report, mandated by the Djerba Summit of the heads of state and government of French-speaking countries, aims to clarify the added value of the Francophonie in general, and the OIF in particular, in digital governance. It was drawn up by a working group, the Executive Working Group on Digital Issues (GTEN), made up of a small number of high-level members from countries representative of the territories making up the francophone space. Chaired by Swiss Ambassador Martin Dahinden, it comprises experts from Benin, Canada/Quebec, the Democratic Republic of the Congo, France, Morocco, Romania, Vietnam, and the Wallonia-Brussels Federation.

From June to September 2023, the group met seven times, drawing in particular on several of the Francophonie’s reference documents (the record of decisions of the 18th Conference of Heads of State and Government of OIF members, 19-20 November 2022; the Digital Francophonie Strategy 2022-2026; and the Francophonie’s Contribution to the Global Digital Compact) to reflect collaboratively and iteratively on the digital challenges facing the francophone space.

The report therefore sets out priority areas and recommendations for strengthening the action of the Francophonie and its members in the digital field. It makes operational proposals on each of the following themes: reducing the digital divide and ensuring digital access for the populations of the francophone space; building the capacity of national and regional actors, with particular attention to women and young people; francophone voices in digital governance, notably by consolidating initiatives among French-speaking countries on digital regulation; the discoverability of francophone content, by helping to increase the visibility of francophone content online; and, finally, the promotion of responsible, inclusive digital innovation that respects human rights.

The Secretary-General of the Francophonie, who has placed digital issues at the heart of her action, has undertaken to bring these proposals before the foreign ministers of the francophone space, who will meet at the Ministerial Conference of the Francophonie in Yaoundé on 4-5 November.

Photo source: OIF

The OIF speaks at the 2023 ITU Regional Development Forum for Africa (Addis Ababa, 3-5 October 2023)

The Organisation internationale de la Francophonie, through its Permanent Representative to the African Union, Ms Néfertiti Tshibanda, took part in the ITU Regional Development Forum for Africa (RDF-AFR) in Addis Ababa, held under the theme ‘Digital transformation for a sustainable and equitable digital future: Accelerating the implementation of the SDGs in Africa’. The high-level session in which the Representative took part focused on Africa’s digital development and transformation, with people at the heart of the process. In her remarks, the Representative retraced the history of the Francophonie’s commitment and vision for digital development, the OIF’s work on strengthening the digital skills of francophone populations, notably through the D-CLIC programme, and its engagement in digital governance. Cultural and linguistic diversity in the digital space, including the discoverability of online content, is one of the priority topics of the OIF’s digital governance work. It is, moreover, one of the two themes (digital capacity-building and the promotion of cultural and linguistic diversity in the digital sphere) that the OIF added to the seven initial topics proposed by the UN for the Global Digital Compact. As a reminder, the OIF delivered its full contribution to the Global Digital Compact to the UN Tech Envoy and presented it to francophone delegations in New York on 3 May 2023.


Data protection authorities from the francophone space meet in Morocco for the 14th AFAPDP conference

Morocco’s National Commission for the Control of Personal Data Protection (CNDP) hosted the 14th conference of the Francophone Association of Personal Data Protection Authorities (AFAPDP) on 2 October 2023 in Tangier, Morocco. The conference’s main theme was ‘Challenges in DPA-GAMMA relations: the example of web scraping’.

Web scraping can pose major challenges for privacy protection. The automated extraction of data from the web can involve harvesting personal data, notably from social networks. It can therefore raise problems with respect to personal data principles and regulations.
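To illustrate why regulators are concerned (a purely hypothetical sketch, not any party’s actual tooling), a few lines of standard-library Python show how trivially automated extraction can sweep up personal data, such as e-mail addresses, from a page:

```python
import re

# Hypothetical page fragment standing in for a public profile on a social network.
html = """
<div class="profile">
  <span class="name">Jane Doe</span>
  <a href="mailto:jane.doe@example.org">contact</a>
</div>
"""

# A naive scraper needs only a regular expression to harvest e-mail addresses
# at scale, the kind of bulk collection the DPAs' letter asks platforms to curb.
emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", html)
print(emails)  # ['jane.doe@example.org']
```

Applied across millions of pages, the same pattern turns incidental exposure into systematic collection, which is why the DPAs ask platforms to limit what such tools can reach.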

It is worth recalling that, together with 11 other data protection authorities around the world (those of Australia, Canada, the UK, Hong Kong, Switzerland, Norway, New Zealand, Colombia, Jersey, Argentina, and Mexico), the CNDP had already signed a letter in August 2023 addressed to the GAMMAs (Google, Apple, Meta, Microsoft, and Amazon) and other social media companies, such as X Corp (formerly Twitter) and ByteDance Ltd (TikTok), inviting them to take steps to minimise privacy risks for users.

Learn more: https://www.afapdp.org

Upcoming events:

  • Conference of the Francophone Network of Media Regulators – REFRAM (2023, Dakar), date to be confirmed (https://www.refram.org)
  • Conference of the Francophone Telecommunications Regulation Network – FRATEL (25-26 October 2023, Rabat, Morocco): How can user satisfaction be strengthened as a regulatory objective? (https://www.fratel.org/)
  • OIF participation in the annual general meeting of ICANN (ICANN 78), the Internet Corporation for Assigned Names and Numbers (21-26 October 2023, Hamburg)

IGF 2023 – Daily 4


IGF Daily Summary for Wednesday, 11 October 2023

Dear reader, 

The third day always brings a peak in IGF dynamics, as happened yesterday in Kyoto. The buzz in the corridors, bilateral meetings, and dozens of workshops brought into focus the small and large ‘elephants in the room’. One of these was the future of the IGF in the context of the fast-changing AI and digital governance landscape.

What will the future role of the IGF be? Can the IGF fulfil the demand for more robust AI governance? What will the position of the IGF be in the context of the architecture proposed by the Global Digital Compact, to be adopted in 2024?

These and other questions were addressed in two sessions yesterday. Formally speaking, decisions about the future of the IGF will most likely happen in 2025. The main policy dilemma will be about the role of the IGF in the context of the Global Digital Compact, which will be adopted in 2024. 

While governance frameworks featured prominently in the debates, a few IGF discussions dived deeper into the specificities of AI governance. 

Yesterday’s sessions provided intriguing reflections and insights on cybersecurity, digital and the environment, human rights online, disinformation, and much more, as you can read below.

You can also read how we did our reporting from IGF2023. Next week, Diplo’s AI and team of experts will provide an overall report with the gist of the debates and many useful (and interesting) statistics.

Stay tuned!

The Digital Watch team


Do you like what you’re reading? Bookmark us at https://wp.dig.watch/igf2023 and tweet us @DigWatchWorld

Have you heard something new during the discussions, but we’ve missed it? Send us your suggestions at digitalwatch@diplomacy.edu


Highlights from yesterday’s sessions
Kinkaku-ji Temple in Kyoto. Credit: Sasa VK

The day’s top picks

  • The future of the IGF
  • Ethical principles for the use of AI in cybersecurity
  • Inclusion (every kind of inclusion)

Digital Governance Processes

What is the future of the IGF? 

It may seem a counter-intuitive question, given the success of IGF2023 in Kyoto. But continuous revisiting of the IGF’s purpose is built into its foundations. The next review of the future of the IGF will most likely happen in 2025, on the occasion of the 20th anniversary of the World Summit on the Information Society (WSIS), where the decision to establish the IGF was made.

In this context, over the last few days in Kyoto, the future of the IGF has featured highly in corridors, bilateral meetings, and yesterday’s sessions. One of the main questions has been the future position of the IGF in the context of the Global Digital Compact (GDC), to be adopted during the Summit of the Future in September 2024. For instance, what will the role of the IGF be if the GDC establishes a Digital Cooperation Forum, as suggested in the UN Secretary-General’s policy brief?

Debates in Kyoto reflected the view that fast developments, especially in the realm of AI, require more robust AI and digital governance. Many in the IGF community argue for a prominent role for the IGF in the emerging governance architecture. For example, the IGF Leadership Panel believes that it is the IGF that should oversee the implementation of the GDC; creating a new forum would incur significant costs in finances, time, and effort. There is also a view that the IGF should be refined, improved, and adapted to the rapidly changing landscape of AI and broader digital developments in order to, among other things, involve communities missing from current IGF debates. This view is supported by the IGF’s capacity to change and evolve, as it has done since its inception in 2006.


Digital Watch and Diplo will follow the debate on the future of the IGF in the context of the GDC negotiations and the WSIS+20 review process.


AI

AI and governance

AI will be a critical segment of the emerging digital governance architecture. In the Evolving AI, evolving governance: From principles to action session, we learned that we could benefit from two things. First, we need a balanced mix of voluntary standards and legal frameworks for AI. It’s not about just treating AI as a tool, but regulating it based on its real-world use. Second, we need a bottom-up approach to global AI governance, integrating input from diverse stakeholders and factoring in geopolitical contexts. IEEE and its 400,000 members were applauded for their bottom-up engagement with regulatory bodies to develop socio-technical standards beyond technology specifications. The UK’s Online Safety Bill, complemented by an IEEE standard on age-appropriate design, is one encouraging example.

The open forum discussed one international initiative specifically – the Global Partnership on Artificial Intelligence (GPAI). The GPAI operates via a multi-tiered governance structure, ensuring decisions are made collectively, through a spectrum of perspectives. It currently boasts 29 member states, and others like Peru and Slovenia are looking to join. At the end of the year, India will be taking over the GPAI chair from Japan and plans to focus on bridging the gender gap in AI. It’s all about inclusion, from gender and linguistic diversity to educational programmes to teach AI-related skills. 

AI and cybersecurity

AI could introduce more uncertainty into the security landscape. For instance, malicious actors might use AI to craft more convincing social engineering attacks, like spear-phishing, which can deceive even vigilant users. AI is also making it easier to develop bioweapons and to proliferate autonomous weapons, raising concerns about modern warfare. National security strategies might shift towards preemptive strikes, as commanders fear that failing to strike the right balance between ethical criteria and a swift military response could put them at a disadvantage in combat.

On the flip side, AI can play a role in safeguarding critical infrastructure and sensitive data. AI has proven to be a valuable tool in preventing, detecting, and responding to child safety issues, by assisting in age verification and disrupting suspicious behaviours and patterns that may indicate child exploitation. AI could be a game-changer in reducing harm to civilians during conflicts: It could reduce the likelihood of civilian hits by identifying and directing target strikes more accurately, thus enhancing precision and protecting humanitarian elements in military operations. One of yesterday’s sessions, Ethical principles for the use of AI in cybersecurity, highlighted the need for robust ethical and regulatory guidelines in the development and deployment of AI systems in the cybersecurity domain. Transparency, safety, human control, privacy, and defence against cyberattacks were identified as key ethical principles in AI cybersecurity. The session also argued that existing national cybercriminal legislation could cover attacks using AI without requiring AI-specific regulation.

Diplo’s Anastasiya Kazakova at the workshop: Ethical principles for the use of AI in cybersecurity.

The question going forward is: Do we need separate AI guidelines specifically designed for the military? The workshop on AI and Emerging and Disruptive Technologies in warfare called for the development of a comprehensive global ethical framework led by the UN. Currently, different nations have their own frameworks for the ethical use of AI in defence, but the need for a unified approach and compliance through intergovernmental processes persists.

The military context often presents unique challenges and ethical dilemmas, and the first attempts at guidelines for the military are those from the REAIM Summit and the UN Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons.


Cybersecurity

Global negotiations for a UN cybercrime convention

Instruments and tools to combat cybercrime were high on the agenda of discussions. The negotiations on a possible UN cybercrime convention in the Ad Hoc Committee (AHC) are nearing their end, yet many open issues remain. While the mandate is clearly limited to cybercrime (broader mandate proposals, such as the regulation of ISPs, were removed from the text), there is a need to precisely define the scope of the treaty. There is no commonly agreed definition of cybercrime yet, and a focus on well-defined crimes that are universally understood across jurisdictions might be needed. 

There are calls to distinguish between cyber-dependent serious crimes (those dependent on a cyber element for their execution, such as terrorist attacks using autonomous cyberweapons) and cyber-enabled actions (those traditionally carried out in other environments but now possible with the use of computers, such as online speech that may harm human rights). The treaty should also address safe havens for cybercriminals, since certain countries turn a blind eye to cybercrime within their borders, whether due to their limited capacity to combat it or to political or other incentives to ignore it.

Another major stumbling block in the negotiations is how to introduce clear safeguards for human rights and privacy. Concerns persist over the potential misuse of the provision related to online content by authoritarian countries to prosecute activists, journalists, and political opponents. Yet the decision-making process for adopting the convention – which requires consensus or, alternatively, a two-thirds majority vote – makes it unlikely that any provision curtailing human rights will be included in the final text.

The current draft includes explicit references to human rights and thus goes far beyond existing crime treaties (e.g. UNTOC and UNCAC). A highly regarded example of an instrument that safeguards human rights is the Cybercrime Convention (known as the Budapest Convention) of the Council of Europe, which requires parties to uphold the principles of the rule of law and human rights; in practice, judicial authorities effectively oversee the work of the law enforcement authorities (LEA).

One possible safeguard to mitigate the risks of misuse of the convention is the principle of dual criminality, which is crucial for evidence sharing and cooperation in serious crimes. The requirement of dual criminality for electronic evidence sharing is still under discussion in the AHC.

Other concerns related to the negotiations on the new cybercrime convention include the information-sharing provisions (whether voluntary or compulsory), how chapters in the convention will interact with each other, and how the agreed text will manage to overcome jurisdictional challenges to avoid conflicting interpretations of the treaty. Discussions about the means and timing of information sharing about cybersecurity vulnerabilities, as well as reporting and disclosure, are ongoing.

It also appears that a more robust capacity-building chapter and provisions for technical assistance are needed. Among other things, those provisions should enable collaborative capacities across jurisdictions and relationships with law enforcement agencies. The capacity-building initiatives of the Council of Europe under the Budapest Convention can serve as an example (e.g. cybercrime training for judges).

The process of drafting the convention benefited from the deep involvement of expert organisations like the United Nations Office on Drugs and Crime (UNODC), the private sector, and civil society. It is widely accepted that strong cooperation among stakeholders is needed to combat cybercrime. 

The current draft introduces certain challenges for the private sector. Takedown demands, as well as placing the responsibility for defining and enforcing rules on freedom of speech on companies, generate controversy and debate within the private sector: Putting companies in an undefined space confronts them with jurisdictional issues and conflicts of law. Inconsistencies in approaches across jurisdictions and broad expectations regarding data disclosure without clear safeguards pose particular challenges; clear limitations on data access obligations are also essential.

What comes next for the negotiations? The new draft of the convention is expected to be published in mid-November, and one final negotiation session is ahead in 2024. After deliberations and approval by the AHC (by consensus or two-thirds voting), the text of the convention will need to be adopted by the UN General Assembly and opened for ratification. For the treaty to be effective, accession by most, if not all, countries is necessary. 

The success or failure of the convention depends on the usefulness of its procedural provisions (particularly those relating to investigation, which are currently well developed) and the number of states that ratify the treaty. Importantly, successful implementation is also conditional on the treaty not impeding existing functioning systems, such as the Budapest Convention, which has been ratified by 68 countries worldwide. An extended effect of the treaty would be the support it gives to UN member states’ efforts against cybercrime through the passage of related national bills. 

Digital evidence for investigating war crimes

A related debate developed around cyber-enabled war crimes, due to the recent decision by the International Criminal Court (ICC) prosecutor to investigate such cases. The Budapest Convention applies to any crime involving digital evidence, including war crimes (in particular Article 14 on war crime investigations, Article 18 on the acquisition of evidence from any service provider, and Article 26 on the sharing of information among law enforcement authorities). 

Of particular relevance is the development of tools and support to capture digital evidence, which could aid in the investigation and prosecution of war crimes. Some tech companies have partnered with the ICC to create a platform that serves as an objective system for creating a digital chain of custody and a tamper-proof record of evidence, which is critical for ensuring neutrality and preserving the integrity of digital evidence. The private sector also plays a role in collecting evidence: There are reports from multiple technology companies providing evidence of malicious cyber activities during conflicts. The Second Additional Protocol to the Budapest Convention offers a legal basis for disclosing domain name registration information and direct cooperation with service providers. At the same time, Article 32 of the Budapest Convention addresses the issue of cross-border access to data, but this access is only available to state parties.
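The idea of a digital chain of custody and a tamper-proof record of evidence can be illustrated with a hash chain, in which each record includes the hash of the previous one, so that altering any entry invalidates everything that follows. This is a minimal, hypothetical sketch of the technique, not the actual platform the ICC and its partners built:

```python
import hashlib
import json
import time

GENESIS = "0" * 64  # placeholder hash for the first record

def add_record(chain, evidence_id, description):
    """Append a tamper-evident record: each entry commits to the previous one."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    entry = {
        "evidence_id": evidence_id,
        "description": description,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(entry)
    return entry

def verify_chain(chain):
    """Recompute every hash; any modified or reordered record breaks the chain."""
    prev_hash = GENESIS
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

chain = []
add_record(chain, "EV-001", "satellite image, 2023-10-01")
add_record(chain, "EV-002", "intercepted network log")
assert verify_chain(chain)
chain[0]["description"] = "altered"  # tampering is now detectable
assert not verify_chain(chain)
```

A real evidentiary system would add digital signatures and trusted timestamping on top of the hash chain, but the core guarantee – that no record can be silently changed after the fact – comes from this structure.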

Other significant sources of evidence are investigative journalism and open source intelligence (OSINT) – like the Bellingcat organisation – which uncover war crimes and gross human rights violations using new tools, such as the latest high-resolution satellite imagery. OSINT should be considered an integral part of the overall chain of evidence in criminal investigations, yet such sources should be integrated within a comprehensive legal framework. Article 32 of the Budapest Convention, for example, is already a powerful tool for member states to access OSINT from both public and private domains, with consent. Investigative journalism plays a role in combating disinformation and holding those responsible for war crimes accountable.

Yet, the credibility and authenticity of such sources’ evidence can be questioned. Technological advancements, such as AI, have enabled individuals, states, and regimes to easily manipulate electronic data and develop deepfakes and disinformation. When prosecuting cybercrime, it is imperative that evidence be reliable, authentic, complete, and believable. Related data must be preserved, securely guarded, protected, authenticated, verified, and available for review to ensure its admissibility in trials. The cooperation of state authorities could lead to the development of methodologies for verifying digital evidence (e.g. the work of the Global Legal Action Network).


Human rights

Uniting for human rights

‘As the kinetic physical world in which we exist recedes and the digital world in which we increasingly live and work takes up more space in our lives, we must begin thinking about how that digital existence should evolve.’ This quotation, published in a background paper to the session on Internet Human Rights: Mapping the UDHR to cyberspace, succinctly captures one of the central issues of our age. 

The world today is witnessing a concerning trend of increasing division and isolationism among nations. Ironically, global cooperation and governance, the very reasons for IGF 2023, are precisely what we need to promote and safeguard human rights. 

At the heart of yesterday’s main session on Upholding human rights in the digital age was the recognition that human rights should serve as an ethical compass in all aspects of internet governance and the design of digital technologies. But this won’t happen on its own: We need collective commitment to ensure that human rights are at the forefront of the evolving digital landscape, and we need to be deliberate and considerate in shaping the rules and norms that govern it.

The Global Digital Compact framework could promote human rights as an ethical compass by providing a structured and collaborative platform for stakeholders to align their efforts towards upholding human rights in the digital realm. 

The IGF also plays a crucial role in prioritising human rights in the digital age by providing a platform for diverse perspectives, grounding AI governance in human rights, addressing issues of digital inclusion, and actively engaging with challenges like censorship and internet resilience.

Capitalist surveillance

In an era dominated by technological advancements, the presence of surveillance in our daily lives is pervasive, particularly in public spaces. Driven by a need for heightened security measures, governments have increasingly deployed sophisticated technologies, such as facial recognition systems. 

As yesterday’s discussion on private surveillance showed, citizens also contribute to our intricate web of interconnected surveillance networks: Who can blame the neighbours if they want to monitor their street to keep it safe from criminal activity? After all, surveillance technologies are affordable and accessible. And that’s the thing: A parallel development that’s been quietly unfolding is the proliferation of private surveillance tools in public spaces. 

These developments require a critical examination of their impact on privacy and civil liberties, and on issues related to consent, data security, and the potential for misuse. Most of us are aware of these issues, but the involvement of private companies in surveillance introduces a new layer of complexity. 

Unlike government agencies, private companies are often not subject to the same regulations and transparency requirements. This can lead to a lack of oversight and transparency regarding how data is collected, stored, and used. 

Additionally, the potential for profit-driven motives may incentivise companies to push the boundaries of surveillance practices, potentially infringing on individuals’ privacy rights. It’s not like we haven’t seen this before.

A metal post with surveillance cameras aimed in three directions stands against a blue sky marked by clouds

Ensuring ethical data practices

The exploitation of personal data without consent is ubiquitous. Experts in the session Decolonise digital rights: For a globally inclusive future drew parallels to colonial practices, highlighting how data is used to control and profit. This issue is not only a matter of privacy but also an issue of social justice and rights. 

When it comes to children, privacy is not just about keeping data secure and confidential but also about questioning the need for collecting and storing their data in the first place. This means that the best way to check whether a user accessing an online service is underage is to use pseudonymous credentials and pseudonymised data. Given the wave of new legislation requiring more stringent age verification measures, there’s no doubt that we will be discussing this issue much more in the coming weeks and months. 
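To make the idea of pseudonymous age checks concrete, here is a minimal, hypothetical sketch: a trusted verifier checks the user’s real credentials once, then issues only a pseudonymous identifier plus an over/under-age flag, so the online service never sees a name or birth date. All names and keys below are illustrative assumptions, not any real scheme.

```python
import hashlib
import hmac

# Hypothetical key held by the trusted age verifier. A real deployment would
# use asymmetric signatures so the service holds only a verification key.
VERIFIER_KEY = b"secret-held-only-by-the-verifier"

def issue_age_token(user_identifier: str, is_underage: bool) -> dict:
    """The verifier checks real credentials, then issues only a pseudonym
    and a yes/no flag -- no name, no birth date reaches the service."""
    pseudonym = hmac.new(VERIFIER_KEY, user_identifier.encode(), hashlib.sha256).hexdigest()
    claim = f"{pseudonym}:{is_underage}"
    signature = hmac.new(VERIFIER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"pseudonym": pseudonym, "is_underage": is_underage, "signature": signature}

def service_accepts(token: dict) -> bool:
    """The service validates the token without learning who the user is."""
    claim = f"{token['pseudonym']}:{token['is_underage']}"
    expected = hmac.new(VERIFIER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["signature"]) and not token["is_underage"]

adult = issue_age_token("alice@example.org", is_underage=False)
assert service_accepts(adult)
minor = issue_age_token("bob@example.org", is_underage=True)
assert not service_accepts(minor)
```

The design choice worth noting is data minimisation: the service stores only a pseudonym it cannot reverse, which is exactly the point the experts made about questioning what data needs to be collected at all.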

Civil society is perhaps best placed to hold companies accountable for their data protection measures and governments in check for their efforts in keeping children safe. Yet, we sometimes forget to involve the children themselves in shaping policies related to data governance and their digital lives. 

Hence, the suggestion of involving children in activities such as data subject access requests. This can help them comprehend the implications of data processing. It can also empower them to participate in decision-making processes and contribute to ensuring ethical and responsible data practices. After all, the experts argue, many children’s level of awareness and concern about their privacy is comparable to that of adults.


Development

Digital technologies and the environment

The pandemic clearly showed the intricate connection between digital technologies and the environment. Although lower use of gasoline-powered vehicles led to a decrease in CO2 emissions during lockdowns, isolation also triggered a substantial increase in internet use due to remote work and online activities, giving rise to concerns about heightened carbon emissions from increased digital activity. 

Data doesn’t lie (statisticians do), and data has confirmed the dual impact of digital technologies: While these technologies contribute 1% to 5% of greenhouse gas emissions and consume 5% to 10% of global energy, they also have the potential to cut emissions by 20% by 2030.

To harness the potential benefits of digitalisation and minimise its environmental footprint, we need to raise awareness about what sustainable sources we have available and establish standards for their use.

While progress is being made, there’s a pressing need for consistent international standards that consider environmental factors for digital resources. Initiatives from organisations such as the Institute of Electrical and Electronics Engineers (IEEE) in setting technology standards and promoting ethical practices, particularly in relation to AI and its environmental impact, as well as collaborations between organisations like GIZ, the World Bank, and ITU in developing standards for green data centres, highlight how working together globally is crucial for sustainable practices. 

Currently, over 90% of global enterprises are small or medium-sized, contributing over 50% of the world’s GDP, yet they lack the necessary frameworks to measure their carbon footprint, which is a key step in enabling their participation in the carbon economy in a real and verifiable way. 

Inclusion of people with disabilities

There’s no one-size-fits-all solution when it comes to meeting the needs of people with disabilities (PWD) in the digital space. First of all, the enduring slogan ‘nothing about us without us’ must be respected. Accessibility-by-design standards like the Web Content Accessibility Guidelines (WCAG) 2 are readily available through the W3C Accessibility Standards Overview. Although accessibility accommodations require tailored approaches to address the specific needs of both groups and individuals, standards offer a solid foundation to start with. 

The inclusion of people with disabilities should extend beyond technical accessibility to include the content, functionality, and social aspects of digital platforms. The stigma PWD face in online spaces needs to be addressed by implementing policies that create a safe and inclusive online environment. 

Importantly, we must take advantage of the internet governance ecosystem to ensure that

  • We support substantial representation from the disability community in internet governance discussions, beyond discussions on disabilities.
  • We stress the importance of making digital platforms accessible to everyone, no matter their abilities or disabilities, using technology and human empowerment.
  • We provide awareness-raising workshops for those unaware of the physical, mental, and cognitive challenges others might be facing, including those of us who suffer from one disability without understanding what others are facing.
  • We provide skills and training to effectively use available accommodations to overcome our challenges and disabilities.
  • We make available training and educational opportunities for persons with disabilities to be involved in the policymaking processes that affect us, with the resulting improvements making the internet and digital world better for everyone.
  • We support research to continue the valuable scientific improvements made possible by emerging technologies and digital opportunities.
A person in a black suit sits in a wheelchair in front of a computer desk. They are wearing a virtual reality headset and gesturing with their arms and hands.

Sociocultural

The public interest and the internet 

The internet is widely regarded as a public good with a multitude of benefits. Its potential to empower communities by enabling communication, information sharing, and access to valuable resources was appreciated. However, while community-driven innovation coexists with corporate platforms, the digital landscape is primarily dominated by private, for-profit giants like Meta and X. 

This dominance is concerning, particularly because it risks exacerbating pre-existing wealth and knowledge disparities, compromises privacy, and fosters the proliferation of misinformation.

This duality in the internet’s role demonstrates its ability to both facilitate globalisation and centralise control, possibly undermining its democratic essence. The challenge is even greater when considering that efforts to create a public good internet often lack inclusivity, limiting the diversity of voices and perspectives in shaping the internet. Furthermore, digital regulations tend to focus on big tech companies, often overlooking the diverse landscape of internet services. 

To foster a public good internet and democratise access, there is a need to prioritise sustainable models that serve the public interest. This requires a strong emphasis on co-creation and community engagement. This effort will necessitate not only tailoring rules for both big tech and small startup companies but also substantial investments in initiatives that address the digital divide and promote digital literacy, particularly among young men and women in grassroots communities, all while preserving cultural diversity. Additionally, communities should have agency in determining their level of interaction with the internet. This includes enabling certain communities to meaningfully use the internet according to their needs and preferences.

Disinformation and democratic processes 

In the realm of disinformation, we are witnessing new dynamics: an expanded cast of individuals and groups responsible for misleading the public, and the increasing involvement of politics and politicians. 

Addressing misinformation in this fast-paced digital era is surely challenging, but not impossible. Switzerland’s resilient multi-party system, for instance, was cited to illustrate how elections can resist the sway of disinformation. And while solutions can be found to limit the spread of mis- and disinformation online, they need to be put in place with due consideration for issues such as freedom of expression and proportionality. The Digital Services Act (DSA) – adopted in the EU – is taking this approach, although concerns were voiced about its complexity.

A UN Code of Conduct for information integrity on digital platforms could contribute to ensuring a more inclusive and safe digital space, contributing to the overall efforts against harmful online content. However, questions arose about its practical implementation and the potential impacts on freedom of expression and privacy due to the absence of shared definitions.

Recognising the complexity of entirely eradicating disinformation, some argued for a more pragmatic approach, focusing on curbing its dissemination and minimising the harm caused, rather than seeking complete elimination. A multifaceted approach that goes beyond digital platforms and involves fact-checking initiatives and nuanced regulations was recommended. Equally vital are efforts in education and media literacy, alongside the collection of empirical evidence on a global scale, to gain a deeper understanding of the issue.

Tiles with random letters surround five tiles lined up in a row to spell the word ‘FACTS’ on a pink background.

Infrastructure

Fragmented consensus

Yesterday’s discussions on internet fragmentation built on those of the previous days. Delving into diverse perspectives on how to prevent the fragmentation of the internet is inherently valuable. But when there’s an obvious lack of consensus on even the most fundamental principles, it underlines just how critical the debate is.

For instance, should we focus mostly on the technical aspects, or should we also consider content-related fragmentation – and which of these are the most pressing to address? If misguided political decisions pose an immediate threat, should policymakers take a backseat on matters directly impacting the internet’s infrastructure?

Pearls of wisdom shared by experts in both workshops – Scramble for internet: You snooze, you lose and Internet fragmentation: Perspectives & collaboration – offer a promising bridge to close this gap in strategy.

One of these insights emphasised the need to distinguish content limitations from internet fragmentation. Content restrictions, like parental controls or constraints on specific types of content, primarily pertain to the user experience rather than the actual fragmentation of the internet. Labelling content-level limitations as internet fragmentation could be misleading and potentially detrimental. Such a misinterpretation might catalyse a self-fulfilling prophecy of a genuinely fragmented internet.

Another revolved around the role of governments, in some ways overlapping with content concerns. There’s apprehension that politicians might opt to establish alternate namespaces or a second internet root, thereby eroding its singularity and coherence. If political interests start shaping the internet’s architecture, it could culminate in fragmentation and potentially impede global connectivity. And yet, governments have been (and still are) essential in establishing obligatory rules affecting online behaviour when other voluntary measures proved insufficient. 

A third referred to the elusive nature of the concept of sovereignty. Although a state holds the right to establish its own rules, should this extend to something as inherently global as the internet? The question of sovereignty in the digital age, especially in the context of internet fragmentation, prompts us to reevaluate our traditional understanding of state authority in a world where boundaries are increasingly blurred.


Economic

Tax rules and economic challenges for the Global South

Over the years, the growth of the digital economy – and the question of how to tax it – has led to major concerns over the adequacy of tax rules. In 2021, over 130 countries came together to support the OECD’s new two-pillar solution. In parallel, the UN Tax Committee revised its UN Model Convention to include a new article on taxing income from digital services.

Despite significant improvements in tax rules, developing countries feel that these measures alone are insufficient to ensure tax justice for the Global South. First, these models are based on the principle that taxes are paid where profits are generated. This principle does not account for the fact that many multinational corporations shift profits to low-tax jurisdictions, depriving countries in the Global South of their fair share of tax revenue. Second, the two frameworks do not directly address the issue of tax havens, which are often located in the Global North. Third, the OECD and UN models do not take into account the power dynamics between countries in the Global North (which have historically led international tax policymaking) and the Global South. 

Yesterday’s workshop on Taxing Tech Titans: Policy options for the Global South discussed policy options accessible to developing countries. 

Countries in the Global South have adopted various strategies to tax digital services, including the introduction of digital services taxes (DSTs) that target income from digital services. That’s not to say that they’ve all been effective: Uganda’s experience with taxing digital services, for instance, had unintended negative consequences. In addition, unilateral measures without a global consensus-based solution can lead to trade conflicts.

So what would the experts advise their countries to do? Despite the OECD’s recent efforts to accommodate the interests of developing nations, experts from the Global South remain cautious: ‘Wait and see, and sign up later,’ a concluding remark suggested.

Diplo/GIP at the IGF

Reporting from the IGF: AI and human expertise combined

We’ve been hard at work following the IGF and providing just-in-time reports and analyses. This year, we leveraged both human expertise and DiploAI in a hybrid approach that consists of several stages:

  1. Online real-time recording of IGF sessions. Initially, our recording team set up an online recording system that captured all sessions at the IGF. 
  2. Uploading recordings for transcription. Once these virtual sessions were recorded, they were uploaded to our transcribing application, serving as the raw material for our transcription team, which helped the AI application split transcripts by speaker. Identifying which speaker made which contribution is essential for analysing the multitude of perspectives presented at the forum – from government bodies to civil society organisations. This granularity enabled more nuanced interpretation during the analysis phase.
  3. AI-generated IGF reports. With the speaker-specific transcripts in hand (or on-screen), we utilised advanced AI algorithms to generate preliminary reports. These AI-driven reports identified key arguments, topics, and emerging trends in discussions. To provide a multi-dimensional view, we created comprehensive knowledge graphs for each session as well as for individual speakers. These graphical representations intricately mapped the connections between speakers’ arguments and the corresponding topics, serving as an invaluable tool for analysis (see the knowledge graph from Day 1 at IGF2023).
Line drawing of an intricate web of fine, coloured lines and nexuses.
  4. Writing dailies. To conclude the reporting process, our team of analysts used AI-generated reports to craft comprehensive daily analyses. 
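The middle stages of the pipeline – splitting transcripts by speaker and mapping speakers to topics in a knowledge graph – can be sketched in a few lines. This is an illustrative toy (a simple keyword matcher with a made-up topic list), not DiploAI’s actual implementation:

```python
import re
from collections import defaultdict

# Illustrative topic list; the real system infers topics from the discussion.
TOPICS = {"ai", "fragmentation", "cybersecurity", "human rights"}

def split_by_speaker(transcript: str) -> dict:
    """Group transcript lines of the form 'Speaker: text' by speaker."""
    contributions = defaultdict(list)
    for line in transcript.splitlines():
        match = re.match(r"^([A-Z][\w .-]*):\s*(.+)$", line)
        if match:
            speaker, text = match.groups()
            contributions[speaker].append(text)
    return contributions

def knowledge_graph(contributions: dict) -> set:
    """Build (speaker, topic) edges whenever a speaker mentions a known topic."""
    edges = set()
    for speaker, texts in contributions.items():
        blob = " ".join(texts).lower()
        for topic in TOPICS:
            if re.search(rf"\b{re.escape(topic)}\b", blob):
                edges.add((speaker, topic))
    return edges

transcript = """Moderator: Welcome to the session on AI governance.
Panellist A: AI needs global standards, much like electricity.
Panellist B: Cybersecurity and human rights must be considered together."""
graph = knowledge_graph(split_by_speaker(transcript))
assert ("Panellist A", "ai") in graph
assert ("Panellist B", "cybersecurity") in graph
```

The point of the per-speaker split is visible even in this toy: edges attach arguments to specific speakers, which is what makes the later analysis of perspectives – government versus civil society, for example – possible.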

You can see the results of that approach on our dedicated page.

One part of Diplo’s Belgrade team at work. Does that clock say 2:30 a.m.? Yes, it does.
A photo collage of tourist sights around Tokyo frames a photo of DiploTeam eating at a restaurant, with the comment: ‘Diplo Crew at IGF2023’.
A part of our team attended the IGF in situ and participated in sessions as organisers, moderators and speakers. Here they are, on their last evening in Kyoto (above).

IGF 2023 – Daily 3


IGF Daily Summary for

Tuesday, 10 October 2023

Dear reader, 

On Day 2, the IGF got into full swing with intense debates in conference rooms and invigorating buzz in the corridors. The inspiring local flavours permeated the debate on the impact of digitalisation on the treasured Japanese manga culture and Jovan Kurbalija’s parallels between the Kyoto School of Philosophy and AI governance.


After the formalities and protocol of the first day, there was less of the usual diplomatic ‘language’ and more new insights and controversies. AI remains a prominent topic. While the familiar AI lingo continues, it was refreshing to see increased clarity in thinking about AI, away from the prevailing hype and fear-mongering in the media space.

The quality of the debate was enhanced by viewing AI from various perspectives, from technology and standardisation to human rights and cybersecurity. While acknowledging the reality of different interests and powers, the debate on AI surfaced the often-missing insight that all countries face similar challenges in governing AI. For this reason, focusing on human-centred AI may help reduce geopolitical tensions in this field.

The Global Digital Compact (GDC) triggered intense and constructive debate. While there is overwhelming support for the GDC as the next step in developing inclusive digital governance, the focus on details is increasing, including the role of the IGF in implementing the GDC and preserving the delicate balance between the multilateral negotiations of the GDC and multistakeholder participation. This year, the IGF also intensified academic debate with policy implications on the difference between internet and ‘digital’. 

Further down in this summary, you can find more on, among other things, internet fragmentation, cybersecurity, content moderation, and digital development. 

You can also read more on an exciting initiative using AI to preserve the rich knowledge that the IGF has generated since its first edition in Athens in 2006.

We wish you inspiring discussions and interesting exchanges on the third day of the IGF in Kyoto!

The Digital Watch Team

A rapporteur writes a report on a laptop while observing a dynamic panel discussion

Do you like what you’re reading? Bookmark us at https://wp.dig.watch/igf2023 and tweet us @DigWatchWorld

Have you heard something new during the discussions, but we’ve missed it? Send us your suggestions at digitalwatch@diplomacy.edu


The highlights of the discussions

The day’s top picks

  • AI: Increasing clarity in debates
  • Japan: Manga, Kyoto philosophers and digital governance
  • The GDC: Overall support and discovering the ‘devils in the details’
  • The IGF itself: Using AI to preserve the rich knowledge treasure of the IGF

Artificial Intelligence

Refreshing clarity in AI debates


We want AI, but what does that mean? Today’s main session brought refreshing clarity to the discussion on the impact of AI on society. It moved from the cliche of AI ‘providing opportunities while triggering threats’ to more substantial insights. The spirit of this debate contrasted starkly with the rather alarmist hype about existential threats that AI has triggered.

AI was compared to electricity, with the suggestion that AI is becoming similarly pervasive and requires global standards and regulations to ensure its responsible implementation. The discussion recognised AI as a critical infrastructure. 


A trendy analogy comparing AI governance to the International Atomic Energy Agency (IAEA) was criticised on the grounds that there are more differences than similarities between AI and nuclear energy.

While we wait for new international regulations to be developed, a wide range of actors could adopt voluntary standards for AI. For instance, UNICEF uses a value-based design approach developed by the Institute of Electrical and Electronics Engineers (IEEE).

The private sector’s role in AI governance is both inevitable and indispensable. Its involvement must therefore be transparent, open, and trustworthy – which, currently, it is not. However, the representative of OpenAI noted the recent launch of the Red Teaming Network as an industry attempt to be more open and inclusive. Other examples are the LEGO Group’s implementation of measures to protect children in its online and virtual environments and the UK Children’s Act.

Calls were made for national and regional attempts to advance local-context AI governance, as in Mauritius, Kenya and Egypt, which are taking steps towards national policies. In Latin America, challenges also arise from unique regional contexts, global power dynamics, and the intangible nature of AI. 

AI and human rights 

Children’s safety and rights were the focus of a workshop organised by UNICEF. AI has already entered classrooms, but without clearly defined criteria for responsible integration. The benefits are already clear: the innovative use of AI for bridging cultural gaps heralds a new era of global connectedness, and AI can support fair assessments. Going further, Honda’s HARU robot can provide emotional support to vulnerable children, and AI can fill gaps in severely under-resourced mental healthcare systems – in Nigeria, for instance, an ‘Autism VR’ project is raising awareness, promoting inclusion, and supporting neurodiverse children.

However, a note of caution was also sounded: the future of education lies in harnessing technology’s potential while championing inclusivity, fairness, and global collaboration. Some solutions are: integrating educators in the research process, adopting a participatory action approach, involving children from various cultural and economic backgrounds, and recognising global disparities given that AI datasets are insufficiently capturing the experiences of children from the Global South.

Human rights approaches carried weight today, echoing in the workshop on a Global human rights approach to responsible AI governance. The discussion highlighted the ‘Brussels effect’, wherein EU regulations become influential worldwide. Countries with more robust regulatory frameworks tend to shape AI governance practices globally, underscoring that rules have implications beyond national borders. In contrast, as some observers noted, Latin America’s history of weak democratic systems has generated a general mistrust of participation in global processes, hindering the region’s engagement in AI governance. Yet Latin America provides raw materials, resources, data, and labour for AI development, while the tech industry aggressively pushes for regional AI deployment in spite of human rights violations. The same can be said of Africa.

To address these challenges and ensure an inclusive and fair AI governance process, it is essential to strengthen democratic institutions, reduce regional asymmetries, and promote transparency and capacity development – keeping in mind that human rights should represent the voice of the majority.

AI and standardisation

It appears that regional disparities plague standardisation efforts as well. Standardisation is indispensable for linkages between technology developers, policy professionals, and users. Yet, as the workshop Searching for standards: The global competition to govern AI noted, silos remain problematic, isolating developers from policymakers and providers from users of the technology. The dominance of advanced economies as providers of AI tech, their heavy guarding of intellectual property rights, and early standard-setting have led to situations where harms are predominantly framed through a Global North lens, at the cost of impacts on users, usually in the Global South.

As a potential way of opening the early standard-setting game, open-source AI models support developing countries by offering immediate opportunities for local development and involvement in the evolving ecosystem. There is, however, a need for technical standards for AI content, with watermarking proposed as a potential standard. 

AI and cybersecurity

The use of AI in cybersecurity provides numerous opportunities, as noted during the workshop on AI-driven cyber defence: Empowering developing nations. The discussion centred on the positive role of AI in cybersecurity, emphasising its potential to enhance protection rather than pose threats. One example is AI’s effectiveness in identifying fake accounts and inauthentic behaviour.  

Collaboration and open innovation were emphasised as critical factors for AI cybersecurity. Keeping AI accessible to experts in other areas helps prevent misuse, and policymakers should incentivise open innovation. 

A person’s finger touches a digital fingerprint icon on an interlocking network of digital functions represented by icons connected to AI.

Unlocking the IGF’s knowledge using AI

Yesterday, Diplo and the GIP supported the IGF Secretariat in organising a side session on how to unlock the IGF’s knowledge and gain AI-driven insights for our digital future. The immense amount of data accumulated through the IGF over 18 years – a public good that belongs to all stakeholders – presents an opportunity for valuable insights when mined and analysed effectively, with AI applications serving as useful tools in this process. This way, this wealth of knowledge can be more effectively utilised to contribute to the SDGs.

Jovan Kurbalija smiles and reaches for the microphone on a panel at the IGF.

AI can enhance human capabilities to assist the IGF mission. AI has the capability to generate interactive reports from sessions (as it does at IGF2023), with detailed breakdowns by speaker and topic, narrative summaries, and discussion points. Such a tool can codify and translate the arguments presented during sessions, identify and label key topics, and develop a comprehensive knowledge graph. It can connect and compare discussions across different IGF sessions, identify commonalities, link related topics, and facilitate a more comprehensive understanding of the subject matter, as well as associate relevant SDGs with the discussions. 

AI can mitigate the challenge of the crowded schedule of IGF sessions, by establishing links to similar discussions and sessions from past years, which enables better coordination and consolidation of related themes over the course of years and meetings. Ultimately, AI can help us visualise hours of discussions and thousands of pages of discussion in the format of a knowledge graph, as done in Diplo’s experiment with daily debates at this year’s IGF (see below).
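To make the knowledge-graph idea above concrete, here is a minimal sketch of topic co-occurrence across sessions. The session names, texts, and topic list are invented for illustration; a real pipeline would work on full transcripts with proper NLP, but the principle – topics as nodes, co-mention within a session as edges – is the same.

```python
# Illustrative sketch only: a tiny co-occurrence 'knowledge graph' built
# from made-up session summaries (not actual IGF data).
from itertools import combinations
from collections import Counter

sessions = {
    "Main session on AI": "ai governance standards regulation",
    "GDC main session": "multistakeholder governance gdc igf",
    "Cybersecurity main session": "cybersecurity ai standards",
}
topics = {"ai", "governance", "standards", "multistakeholder",
          "cybersecurity", "gdc", "igf", "regulation"}

# Nodes are topics; an edge links two topics mentioned in the same session.
edges = Counter()
for text in sessions.values():
    mentioned = sorted(topics & set(text.split()))
    for a, b in combinations(mentioned, 2):
        edges[(a, b)] += 1

# Topic pairs linked in more than one session surface cross-cutting themes.
shared = {pair: n for pair, n in edges.items() if n > 1}
print(shared)  # {('ai', 'standards'): 2}
```

Linking sessions through shared topic pairs is what enables the comparisons and consolidation across years described above.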

An intricate multicoloured lace network of lines and nexuses representing a knowledge graph of Day 0 of IGF2023

AI can increase the effectiveness of disseminating and utilising the knowledge generated by the IGF. It can also help identify underrepresented and marginalised groups and disciplines in the IGF processes, allowing the IGF to increase its focus on involving them. 

Importantly, preserving the IGF’s knowledge and modus operandi can show the relevance and power of respectful engagement with different opinions and views. Since this approach is not a ‘given’ in our time, the IGF’s contribution could be much broader, far beyond the focus on internet governance per se.

Digital governance processes

GDC in the spotlight

While it may look like AI is the single most popular topic at this year’s IGF, there is at least one more thing on many participants’ minds: the much-anticipated Global Digital Compact (GDC). It’s no surprise, then, that a main session was dedicated to it. If this is the first time you are reading about the GDC (we strongly doubt it), we invite you to familiarise yourself with it on this process page before moving on.

If you know what the GDC is, then you most likely also know that one sore point in discussions so far has concerned the process itself: While the GDC is expected to be an outcome of the multilateral 2024 Summit of the Future, many state and non-state actors argue that there should be multistakeholder engagement throughout the GDC development process. But, as highlighted during yesterday’s main session, balancing multilateral processes and multistakeholder engagement is indeed a challenge. How to address this challenge remains unclear, but for the time being, stakeholders are encouraged to engage with their member states to generate greater involvement.

And speaking of multistakeholderism, one expectation (or rather a wish) that some actors have for the GDC is that it will recognise, reinforce, and support the multistakeholder model of internet governance. Another expectation is that the GDC will establish clear linkages with existing processes while avoiding duplication of efforts and competition for resources. For instance, it was said during the session that the IGF itself should have a role in implementing the GDC principles and commitments and in the overall GDC follow-up process. 

Beyond issues of process and focus, one particularly interesting debate has picked up momentum within the GDC process: whether and to what extent internet governance and digital governance/digital cooperation are distinct issues. Right now, there are arguments on both sides of the debate. Please contribute your views to the survey on the internet vs. digital debate.


Multistakeholder voices in cyber diplomacy

The IGF is, by nature, a multistakeholder space, but many other digital governance spaces struggle with how to define stakeholder engagement. This was highlighted in the session Stronger together: Multistakeholder voices in cyber diplomacy, where many participants called for enhanced stakeholder participation in policy-making and decision-making processes related, in particular, to cybersecurity, cybercrime, and international commerce negotiations.

The non-governmental stakeholders’ perspective is essential for impactful outcomes, transparency, and credibility. The absence of their input not only results in the loss of valuable perspectives and expertise, but also undermines the legitimacy and effectiveness of the policies and decisions made. Moreover, collaboration between state and non-state stakeholders can also be seen as mutually beneficial. Multistakeholder involvement could aid governments in the gathering of diverse ideas during negotiations and decision-making processes related to digital issues. 

However, as the session on enhancing the participation and cooperation of CSOs in/with multistakeholder IG forums noted, civil society organisations, especially from the Global South, face barriers to entry into global multistakeholder internet governance spaces. To engage impactfully in these processes, they need increased capacity building, transparency in policy processes, and spaces that allow for network building and coordination.

One approach to solving the conundrum of multistakeholder engagement in intergovernmental processes was proposed: implementing a policy on stakeholder participation. Such a policy, it was said, would transform stakeholder involvement into an administrative process, ensuring that all perspectives are consistently considered and incorporated into policy-making.

People in business dress and holding laptop computers converse in a hallway

Infrastructure

Turning back the tide on internet fragmentation

The concerned words of Secretary-General Antonio Guterres at the start of the 78th UN General Debate still echo in the minds of many of us. ‘We are inching ever closer to a great fracture in economic and financial systems and trade relations,’ he told world leaders, ‘one that threatens a single, open internet with diverging strategies on technology and AI, and potentially clashing security frameworks.’

Those same concerns were raised within the halls of the IGF yesterday. In one of the workshops, experts tried to foresee the internet in 20 years’ time: The path we’re on today, mired in risks, does not bode well for the internet’s future. In a second workshop, experts looked at the different dimensions of fragmentation – fragmentation of the user experience, of the internet’s technical layer, and of internet governance and coordination (explained in detail in this background paper) – and the consequences each carries. In a third workshop, experts looked at the technical community’s key role in the evolution of the internet and how it can best help shape the internet’s future.

The way we imagine the future of the internet might vary in detail. Still, the core issue is the same: If we don’t act fast, the currently unified internet will come precariously closer to fragmenting into blocs. 

It could be the beginning of the end of the founding vision of the free, open, and decentralised internet, which shaped its development for decades. We need to get back to the values and principles that shaped the internet in its early days if we are to recover those possibilities. These values and principles underpin the technical functioning of the internet, and ensure that the different parts of the internet are interconnected and interoperable. Protecting the internet’s critical properties is crucial for global connectivity.

As risks increase, we shouldn’t lose sight of the lasting positive aspects either. The internet has been transformational; it has opened the doors for instantaneous communication; its global nature has enabled the free flow of ideas and information and created major opportunities for trade. 

A swift change to the current state of affairs (undoubtedly affecting the online space) is forthcoming, The Economist argued recently. But if we want to be more proactive, there are plenty of spaces that can help us understand and mitigate the risks (one of which is the Global Digital Compact). Perhaps this will also give us the space to renew our optimism in technology and its future.

Debate on ‘fair share’ heats up

Internet traffic has increased exponentially, prompting telecom operators to request that tech companies contribute their fair share to maintain the infrastructure. In Europe, this issue is at the centre of a heated debate. Just a few days ago, 20 CEOs from most of Europe’s largest telecom companies called on lawmakers to introduce new rules. 

Yesterday, one lively discussion during an IGF workshop tackled this very question: whether over-the-top (OTT) service providers (e.g. Netflix, Disney Plus) should contribute to the costs associated with managing and improving the infrastructure. While the debate isn’t new, there were at least two points raised by experts that are worth highlighting:

  • Instead of charging network fees, ISPs could partner with OTT providers in profit-sharing agreements. 
  • It might be better if governments are left out of this debate. Instead of imposing new regulations, governments could encourage cooperation between companies. This seems to be an approach actively embraced by the Republic of Korea.

Development

Digital and the environment

The hottest summer ever recorded on Earth is behind us: June, July, and August 2023 were the hottest three months ever documented, World Meteorological Organisation (WMO) data shows. The discussion of the overall impact of digital technologies on the environment at the IGF, therefore, came as no surprise.

Internet use comes with a hefty energy bill: even seemingly small things like sending texts gobble up data and power. In fact, the internet’s carbon footprint amounts to 3.7% of global emissions. The staggering number of devices globally (over 6.2 billion) needs frequent charging, contributing to significant energy consumption. Some of these devices also perform demanding computational tasks that require substantial power, further compounding the issue. Moreover, the rapid pace of electronic device advancement and devices’ increasingly shorter lifespans have exacerbated the problem of electronic waste (e-waste).
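As a back-of-envelope illustration of the device-charging figure above, the sketch below multiplies the device count mentioned in the text by per-device assumptions of our own (battery size and charging frequency are illustrative, not figures from the discussion):

```python
# Back-of-envelope estimate of device-charging energy. Only the device
# count comes from the text; the other inputs are illustrative assumptions.
devices = 6.2e9          # connected devices worldwide (from the text)
battery_wh = 15          # assumed average energy per full charge, in Wh
charges_per_year = 365   # assumed one full charge per device per day

twh_per_year = devices * battery_wh * charges_per_year / 1e12
print(f"{twh_per_year:.0f} TWh/year")  # 34 TWh/year under these assumptions
```

Even under these modest assumptions, charging alone lands in the tens of terawatt-hours per year – before counting networks, data centres, and manufacturing.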

There are, however, a few things we can do. For instance, we can use satellites and high-altitude connectivity devices to make the internet more sustainable. We can take the internet to far-off places using renewable energy sources, like solar power. And crucially, if we craft and implement policies right from the inception of a technology, we can create awareness among start-up stakeholders about its carbon footprint. We can also leverage AI to optimise electrical supply and demand and reduce energy waste and greenhouse gas emissions, which, together, might even generate more reliable and optimistic projections of climate change.

A modern, white, three-bladed windmill stands in a field of green plants, against a blue sky.

Broadband from space

The latest data from ITU shows that approximately 5.4 billion people are using the internet. That leaves 2.6 billion people offline and still in need of access. One of the ways to bring more people online is by using Low Earth Orbit (LEO) satellites – think Starlink – to provide high-speed, low-latency internet connectivity. Another important element here is libraries, which often incorporate robotics, 3D printing, and Starlink connections, enabling individuals to engage with cutting-edge innovations.

There are, however, areas of concern regarding LEO satellites. Launching them on a large scale is technically challenging. Their environmental impact, both during launch and eventual disposal in the upper atmosphere, is unclear. For some communities, the cost of using such services might be too high. Additionally, satellites are said to cause issues for astronomical and scientific observations.

To fully harness the potential of these technologies, countries must re-evaluate and update their domestic regulations related to licensing and authorising satellite broadband services. Additionally, countries must be aware of international space law and its implications to make informed decisions. Active participation in international decision-making bodies, such as ITU and the UN Committee on Peaceful Uses of Outer Space (COPUOS), is crucial for shaping policies and regulations that support the effective deployment of these technologies. 

By doing so, countries can unlock the benefits of space-based technologies and promote the uninterrupted provision of wireless services on a global scale.

Starlink satellite dish on the roof of residential building

Accessible e-learning for persons with disabilities (PWD)

The accessibility challenges in e-learning platforms pose substantial hardships for people with disabilities, both physical and cognitive. Unfortunately, schools frequently fail to acknowledge or address the difficulties associated with online resource access with the immediacy they need and deserve. Those uninformed about and inexperienced with the obstacles of cognitive impairments often regard these issues as insignificant. This lack of awareness compounds the problem, leaving students with disabilities, especially those with cognitive impairments, to silently wrestle with these issues, a workshop on accessible e-learning experience noted.

Some solutions identified are: 

  • Involving users with disabilities in the development process of e-learning platforms
  • Integrating inclusion into everyday practice in educational institutions 
  • Implementing proactive measures and proper benchmarking and assessment tools to effectively address digital inclusion
  • Collaborating globally to make e-learning more accessible

Human rights

Digital threats in conflict zones

With all that’s going on in the Middle East, we can’t help but wonder how digital threats and misinformation are negatively impacting the lives of civilians in conflict zones. This issue was tackled in three workshops yesterday – Encryption’s role in safeguarding human rights, Safeguarding the free flow of information amidst conflict, and Current developments in DNS privacy.

In modern conflicts, digital attacks are not limited to traditional military targets. Civilians and civilian infrastructures, such as hospitals, power grids, and communications networks, are also at risk. In addition, with the growing reliance on a shared digital infrastructure, civilian entities are more likely to be inadvertently targeted. The interconnectedness of digital systems means that an attack on one part of the infrastructure can have far-reaching consequences, potentially affecting civilians not directly involved in the conflict.

The blurred lines between civilian and military targets in the digital realm have other far-reaching implications for trust and safety. They affect the credibility of humanitarian organisations, the provision of life-saving services, the psychological well-being of civilians, and their access to essential information.

Experts advocated a multi-faceted approach to address digital threats and misinformation in conflict zones. This included building community resilience, collaborating with stakeholders, enforcing policies, considering legal and ethical implications, and conducting thorough due diligence.

Connected paper cutout dolls in red, yellow, green, and blue hold hands, filling a white surface.

Sociocultural

Multilingualism, cultural diversity, and local content

As in previous years, the discussion on digital inclusion touched on the need to foster multilingualism and access to digital content and tech in native languages. This is particularly challenging in the case of less spoken languages such as Furlan, Sardo, and Arberesh, and these challenges need to be addressed if we want to truly empower individuals and communities to meaningfully engage in and take advantage of the digital world. The Tribal Broadband Connectivity Programme was highlighted as an example of an initiative that works to preserve indigenous languages, thereby adding tribal languages and cultural resources to the internet ecosystem. 

Universal acceptance (UA) was brought up as a way to enable a more inclusive digital space. UA is not only a technical issue (i.e. making sure that domain names and email addresses work with all internet applications, devices, and systems irrespective of script and language), but also one of digital inclusion: it fosters inclusivity and accessibility in the digital realm. And while core technical issues have mostly been resolved, more needs to be done to drive substantive progress on UA. Approaches include more efforts to raise awareness within the technical community about UA readiness; economic incentives (e.g. government preference in public procurement for vendors who demonstrate UA); government support and involvement in the uptake of UA; and policy coordination among different stakeholders.
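The technical core of UA can be illustrated with Python’s built-in ‘idna’ codec, which converts an internationalised domain name to the ASCII (punycode) form that legacy software expects; the domain below is a made-up example, not one from the discussion:

```python
# Sketch of universal acceptance in practice: an internationalised domain
# name has a Unicode form and an equivalent ASCII (punycode) form. A
# UA-ready application must handle both wherever a domain can appear,
# including the right-hand side of an email address.
domain = "bücher.example"  # hypothetical IDN, for illustration only
ascii_form = domain.encode("idna").decode("ascii")
print(ascii_form)  # xn--bcher-kva.example
```

Applications that reject either form – or that assume domains are always ASCII – are exactly the UA-readiness gaps the technical community is being asked to close.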

Multilingualism is not only about accessing content in local languages and in native scripts but also about developing such content. It was noted in a dedicated session that local content creation in minority languages contributes significantly to cultural and linguistic diversity. But challenges remain here as well. 

But in order to create content, users need to be able to access the internet. Yet, digital divides remain a reality, as do the lack of robust infrastructure, affordability issues (e.g. some households can only afford one device, while in many, even this one device is seen as a luxury), and gender inequalities, which prevent many from creating content. In addition, the mismatch between broadband pricing and the spending power of individuals hinders digital inclusion. Continued efforts are required to deploy reliable infrastructure with affordable pricing options.

Nonetheless, there is hope for universal access to the internet in the future. Advancements in technology are gradually making access less expensive with more options, potentially enabling broader internet access. And initiatives such as Starlink and Project Kuiper, which aim to provide connectivity to remote areas via satellites, are helping to bridge the digital divide.

One interesting point in the discussion was that the internet has not evolved into the egalitarian platform initially envisioned for content creation and access to information. Despite the TikTok phenomenon, instead of empowering individuals to become content publishers – it was said – the internet has given rise to powerful intermediaries who aggregate, licence, and distribute content. These intermediaries dominate the industry by delivering uniform content to a global market. And so, challenges remain regarding content distribution and ensuring equal access for all. In considering local content contributions, platform and content neutrality should also be considered to ensure a fair and diverse content ecosystem.

Cybersecurity and Digital Safety

The development of offensive cyber capabilities by states, impactful ransomware attacks, and the high risks of sexual abuse and exploitation of minors online have all raised the profile of cybersecurity and the importance of protecting against new and existing threats, the Main Session on Cybersecurity, Trust & Safety Online noted.

Offensive cyber capabilities and the legitimacy of using force in response to cyberattacks were outlined as important challenges, along with fighting the use of social networks as tools for interventionism, the promotion of hate speech, incitement to violence, destabilisation, and the dissemination of false information and fake news. 

Given the long list and complexity of issues, some feel a legally binding international instrument is needed to complement existing international law and adequately encompass cyberspace. Others underline the need to involve different stakeholders – the technical community, civil society, and companies, including law firms – in shaping any such instrument. The fast pace of tech development is another challenge in this endeavour. We should acknowledge the limitations of the comprehensive solution to which we aspire, and prioritise actions that could have the greatest near-term impact in mitigating risks.

Cybercrime negotiations

The debate on a UN treaty to combat cybercrime identified the following challenges: 

  • The current draft of the cybercrime treaty aims to extend the power of law enforcement but offers weak safeguards for privacy and human rights; treaty-specific safeguards may be necessary. 
  • Geopolitics dominates negotiations, and expert input is often needed (but not available) to understand the reality and shape of current cybercrime policies. 
  • Companies must play a crucial role in international cooperation against cybercrime.

Some concrete suggestions to foster increased cooperation and efficiency to combat cybercrime beyond international treaty provisions include the creation of a database of cybersecurity and cybercrime experts for knowledge and information sharing (the efforts of the UN OEWG and the OAS were outlined), developing a pool of existing knowledge to support capacity development for combating cybercrime (not least because policymakers often feel intimidated by technical topics), and focusing on expanding the role of existing organisations such as Interpol. Importantly, states and businesses should become more aware of the economic benefits and potential increase in GDP due to investments in cybersecurity.

International negotiations should also focus more on strengthening systems and networks at a technical level. This includes measures to ensure the development of more secure software, devices, and networks, through security-by-design and security-by-default; providing legal protections for security researchers when identifying vulnerabilities; enhancing education and information sharing; using AI in cybersecurity for identifying vulnerabilities in real-time and other tasks. The risks of emerging technologies have come to the forefront of cybersecurity; however, international discussions should not lose sight of the broader cybersecurity landscape.

Cybersecurity and development

In emerging cybersecurity governance frameworks, developing countries’ specificities should be considered. Taking West Africa as an example, challenges include the lack of national and regional coordination to effectively combat cyber threats; resource limitations on technical, financial, and human fronts; insufficient allocation of resources to the cybersecurity sector; a shortage of qualified personnel in the region; and weak critical infrastructure, which is particularly susceptible to cyber attacks (where frequent power outages and telecommunication disruptions are already commonplace). 

Cybersecurity frameworks developed in such an environment should be based on peer-to-peer cooperation between the states of the region, cooperation and information sharing with the private sector, and local adaptation of global best practices, considering the local context and challenges. Notable initiatives are the Joint Platform for Advancing Cybersecurity in West Africa, launched under the G7 German presidency, which aims to establish an Information Sharing and Analysis Center (ISAC), and the Global Forum on Cyber Expertise (GFCE) work with the Economic Community of West African States (ECOWAS) to enhance capacities through partnerships.

A sturdy padlock sits on a black table in front of a computer keyboard.

Legal

The potential of regulatory sandboxes

What happens when you toss traditional regulation into a sandbox and hand innovators a shovel? You get a regulatory playground where creativity flourishes, rules adapt, and the future takes shape one daring experiment at a time. 

The workshop Sandboxes for data governance highlighted the growing interest in this tool for the development of new regulatory solutions. Regions like Africa and the Middle East are in the early stages of adopting fintech-related sandboxes; Singapore has gained more experience and has fostered collaboration between industry and regulators. GovTech sandboxes, as seen in Lithuania, have become integral to the regulatory process, where controlled environments facilitate the testing and implementation of mature technologies in the government sector.

A common challenge is the significant resources and time required to implement sandboxes – particularly taxing for developing countries. It helps to learn from established sandboxes and tailor them to specific contexts. But more than that, collaborative efforts are needed between government authorities, industry players, civil society organisations, and regulatory bodies to make the process work.

The content creation revolution

The tectonic shift in content creation over the past decade has been internet-shaking. Content creation is no longer limited to an elite group of professionals, thanks to the widespread availability of user-friendly and inexpensive tools. Users are now generating vast amounts of unique and dynamic new content and sharing it on social media platforms and wherever the latest trend thrives.

Yesterday’s workshops on intellectual property discussed this shift, and the efforts to support the accessibility and availability of content through digital platforms. One workshop that looked at content creation and access to open information recognised that the industry is adapting to new technological advancements, while the workshop that looked at the manga culture (a cultural treasure of Japan, the host country) examined how the global manga market enjoyed rapid growth during the COVID-19 pandemic.

Both discussions explained how this transformation has its own challenges. The surge in user-generated content has raised important questions about intellectual property rights (IPR) and the ethical consumption of creative output, including the complexities of identifying and thwarting pirate operators, whose elusive tactics threaten creators’ livelihoods. The need for multistakeholder cooperation involving government bodies, internet communities, and industry players to effectively combat this threat goes without saying.

As the discussions unfolded, a common thread emerged: the need for innovation to meet the evolving demands of the digital age. But the discussions also demonstrated that the digital age demands not only legal frameworks and technological fortification but a nuanced understanding of the evolving dynamics between creators and consumers, and the content they develop and consume. 

Diplo/GIP at the IGF

Don’t miss our sessions today! 

Sorina Teleanu will speak at the open forum From IGF to GDC: A new era of global digital governance: A SIDS perspective. The session will examine the challenges developing countries face when engaging in global digital governance processes and explore ways to address such challenges. It will also discuss expectations from the ongoing GDC process and the relationship between the GDC process and the IGF. When and where? Wednesday, 11 October, at 09:45–11:15 local time (00:45–02:15 UTC), in Room I.

Anastasiya Kazakova will speak at a workshop on ethical principles for using AI in cybersecurity. The session will discuss what concrete measures stakeholders must take to implement ethical principles in practice and make them verifiable. It will also gather input and positions on how a permanent multistakeholder dialogue and exchange could be stimulated on this topic. When and where? Wednesday, 11 October, at 15:15 local time (06:15 UTC), in Annex Hall 1.

IGF 2023 – Daily 2


IGF Daily Summary for

Monday, 9 October 2023

Dear reader, 

Welcome to the IGF2023 Daily #2, your daily newspaper dedicated to the 18th Internet Governance Forum (IGF) discussions. 

AI and data were two keywords echoing throughout the kick-off day of IGF2023. Parliamentarians gathered for their now-traditional roundtable, and dozens of workshops discussed development, human rights, and other pillar themes. We noticed three main trends in yesterday’s debates.

First, many traditional narratives have been rehearsed, including the need ‘to manage both the opportunities and the risks that digital technologies bring’. Less repetition of common points could free more space for fostering new ideas through critical and engaging debates.

Second, existing initiatives were amplified in the debate, including a fresh focus on the G7 Hiroshima AI process and the G20 New Delhi initiative of digital public infrastructure.

Third, some new insights were brought into the debate, including a call for a ‘fourth way’ – beyond the EU, Chinese, and US approaches – which would help developing countries leverage data as a strategic asset for socio-economic development, amplified by cross-border exchanges.

As you can read below, our reporting aims to identify new insights, ideas, and initiatives in the IGF debates. You can also dive deeper into summary reports generated just in time by DiploAI.

The Digital Watch team, with support from DiploAI


Do you like what you’re reading? Bookmark us at https://wp.dig.watch/event/internet-governance-forum-2023 and tweet us @DigWatchWorld

Have you heard something new during the discussions, but we’ve missed it? Send us your suggestions at digitalwatch@diplomacy.edu


The summary of the discussions

The day’s top picks

  • The need to move from general debates on data as a public good for development to operational use 
  • Calls for a ‘4th way’ in data governance, in addition to the EU, China, and USA approaches
  • The role of parliamentarians in shaping a trusted internet 
  • Call for a judiciary track at the IGF

Leveraging the multistakeholder approach to build the Internet We Want

It is trite to speak, or indeed, in this case, write, about the impact that digital technologies have had on our everyday lives. However, it’s worth noting that these technologies now occupy a prominent position on the global stage, evident in G7, G20, G77, and UNGA discussions. Moreover, there’s a growing realisation that their potential extends far beyond what we’ve witnessed so far: They could help us achieve the SDGs, address climate change, and create a better world.

For instance, AI has emerged as a technology with the potential to enhance the impact of digital technology on the SDGs. Data show that 70% of the SDGs can benefit directly from digital technologies, highlighting their potential to positively impact global development.

UN Secretary-General Guterres outlined three key areas where we need to act:

  1. Bridging the connectivity gap by bringing the last 2.6 billion people online, especially women and girls in underdeveloped regions
  2. Addressing the governance gap by improving the coordination and alignment of the IGF and other digital policy/governance entities within and beyond the UN system
  3. Prioritising human rights and a human-centred approach to digital cooperation
UN SG António Guterres speaks during the opening of IGF2023. Credit: IGF Flickr

The internet must remain open, secure, and accessible. This requires increased support for long-established multistakeholder institutions. Guterres emphasised: ‘We cannot afford another retreat into silos.’ Following this approach, he said, we can maximise the benefits of the internet while reducing its risks, and build the internet we want.

The IGF has a role to play: It should strengthen its position as a global digital policy forum in finding points of convergence and consensus.

But as the IGF crosses the threshold of adulthood, the community can look back and ask: Has it delivered on its mandate and purpose? And the community can look forward and ask: How can the IGF better support preparations for and the follow-up to the Global Digital Compact and Summit of the Future?

The role of parliamentarians in shaping a trusted internet 

In the headlines, dwindling trust in politics, underscored by compelling polling data, has raised alarm bells. In the background, this negative trend threatens the legitimacy and effectiveness of political institutions, necessitating concerted efforts to rebuild trust. According to the Day 1 discussions, the integrity of democratic elections is in danger, in light of the widespread interference seen globally. It was noted that 70 democracies are scheduled to hold elections in 2024, and these elections are at a higher risk than ever before, given the growing misuse of digital technology for disinformation and election interference. 

Trust, a key phrase in this session, is further eroded as online violence, particularly against women in politics, remains a serious challenge. It not only jeopardises individual well-being but also undermines democratic processes. And then, there’s the enigma of AI – holding the promise of unprecedented opportunities while posing new challenges such as the micro-targeting of voting audiences, bias, or new, old, and changing privacy concerns. 

Amid these substantial concerns, the role of parliamentarians becomes pivotal. They are the cornerstones of our political system, entrusted with crafting the legal framework that governs our digital lives. Some speakers in this year’s parliamentary track pointed out that parliaments are the only branch of government that remains in touch with the individual daily digital lives of citizens. Thus, their active involvement in establishing robust frameworks for the governance of digital technologies rooted in transparency, accountability, and fairness is urgently needed. Contextualisation of global frameworks is another key point emphasised by the speakers. They argued that countries should adapt global frameworks to their specific needs and local requirements.

The need for agile governance was reiterated throughout the discussion: Sustainable, innovative, and future-proof regulations should be used to effectively and efficiently respond to the ever-changing digital technology landscape. 

The session was a call to action to reestablish trust, combat online violence, safeguard electoral integrity, and navigate the complex realm of AI. Above all, it underscored the indispensable role of parliamentarians in global digital governance.

The plenary hall of IGF2023 in Kyoto | Credit: SasaVK

AI

In September, the G7 Hiroshima Leaders agreed to develop an international code of conduct for AI. This mirrors the European Commission’s approach to developing voluntary AI guardrails ahead of its actual AI law and the approaches adopted or drafted in the USA and Canada. The High-Level Leaders Session V: Artificial Intelligence reiterated these calls and the need for international guiding principles, research and investment, awareness of the local context, and stakeholder engagement in achieving safe and trustworthy AI.  

The trajectory of AI systems is anticipated to evolve towards multimodal capabilities, seamlessly integrating text and visual content with fluency in multiple languages, extending its global impact beyond English. Generative AI, expected to be as transformative as the internet was, emerged as a central discussion point, poised to etch its mark on history. However, consumers need to know what is AI-generated content and what is human-generated content, particularly ahead of global elections.

The spotlight was on the alarming proliferation of AI-amplified misinformation and disinformation and the profound impact of technology on human emotions and rationality. In this context, there emerged a resounding call for truth, trust, and shared reality, reaffirming the pivotal role of journalism in upholding democratic values. Simultaneously, it was also recognised that the deployment of AI can help address pressing global challenges –  highlighting disaster management, climate crises, global health, and education as high-risk domains.

Transparency and collaboration emerged as linchpins for solutions. Transparency was analysed in the context of technical development and the governance of AI systems. Singapore’s effort in launching the open-source AI Verify Foundation was mentioned as an example of the commitment to open discourse and robust governance. Collaboration, particularly in a multistakeholder fashion, was highlighted, and the private sector was recognised as a force driving AI innovation and, thereby, a necessary partner to governments in governing AI.

Looking ahead, the session heralded the Hiroshima AI Process and the plans for an AI expert support centre under the Global Partnership on AI (GPAI) as signifiers of a proactive approach to addressing AI challenges. Forums such as the Frontier Model Forum, the Partnership on AI, and MLCommons also represent similar forward-looking efforts. The UN, the ITU, and the OECD were asked to be more prominent in advancing AI initiatives.


Internet fragmentation

As expected, internet fragmentation was on the agenda of the IGF. As a network of networks, the internet is inherently fragmented; yet concerns are looming about harmful fragmentation, which would hinder the intended functioning of the internet.

Geopolitical developments are changing internet governance. States increasingly seek to achieve digital sovereignty to exert control over their respective internet spheres. This comes as a response to the adverse effects of internet weaponisation, digital interference,  disinformation, misinformation, and campaigns embracing violence outside their national borders. Such regulatory tendencies, however, can lead to internet fragmentation with negative consequences, including restrictions on access to certain services, internet shutdowns and censorship, and exacerbation of the digital divide in underdeveloped regions. The internet as we know it cannot be taken for granted any more.

International norms are critical to reduce the risks of fragmentation. International dialogue in forums like the IGF is a valuable tool for inclusive discussions and contributions from diverse stakeholders. It is important to acknowledge different perspectives about fragmentation between the Global North and Global South. National regulations must, therefore, consider different contexts and allow countries to pursue their own policies. However, they should maintain a comprehensive approach to internet governance. Of particular relevance are the regulatory frameworks with extraterritorial implications – like those of the EU, China, and India – due to their economic powers and the global nature of the internet.

In developing national and regional regulatory frameworks, it is important to consider multistakeholder input, because the internet – and its critical resources – are not used, owned, or managed solely by states. It can be difficult to establish a central authority responsible for shaping internet policy requirements. Inclusivity and user empowerment are also important, particularly considering the perspectives of marginalised and vulnerable communities. At the same time, there is a significant risk in leaving public policy functions in the hands of private corporations. The industry should accept that it is not exempt from regulations.

A particular concern about harmful fragmentation is related to state control over the public core of the internet and its application layer. Different technologies operate at several layers of the internet, and those distinct layers are managed by different entities. Disruptions in the application layer could lead to disruptions in the entire internet. Therefore, governance of the public core calls for careful consideration, a clear understanding of these distinctions, and deep technical knowledge. 

Accountability for the governance of the public core of the internet should be dealt with on an international level. Regulations related to the technical layers should follow a layered policy approach, in which different regulations may be required for each layer (following approaches embraced in Japan and the Netherlands, for instance). By considering the specificities of different layers, policymakers can create a cohesive and comprehensive regulatory approach that does not lead to internet fragmentation (for instance, a layered approach to sanctions can help prevent unintended consequences like hampering internet access).

Subsea internet cables. Credit: Airtel Business

Human rights

Looking specifically at the intersection of gender and youth online to achieve a safer and more inclusive digital environment, the workshop BeingDigital Me: Being youth, women, and/or gender-diverse online presented different perspectives in addressing this matter. The speakers highlighted several initiatives that address gender and gender-based violence online, such as the Global Partnership for Action on Gender-Based Online Harassment and Abuse, the work of the Internet Society’s Gender Standing Group, and recent initiatives in Colombia. Speaking about the need for inclusivity and collaboration to combat tech-facilitated gender-based violence and gendered disinformation, the speakers pointed out the role of education and skill development in fostering increased youth participation. In addition, they spoke about the positive impacts of online platforms that promote gender-related initiatives and the need for a specific framework to address digital violence. 

The session on advocacy with Big Tech in restrictive regimes discussed a complex set of issues related to advocating and implementing human rights policies in countries with regimes restricting digital rights. From the perspective of civil society, in addition to engaging with governments on these issues, a challenge lies in understanding the ecosystem, the complexity of regulation and policies, and the fast-paced changes taking place. In restrictive regimes, tech companies that have human rights policies in place must address the dilemma of whether to comply with restrictive national rules or uphold their human rights policies and, as a consequence, limit their business activities within those jurisdictions. Also discussed was the specific role of platforms and their responsibility when it comes to content moderation (particularly AI-moderated content), addressing disinformation, and enhancing transparency. 

In addition, the speakers addressed the imbalance in capacity between civil society and tech companies, the challenges of sudden structural changes in tech companies that impact human rights corporate policies, and the imperative that civil society advocates for implementing human rights policies and contingency strategies by tech companies.

The participants discussed the examples of Russia, Vietnam, Türkiye, Syria, and Pakistan.


Data governance

The Data Free Flow with Trust (DFFT), one of the pillar themes of IGF2023, was the focus of the session on the development aspects of free data flows, Opportunities of cross-border data flow – DFFT for development. Data is a recurrent topic at the IGF, with many narratives based on the need to balance the flow of data with trust and privacy, the extraction of value from data, data transparency, private-public partnerships, and others. 

A few novel highlights in this year’s discussions included: 

  • a call for a ‘fourth way’ for data governance (in addition to the approaches taken by the USA, the EU, and China) in which data would feature as a strategic asset of developing countries, used for socio-economic development 
  • the centrality of digital public infrastructure (DPI) as an infrastructure for inclusive, open, effective use of data 
  • a more operational and practical concept of data as a public good (see the session on African AI: Digital Public Goods for Inclusive Development)
  • strengthened voices of developing and least-developed countries in emerging global data governance frameworks
  • mainstreamed Data Free Flow with Trust in development assistance projects and initiatives

Tackling the operationalisation of Data Free Flow with Trust, the speakers in a dedicated session discussed the main challenges in ensuring that privacy, security, and intellectual property are safeguarded while promoting the free flow of data. The speakers highlighted the challenges related to access to data, the need for redress mechanisms, and the impacts of restricted data flow on the fragmentation of the internet. They also addressed the responsibilities of different stakeholders – including governments and the private sector (be it internet companies or the telecom sector, for instance) – in safeguarding the privacy and security of data. A human rights-based approach to data and the involvement of civil society in the relevant policy processes were mentioned as a must for ethical and responsible data governance.

It was also emphasised that applying the rule of law in the digital space is as crucial as in the physical world. A proposal was made to establish a judiciary track at the IGF to include judges and other professionals in the judiciary field in discussions related to digital governance. This would provide them with a specific platform to engage with experts, share insights, and gather more knowledge about digital governance.


Digital and environment

Green and digital are two pillars of many policy approaches and strategies worldwide. The workshop Cooperation for a green digital future highlighted the potential of AI and the internet of things in reporting and gathering accurate information about climate change. Yet, without common measurement standards, the impact of new technologies will be limited. The new societal dynamism of youth in the climate field has much potential for accelerating the multistakeholder approach in advancing an interplay between digital and green policy dynamics.

Because Japan, the host of IGF 2023, has been a leader in robotics for decades, it is not surprising that robots are featured in the IGF debate. This was the case, for instance, in the session Robot symbiosis cafe, where several examples of using robots to assist people with disabilities were given. But beyond highlighting the potential for good, the debate also raised significant concerns, including the need to deal with the hype surrounding the use of robots in society and the risk of new forms of divides emerging because developing countries might not have the resources and know-how to develop robotics. One solution for making robots more affordable is to foster agile, innovative enterprises to streamline the process of robot design and production, ultimately lowering costs and reducing development time.

Diplo/GIP at IGF2023

Follow our just-in-time reporting!

Unable to attend all the sessions you’re interested in? DiploAI and the team of experts have you covered with just-in-time reporting from IGF2023. Read summaries of the sessions and the main arguments raised during discussions, available only a few hours after the sessions conclude. View knowledge graphs as visual mapping of debates. Bookmark our dedicated IGF2023 page on the Digital Watch observatory, or download the app to read the reports.


We’re also present at the IGF2023 Village! 

If you’re attending IGF2023 in Kyoto in person, come visit us at booth 56! If you’re joining the meeting online, we have a virtual booth you can swing by!

Diplo’s director Jovan Kurbalija with Indonesia’s delegation at Diplo/GIP booth at the IGF | Credit: SasaVK

Don’t miss our sessions today! 

We supported the IGF Secretariat in organising a session on unlocking the IGF’s knowledge, where Jovan Kurbalija and Sorina Teleanu will discuss the power of epistemic communities, organising data, and harnessing AI insights for our digital future. When and where? Tuesday, 10 October, at 12:30 – 13:15 local time (03:30 – 04:15 UTC), in Room K.

Pavlina Ittelson will moderate an open forum on ways to enhance in-depth long-term participation and efficient cooperation of CSOs in multilateral and multistakeholder internet governance fora. When and where? Tuesday, 10 October, at 14:45–16:15 local time (05:45–07:15 UTC), in Room K.

DW Weekly #131 – 9 October 2023


Dear all,

The EU is in the spotlight this week: It has just published its list of critical technology areas – similar to lists other countries have drawn up – which it will assess for risks to its economic security. In other news, Kenyan lawmakers want to halt Worldcoin’s operations in the country, while Microsoft’s testimony in the ongoing US trial against Google shows how intense the race for data is.

Let’s get started.

Stephanie and the Digital Watch team

PS. If you’re reading this from Kyoto (IGF 2023), join us for discussions and drop by our booth.


// HIGHLIGHT //

The four critical technologies the EU will assess for risks:
AI, advanced chips, quantum tech, and biotech

The European Commission announced on Tuesday that it will review the security and leakage risks of four vital technology domains – semiconductors, AI, quantum technologies, and biotechnologies – from among the 10 technology areas most critical to the EU’s economic security.

What it means. The EU wants to make sure that these technologies do not fall into the wrong hands. If they do, they could be exploited to hurt others. For instance, biotechnologies used for medical treatment can be exploited for potential biowarfare applications. If quantum cryptography designed to safeguard a country’s critical infrastructure is misused, it could potentially undermine or disrupt the critical operations of that same country. We can only imagine what lies in store for AI if it’s used for hostile purposes.

Dual-use. These four technologies were prioritised due to their transformative nature, their potential to breach human rights, and the risk they carry if used for military purposes. In fact, they all share dual-use capabilities; that is, they all have the potential for both civilian (healthcare, communications, etc.) and military (weapons, etc.) applications.

Other risks. In addition to tech security and leakage, the EU thinks there are other critical risks that will eventually also warrant attention: those linked to the resilience of supply chains; those affecting the physical and cybersecurity of critical infrastructure; and those with implications for the weaponisation of economic dependencies and economic coercion. 

Countries of concern. The recommendation does not mention any specific country that would be targeted, but there’s one term that gives it away. The concept of ‘de-risking’ (in contrast with decoupling), mentioned several times in the recommendation, forms part of the EU’s policy of reducing reliance on China. It’s therefore quite clear that China will be one of the main targets of the risk assessments.

Issue #1: Divergences. The risk assessments will be carried out in collaboration between the Commission and its member states (with input from the private sector). They are the first steps towards implementing the new European Economic Security Strategy, published in June. As with all things new, competing interests and diverging geopolitical concerns are a main challenge: European countries are divided, with France and Germany favouring an investment-first approach, while central European countries adopt a more critical stance towards China.

Issue #2: Protectionism. The EU is set to make crucial decisions next year on the measures it will implement, and on whether it will carry out collective risk assessments on the remaining six technology areas. A potential challenge is that, in China’s eyes, these measures could portray the EU as increasingly adopting protectionist policies. If this perception takes hold, it has the potential to significantly harm trade relations between the EU and China. EU Commissioner Thierry Breton’s assertion that ‘protection does not mean protectionism – again, I insist on this’ is unlikely to assuage concerns.

A geopolitical trend. Though the EU is the latest actor to move ahead with its plan to reduce risks, it’s by no means the first. Other countries, notably the USA and Australia, have published similar lists of technologies they are assessing for the risks they pose.

Yet, there’s a notable difference: The foundation of Europe’s approach is to de-risk, not decouple, supporting the economic security strategy’s tripartite approach of protecting, promoting, and partnering. What remains to be seen is whether the latter will be consigned to a simple theoretical construct.


Digital policy roundup (2–9 October)

// AI GOVERNANCE //

In USA v Google, Microsoft says companies are competing for data to train AI

Testifying in the ongoing US trial against Google, Microsoft CEO Satya Nadella (appearing as a plaintiff’s witness) said that tech giants were competing for vast troves of content needed to train AI. Companies are entering into exclusive deals with large content makers, which are locking out rivals, Nadella said.

The lawsuit concerns Google’s search business, which the US Department of Justice and state attorneys-general consider ‘anticompetitive and exclusionary’. They are arguing that Google’s agreements with smartphone manufacturers and other firms have strengthened its search monopoly. Google has counterargued that users have plenty of choices and opt for Google due to its superior product.

Why is it relevant? First, Nadella’s comments highlight the resources required by AI technology: computing power, and large troves of data. Second, Nadella said these exclusionary data agreements reminded him of ‘the early phases of distribution deals’ – which is to say that agreements with content providers are monopolising valuable content just as Google allegedly did with smartphone manufacturers and other companies.

Case details: USA v Google LLC, District Court, District of Columbia, 1:20-cv-03010




Hollywood strike 2023

The writers’ strike is over. After a historic strike lasting almost five months, the Writers Guild of America – which represents over 11,500 screenwriters – struck a deal with Hollywood companies on the use of AI: AI-generated material may not be used to undermine or split a writer’s credit, or to adapt literary material; companies can’t force writers to use AI tools; and companies have to disclose whether material given to writers is AI-generated.


// ANTITRUST //

Korean communications authority fines Google, Apple

The Korean Communications Commission (KCC) is fining Google and Apple for abusing their dominant position in the app market. The fine can go up to KRW68 billion (USD50 million).

Google and Apple were found to have forced app developers to use specific payment methods and to have delayed app reviews unfairly. In addition, Apple was found to have charged domestic app developers discriminatory fees.

Why is it relevant? South Korean authorities have taken aim at Big Tech companies’ practices in recent years. In April, the country’s Fair Trade Commission (FTC) fined Google USD32 million for blocking the growth of local app marketplace rival One Store Co. In 2021, the FTC fined the company around USD1 million ‘for obstructing other companies from developing rival versions of the Android operating system.’


// WORLDCOIN //

Kenyan lawmakers want Worldcoin to cease operations in the country

A Kenyan parliamentary panel called on the country’s information technology regulator on Monday to shut down the operations of cryptocurrency project Worldcoin within the country until more stringent regulations are put in place. 

The lawmakers’ report concluded that Sam Altman’s Worldcoin project constituted an ‘act of espionage’. The panel also urged the government to launch criminal probes into Tools for Humanity Corp, the company behind Worldcoin’s infrastructure, for operating in Kenya illegally.

Why is it relevant? First, Kenya could set a precedent on how countries could deal with Worldcoin, even though the operations are being scrutinised in other countries as well. Second, it shows the speed at which new technologies can enter a market, leaving regulators to grapple with the policy implications.


// INFRASTRUCTURE //

Amazon launches first test satellites for Kuiper internet network

Amazon launched its initial pair of prototype satellites from Florida last week, the company’s first step before it deploys thousands more satellites into orbit.

However, Amazon is up against pressing schedules on multiple fronts. First, the Federal Communications Commission mandates that at least half of the proposed 3,236 satellites in Project Kuiper’s constellation must be launched by mid-2026. Second, Amazon faces the challenge of catching up with SpaceX, which already boasts over 2 million customers for its satellite internet service.

Why is it relevant? Low-orbit satellites, like the ones launched by Amazon, can expand global connectivity significantly. By operating closer to the Earth’s surface, these satellites enable faster communication speeds, lower latency, and wider coverage.


The week ahead (9–16 October)

Ongoing till 12 October: The annual Internet Governance Forum (IGF2023) is taking place in Kyoto, Japan and online this week. Follow our dedicated space on Dig.Watch for reports. Expect a round-up in next week’s edition.

11–12 October: With so many elections around the corner, the EU DisinfoLab’s 2023 conference will have plenty to discuss.

12–15 October: The 13th IEEE Global Humanitarian Technology Conference in Pennsylvania, USA, will address critical issues for resource-constrained and vulnerable people.

16–17 October: This year’s International Regulators’ Forum will be hosted in Cologne, Germany. The Small Nations Regulators’ Forum takes place on the second day.

16–20 October: UNCTAD’s 8th World Investment Forum returns as an in-person event hosted in Abu Dhabi, UAE. 


#ReadingCorner
Webpage banner of report

More countries, sectors under attack – Report

Cyberattacks have increased globally, with government-sponsored spying and influence operations on the rise. The primary motives? Stealing information, monitoring communications, and manipulating information. These insights are from Microsoft’s latest Digital Defense Report, covering trends from July 2022 to June 2023.

Webpage banner of report

Empowering everything with AI

This Wall Street Journal article (paywalled) talks about the growing role of AI in practically every aspect of our lives – from virtual assistants like Siri and Alexa to automated systems in the workplace. We’ll soon be unable to escape it.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior Editor Digital Policy, DiploFoundation

IGF 2023 – Daily 1


IGF Daily Summary for

Sunday, 8 October 2023

Dear reader, 

If you’ve arrived in Kyoto, good morning.

If you’re on your way to Kyoto, happy travels.

If you’re following the IGF online, we hope you’re settled in comfortably.

Welcome to the IGF2023 Daily #1, your daily newspaper dedicated to the 18th Internet Governance Forum (IGF) discussions. 

As per tradition, Diplo and the Geneva Internet Platform (GIP) are providing just-in-time reporting from IGF2023, bringing you session summaries, data analysis and more. What’s not traditional is the addition of DiploAI, our new AI reporting tool, to the mix. In this hybrid system, Diplo’s human experts and AI tool work together to deliver a more comprehensive reporting experience.

Our AI tool will prepare session reports for all the events, while our dedicated human team curates daily highlights from these reports, delivered to your inbox each day. This issue covers the highlights of Day 0 at IGF2023 on 8 October 2023.

Let’s begin.

The Digital Watch team

Drawing of a rapporteur taking notes at the back of the room as panelists discuss dynamically in front of a projection screen.

Do you like what you’re reading? Bookmark us at https://wp.dig.watch/event/internet-governance-forum-2023 and tweet us @DigWatchWorld

Have you heard something new during the discussions that we’ve missed? Send us your suggestions at digitalwatch@diplomacy.edu


The summary of the discussions

The day’s top picks

  • Importance of Data Free Flow with Trust (DFFT)
  • Establishment of a permanent data policy forum
  • Appointing a goodwill ambassador for digitalisation and the SDGs 
  • Call for bottom-up AI
Kiosk poster with a typical Japanese garden scene announcing the 18th Annual Meeting of the Internet Governance Forum.

High-Level Leaders Session I: Understanding Data Free Flow with Trust (DFFT)

The centrality of data marked the kick-off of IGF’s high-level debate. Speakers during the High-Level Leaders Session I: Understanding ‘Data Free Flow with Trust’ (DFFT) anchored data in the wider context of tackling climate change and dealing with health challenges, among others. Thus, cross-border data transfers become critical for our shared digital future. 

The Data Free Flow with Trust (DFFT) concept was introduced to ensure these transfers go smoothly. DFFT aims to enable the flow of data worldwide while ensuring data security and privacy for users.

However, the concept is not without challenges: The global data landscape is fragmented with diverse data security and privacy perspectives. Some of the concerns mentioned at this session were: potential privacy threats from third-party access and government surveillance, as well as the credibility and reliability of data sources. These concerns underscored the importance of implementing explicit principles on governmental access, as demonstrated by the OECD’s Trusted Government Access Program. It was also highlighted that building trust in institutions responsible for data collection is imperative, as is effective policy oversight.

Cross-sector collaboration is also needed to strengthen data governance. Promoting regulatory and operational sandboxes has also been proposed as a practical solution to foster good governance among stakeholders.

New initiatives for data governance are needed to establish a robust global framework capable of efficiently managing data to ensure secure and effective data flow. The idea of creating a permanent international forum for data policy dialogue has been gaining wider acceptance, including support from G7 nations. Such a forum would help avoid the risk of fragmented data laws and regulations.

Abstract digital waves with flowing particles

High-Level Leaders Session II: Evolving trends in mis- & dis-information

An MIT report from 2018 found that lies spread six times faster than the truth. The situation is worsened by rapid advancements in generative AI, which can create synthetic content nearly indistinguishable from authentic content. This problem is even more critical in the Global South, where weaker institutional structures make populations more susceptible to misinformation.

What can be done to tackle misinformation and disinformation? A holistic, multistakeholder approach is needed, the High-Level Leaders Session II: Evolving trends in mis- and dis-information noted. One part of the solution is enhancing media literacy to proactively counter the strength and attraction of disinformation, an approach known as pre-bunking. Users also need to be aware of the health and accuracy of the information they consume, and consume it in a balanced and unbiased manner (this was likened to maintaining nutritional balance in one’s diet).

Another part of the solution is heightening the responsibility, transparency, and accountability of tech platforms, which should advance responsible innovation, boost fact-checking capabilities and comply with a Code of Practice against disinformation.

Strengthening regulation also forms a crucial part of the solution, but introducing new regulations brings its own challenges, chief among them that regulation emerges slowly, always lagging behind new technologies for content generation, including fake content. A revised governance structure that rewards the sharing of accurate information, to counter the appeal of false news, should also be in place. The concept of ‘digital constitutionalism’ is emerging as a novel regulatory approach, offering a promising way to control the amplified influence of tech companies. It involves crafting collaborative global legislation and international frameworks capable of effectively confronting and regulating these platform companies.

Tiles form a wordplay illustration of FAct and FAke by using the same F and A for the beginning of both words

High-Level Leaders Session III: Looking ahead to WSIS+20: Accelerating the multistakeholder process 

2025 will mark 20 years since the first World Summit on the Information Society (WSIS) was held, and it is time for a review. The principles established during WSIS are still relevant, and the multistakeholder approach is crucial for effective internet governance, as highlighted by High-Level Leaders Session III: Looking ahead to WSIS+20: Accelerating the multistakeholder process. WSIS has made significant progress in establishing a human-centric, digitally connected global society through the use of ICTs. 

WSIS+20 is at a turning point, building on what worked well and adjusting to what is ahead of us, especially related to challenges triggered by AI. In this new context, the following issues remain high on the agenda of WSIS discussions: inclusion, bridging the digital divide, and putting human values at the centre of AI and digital developments.

Iconic logo of the World Summit on the Information Society Geneva 2003 - Tunis 2005

High-Level Leaders Session IV: Access & innovation for revitalising the SDGs

As the clock ticks towards the 2030 SDG deadline, digital technology is increasingly seen as a way to rescue the SDGs. This was clear during the SDG debates at the UN General Assembly in September, and the call to add a digital element to the SDGs echoed during the High-Level Leaders Session IV: Access & Innovation for Revitalising the SDGs. The session listed numerous areas where digital can support the 2030 Agenda, including poverty, inequality, climate change, and the digital divide.

One of the examples from the session noted that initiatives utilising technologies such as AI, cloud computing, sensors, drones, and blockchain in smart agriculture are being implemented globally to tackle SDG2 and eradicate hunger. 

The session debate listed the following key issues in the nexus between the SDGs and digitalisation: ethical considerations, privacy concerns, digital literacy and equitable access to technology, responsible governance, and education. 

The discussions highlighted the interconnectedness of these goals and emphasised the need for a comprehensive, cooperative approach. Collaboration and partnerships among all stakeholders, including governments, the private sector, and civil society, are deemed essential for the successful implementation of digital solutions in advancing the SDGs. Creating the position of goodwill ambassador for digitalisation and SDGs and initiating digital enlightenment movements can further help spread awareness and knowledge.

Icons for each of the 17 SDGs form a circle around the title: Sustainable Development Goals, with a line linking each one to the centre.

IGF Leadership Panel paper: The Internet We Want

The IGF Leadership Panel, a body established by the UN Secretary-General to support and strengthen the IGF, presented its paper The Internet We Want in a Day 0 session. According to the paper, the internet needs to be:

1. Whole and open. The potential fragmentation of the internet threatens social and economic development benefits, while also harming human rights.

2. Universal and inclusive. Data show that 2.7 billion people remain offline. Connecting them not only requires infrastructure, but also digital skills, and applications and content relevant for users. Frameworks that enable internet connectivity should be based on light-touch ICT policy and regulations, and encourage universal access, competition, innovation, and the development of new technologies. 

3. Free-flowing and trustworthy. Trust is strengthened when governments adopt robust and comprehensive commitments to protect the rights and freedoms of individuals. Cooperation between governments and stakeholders, including business and multilateral organisations, is needed to advocate for interoperable policy frameworks that facilitate cross-border data flows, enabling data to be exchanged, shared, and used in a trusted manner, thereby fostering high privacy standards.

4. Safe and secure. Robust frameworks for high levels of cybersecurity should be established, along with strong recommendations for legal structures, practices, and cross-border cooperation to combat cybercrime.

5. Rights-respecting. Human rights must be respected online and offline, and a human rights-based approach to internet governance is required to realise the full benefits of the internet for all.

What lies ahead?
IGF2023 virtual reception desk attended by an avatar

300+ sessions! That’s what’s in store for IGF2023. We will be with you throughout it all: the high-level leaders track, parliamentary track, youth track, main sessions, workshops, dynamic coalition sessions, open forums, town halls, lightning talks, award launches, and networking sessions. Bookmark our dedicated IGF2023 page on the Digital Watch observatory, or download the app to read the session reports.

Diplo/GIP at IGF2023

Diplo and the GIP are actively engaged at IGF2023, organising and participating in various sessions.

Diplo’s Director of Knowledge Sorina Teleanu (left) and Diplo’s Executive Director Jovan Kurbalija (right) during Diplo’s Day 0 event.


We kicked off Day 0 with a session on bottom-up AI and the right to be humanly imperfect, where we discussed how AI models should reflect more diversity, relying on different communities’ distinct traditions and practices. Such an approach will contribute to a more authentic, bottom-up AI model that does not limit itself to predominantly European philosophical traditions. We also noted that the uniqueness and imperfection of human traits are invaluable characteristics and essential considerations in the development of AI.

We’re also at the IGF2023 Village! If you are on the ground at IGF2023 in Kyoto, drop by our Diplo/GIP booth. If you’re joining the meeting online, check out our space in the virtual Village.

Jovan Kurbalija, Sorina Teleanu, and Pavlina Ittleson at the Diplo/GIP booth at IGF2023.