DW Weekly #140 – 18 December 2023


Dear all,

The OEWG wrapped up its sixth substantive session, marking the midway point of this process. COP28 addressed the climate crisis with Green Digital Action, and Epic Games secured an antitrust victory against Google. In the AI sphere, global leaders pledged support for responsible AI, balancing innovation and ethics at the 2023 GPAI Summit in New Delhi, while OpenAI partnered with Axel Springer to deliver news through ChatGPT, merging AI with real-time updates. China’s online censors targeted digital pessimism, and Ukraine suffered a cyberattack on the country’s largest telecom.

This will be the last weekly digest in 2023 – we will take a short break for the holidays and be back in your inbox on 8 January 2024.

Let’s get started.

Andrijana and the Digital Watch team

// HIGHLIGHT //

OEWG wraps up its sixth substantive session

The sixth substantive session of the UN Open-Ended Working Group (OEWG) on security of and in the use of information and communications technologies 2021–2025 was held last week. The OEWG is tasked with the study of existing and potential threats to information security, as well as possible confidence-building measures and capacity building. It should also further develop rules, norms, and principles of responsible behaviour of states, discuss ways of implementing them, and explore the possibility of establishing regular open-ended institutional dialogue under the auspices of the UN.

Here is a quick snapshot of the discussions. A more detailed follow-up will be published this week: Keep an eye out for it on our dedicated OEWG page.


Threats. The risks and challenges associated with emerging technologies, such as AI, quantum computing, and the internet of things (IoT), were highlighted by several countries. Numerous nations expressed concerns about the increasing frequency and impact of ransomware attacks on various entities, including critical infrastructure, local governments, health institutions, and democratic institutions. Many countries emphasised the importance of international cooperation and information sharing to effectively address cybersecurity challenges. The idea of a global repository of cyber threats, as advanced by Kenya, enjoys much support in this regard.


Rules, norms and principles. Many countries mentioned that they have already begun implementing norms at the national and regional levels through their own strategies. At the same time, many of them have also signalled that clarifying the norms and providing implementation guidance is necessary. This includes norms implementation checklists, a concept that received widespread acknowledgement and support. There was also interest in delving deeper into discussions surrounding norms related to critical infrastructure (CI) and critical information infrastructure (CII). Yet again, delegations expressed different views on whether new norms are needed: While some states favoured this proposal, others strongly opposed the creation of new norms and instead called on delegates to focus on implementing existing ones.


International law. There is general agreement that the discussion on the application of international law must be deepened. There is also a difference of views on whether the ICT domain is so unique as to warrant different treatment. The elephant in the room is the question of whether a new treaty and new binding norms are needed. The law of state responsibility, the principle of due diligence, international humanitarian law, and international human rights law are also areas without consensus.


Confidence-building measures (CBMs). There’s widespread support for the global Points of Contact (PoC) directory as a valuable CBM. The OEWG members will focus on the implementation and operationalisation of the directory. Many countries prefer an incremental approach to its operationalisation, considering the diversity of regional practices. 

The next steps include: A notification from the UN Office for Disarmament Affairs (UNODA) Secretariat, as the manager of the global PoC directory, will go out very early in the year to all member states, asking them to nominate a point of contact to be included in the PoC directory. An informal online information session on the PoC directory will likely be held sometime in February. The chair noted a need for a space to continue sharing national approaches and national strategies for implementing CBMs. The OEWG will also discuss potential new global CBMs that can be added to the list.


Capacity building. Consensus exists that capacity building is a cross-cutting and urgent issue, enabling countries to identify and address threats while implementing international law and norms for responsible behaviour in cyberspace. Foundational capacities were consistently highlighted as crucial elements in ensuring cybersecurity. These include legal frameworks, the establishment of dedicated agencies, and mechanisms for incident response, with a special focus on computer emergency response teams (CERTs) and CERT cooperation. However, delegations also stressed the importance of national contexts, noting that there is no one-size-fits-all approach to building foundational capacities. Efforts should be tailored to the specific needs, legal landscape, and infrastructure of individual countries.

Delegations expressed support for the voluntary cybersecurity capacity-building checklist proposed by Singapore. The checklist aims to guide countries in enhancing their cyber capabilities, fostering international collaboration, and ensuring a comprehensive approach to cybersecurity. Multiple delegations expressed support for the Accra Call for Cyber Resilience Development set forth during the Global Conference on Cyber Capacity Building (GC3B), which seeks to strengthen cyber resilience as a vital enabler for sustainable development.

A mapping exercise in March 2024 will comprehensively survey global cybersecurity capacity building initiatives, aiming to identify gaps and avoid the duplication of efforts. It is anticipated that the results of the exercise will inform the global roundtable on capacity building scheduled for May 2024. The roundtable will serve as an opportunity to involve a range of non-state cybersecurity stakeholders to showcase ongoing initiatives, create partnerships, and facilitate a dynamic exchange of needs and solutions. 


Regular institutional dialogue. The discussions on what the future regular institutional dialogue will look like can be summarised as Programme of Action (PoA) vs OEWG. There have been some novel approaches expressed, though. 

Since the initial proposal of the PoA, there have been several changes. Supporters of the PoA suggest using the review mechanism to identify gaps in existing international law and recognise that such gaps can be filled with new norms. States underlined the action-oriented nature of the PoA, highlighting its capacity building focus. Regarding inclusivity, the PoA should allow multistakeholder participation, especially of the private sector; however, the PoA would be led by states, while stakeholders would be responsible for implementation. Another novelty is the inclusion of other initiatives, such as a PoC directory, a threat repository, and a UNIDIR implementation survey, within the future PoA architecture.

On the other hand, a group of countries submitted a working paper on a permanent OEWG, which they believe should be established right after the end of the current OEWG’s mandate. The permanent OEWG’s focus would be on the development of legally binding rules as elements of a future universal treaty on information security. The working paper suggests several principles, proposing that all decisions of the permanent OEWG be made by consensus (a crucial difference from a PoA) and that stricter rules govern stakeholder participation.

Large UN meeting room has a panel at the front and delegates seated in front of computers. A screen shows the current civil society speaker, Vladimir Radunović.
OEWG in session. Credit: Pavlina Ittelson

The midway point. The OEWG’s mandate spans 2021–2025, with 11 substantive sessions planned during this period. However, the discussions on international security at the UN span 25 years, and some of the disagreements we are seeing today are just as old. Can the OEWG 2021–2025 agree on everything (or anything)? And should it, in order to be deemed successful? We leave you with a quote from the chair himself, Amb. Burhan Gafoor: ‘Because we are midway in this process we also have to think about what is success for the OEWG and for our work. If we define our success in a New York-centric way, then I think we will not have succeeded at all. Our success as a working group will depend on whether we are able to make a difference to the situation on the ground, in capitals in different countries, small countries, developing countries, countries that need help, to deal with the challenge of ICT security.’


// DIGITAL POLICY ROUNDUP (11–18 DECEMBER) //
COP28 tackles the climate crisis through Green Digital Action
The outcomes of the Green Digital Action track include corporate agreements on reducing greenhouse gas emissions, collaboration on e-waste regulation, and strengthening industry and state collaboration on environmental sustainability standards. Read more.
Epic Games wins antitrust case against Google
A US jury ruled in favour of Epic Games in an antitrust case against the Google Play app store, finding that Google has illegal monopoly power. Read more.
Global leaders pledge for responsible AI at the 2023 GPAI Summit in New Delhi
Leaders reaffirmed commitments to responsible AI aligned with democratic values and human rights. Read more.
OpenAI partners with global news publisher Axel Springer to offer news in ChatGPT
News publisher Axel Springer has partnered with OpenAI, the owner of ChatGPT, to provide AI-generated summaries of news articles. Read more.
China’s online censors target ‘pessimism’ on digital platforms
Content moderation policy in China aims to root out pessimistic content on digital platforms as criticism of the country’s political economy grows. Read more.
Cyberattack cripples Ukraine’s biggest telecom operator
There is no indication that the incident has resulted in the compromise of subscribers’ personal data. Two hacker groups, KillNet and Solntsepyok, claimed responsibility for the attack. Read more.

// READING CORNER //

A group of MIT leaders and scholars representing various disciplines has presented a set of policy briefs aimed at assisting policymakers in effectively managing AI in society.



The OECD Digital Education Outlook for 2023 report assesses the current status of countries and potential future directions in leveraging digital transformation in education. It highlights opportunities, guidelines, and precautions for the effective and fair integration of AI in education. It includes data from a broad range of OECD countries and select partner nations.


Andrijana Gavrilovic – Author
Editor – Digital Watch; Head of Diplomatic & Policy Reporting, DiploFoundation
Virginia Paque – Editor
Senior Editor Digital Policy, DiploFoundation

2023 UNCTAD eWeek | Post-event message


UNCTAD eWeek (4–8 December) was a knowledge fair where thousands of participants from all continents gathered to exchange information, share new ideas, and gain fresh perspectives on AI, data, and e-trade.

During eWeek, Diplo and UNCTAD (with the support of the Omidyar Foundation) made history by providing hybrid reporting of rich knowledge exchange in Geneva and online.

AI and experts summarised and preserved knowledge from eWeek for future policymaking and academic research on digital developments.

The knowledge graph below depicts the breadth and depth of eWeek discussions.

Knowledge graph of UNCTAD eWeek

Each line in the graph represents knowledge linkages among topics, arguments, and insights in the corpus of 1,440,706 words – roughly 2.45 times the length of War and Peace.


You can unpack the above graph and dive deep into the eWeek knowledge ecology by consulting the reports, which feature speakers delivering statements and making arguments, visualised as a ‘knowledge traffic light’ below.


DW Weekly #139 – 11 December 2023


Dear readers,

You may have noticed we didn’t publish an issue last week, so this issue rounds up developments from the last two weeks.

We’re also changing the format a bit, to include more links to our Observatory. Do you love it or hate it? Drop us a line at digitalwatch@diplomacy.edu.

Let’s get started.

Andrijana and the Digital Watch team

// HIGHLIGHT //

EU lawmakers reach a deal on AI Act

We have covered the contentious EU discussions over the AI Act. After 36 hours of negotiations over three days (22 of which were consecutive), a provisional agreement was finally reached.

Definition. The EU definition of AI is borrowed from the OECD, which reads: ‘An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that [can] influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.’

National security exemption. The proposed regulation won’t impede member states’ national security authority and excludes AI systems for military or defence purposes.

Another exemption. It also exempts AI solely for research, innovation, or non-professional use.

General purpose AI systems and foundation models. General AI systems, especially general-purpose AI (GPAI) models, must adhere to transparency requirements, including technical documentation, compliance with the EU copyright law, and detailed summaries about training content. Stringent obligations for high-impact GPAI models with systemic risks include evaluations, risk assessments, adversarial testing, incident reporting, cybersecurity, and energy efficiency considerations. 

Foundation models. We don’t have many details here. What we know now is that the provisional agreement outlines specific transparency obligations for foundation models, large systems proficient in various tasks before they can enter the market. A more rigorous regime is introduced for high-impact foundation models characterised by advanced complexity, capabilities, and performance, addressing potential systemic risks along the value chain.

High-risk use cases. AI systems presenting only limited risk would be subject to very light transparency obligations, for example, disclosing that content was AI-generated. AI systems classified as high-risk (due to their significant potential harm to health, safety, fundamental rights, the environment, democracy, and the rule of law) will face obligations addressing issues such as data quality and technical documentation, including measures to demonstrate compliance. Citizens will be able to lodge complaints about high-risk AI systems that affect their rights.

Banned applications of AI. Some applications of AI will be banned because they carry too high a risk, i.e. they pose a potential threat to citizens’ rights and democracy. These include:

  • biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race)
  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases
  • emotion recognition in the workplace and educational institutions
  • social scoring based on social behaviour or personal characteristics
  • AI systems that manipulate human behaviour to circumvent their free will
  • AI used to exploit the vulnerabilities of people due to their age, disability, social or economic situation

Law enforcement exceptions. Negotiators reached an agreement on the use of remote biometric identification systems (RBI) in publicly accessible spaces for law enforcement, allowing post-remote RBI for targeted searches of persons convicted or suspected of serious crimes and real-time RBI with strict conditions for purposes like

  • targeted searches of victims (abduction, trafficking, sexual exploitation),
  • prevention of a specific and present terrorist threat, or
  • the localisation or identification of a person suspected of having committed one of the specific crimes mentioned in the regulation (e.g. terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organisation, environmental crime)

Additionally, changes were made to the commission proposal to accommodate law enforcement’s use of AI, introducing an emergency procedure for deploying high-risk AI tools and ensuring fundamental rights protection, with specific objectives outlined for the use of real-time remote biometric identification systems in public spaces for law enforcement purposes.

Governance. A new AI office within the commission, advised by a scientific panel on evaluating foundation models and monitoring safety risks, will oversee advanced AI models and enforce common rules across EU member states. The AI Board, representing member states, will coordinate and advise the commission, involving member states in the implementation of the regulation, while an advisory forum for stakeholders will offer technical expertise to the board.

Measures to support innovation. Regulatory sandboxes and real-world testing are promoted to enable businesses, particularly SMEs, to develop AI solutions without undue pressure from industry giants and support innovation.

Penalties. Sanctions for non-compliance range from EUR 35 million or 7% of the company’s global annual turnover for violations of the banned AI applications, to EUR 15 million or 3% for breaches of the act’s obligations, and EUR 7.5 million or 1.5% for the supply of incorrect information. More proportionate caps on administrative fines for SMEs and start-ups are in the agreement.

Group photo of EU lawmakers in a meeting room.
EU lawmakers during the last negotiation meeting. Credit: Euractiv.

Why it matters. The EU can now say it has drafted the very first Western AI law. The Spanish presidency has scored a diplomatic win, and, well, it’s good PR for everyone involved. Some countries are reportedly already reaching out to the EU for assistance in their future processes. 

The draft legislation still needs to go through a few last steps for final endorsement, but the political agreement means its key elements have been approved – at least in theory. There’s quite a lot of technical work ahead, and the act does have to go through the EU Council, where any unsatisfied countries could still throw a wrench into the works. The act will go into force two years after its adoption, which will likely be in 2026. The biggest question is: Will technology move so fast that by 2026, the AI Act will no longer be revolutionary or even effective? Will we see another development like ChatGPT that would render this regulation essentially obsolete?


// DIGITAL POLICY ROUNDUP (27 NOVEMBER–13 DECEMBER) //
Microsoft’s partnership with OpenAI faces antitrust scrutiny in the USA and the UK
The US Federal Trade Commission (FTC) is conducting preliminary examinations of Microsoft’s investment in OpenAI to determine if it violates antitrust laws. Read more.
ECB study warns rapid AI adoption could impact wages, not jobs
The adoption of AI could impact wages, but would not be a concern for job security, according to research by the European Central Bank (ECB). Read more.
EU Council adopts Data Act
The act sets principles of data access, portability, and sharing for users of IoT products. Read more.
ITU report: uneven progress in bridging the global digital divide
The International Telecommunication Union’s (ITU) Facts and Figures 2023 report reveals that global internet connectivity is progressing steadily but unevenly, highlighting the disparities of the digital divide. Read more.
India launches global repository for digital public infrastructures post G20
This launch comes after all G20 member states expressed their support for digital public infrastructure policy initiatives at the 2023 G20 summit. Read more.
European Commission launches Chips Joint Undertaking under the European Chips Act
The European Commission has enacted the Chips JU plan and formed the European Semiconductor Board to advise on implementing the European Chips Act and fostering international collaboration. Read more.
Meta’s Oversight Board to review handling of violent content in Israel-Hamas conflict cases
Meta’s Oversight Board will focus on a video depicting the aftermath of a Gaza hospital explosion and another featuring a kidnapped woman. Read more.

// IN CASE YOU MISSED IT //

UNCTAD eWeek 2023 reports

Last week, we had the honour of being the official reporting partner of UNCTAD for the 2023 edition of eWeek. We reported from 127 sessions, spanning 7 days, 2 hours, 17 minutes, and 56 seconds, with a whopping 1,440,706 words. Visit the DW’s dedicated UNCTAD page to read the session reports. You can also register to receive a personalised AI report from the event!


Call for Applications: C4DT Digital Trust Policy Fellowship

The Center for Digital Trust (C4DT) is launching the second round of its Digital Trust Policy Fellowship Program, seeking recent MSc or PhD graduates, global thinkers, and tech enthusiasts with backgrounds in computer science or engineering. The program looks for individuals with innovative minds, ambitious self-starters ready to tackle challenges in privacy, cybersecurity, AI, and machine learning, and aspiring policy writers with excellent analytical and communication skills. The deadline for applications is 31 January 2024.


// THE WEEK AHEAD //

20 November–15 December. The ITU World Radiocommunication Conference, which aims to review and revise the international treaty governing the use of the radio-frequency spectrum and the geostationary-satellite and non-geostationary-satellite orbits, will conclude on 15 December. 

11–12 December. The 12th edition of the Global Blockchain Congress will take place in Dubai, UAE, under the theme ‘Will the Next Bull Market Be Different?’

11–15 December. The second UN Open-Ended Working Group (OEWG), on security of and in the use of ICTs, will hold its sixth substantive session on 11–15 December. The OEWG is tasked with studying existing and potential threats to information security, possible confidence-building measures, and capacity building. It should also further develop rules, norms, and principles of responsible behaviour of states, discuss ways of implementing them, and explore the possibility of establishing regular open-ended institutional dialogue under the auspices of the UN.

12–14 December. The Global Partnership on AI Summit 2023 will bring together experts to foster international cooperation on various AI issues. GPAI working groups will also showcase their work around responsible AI, data governance, the future of work, and innovation and commercialisation.

12–14 December. Jointly organised by ITU and the European Commission with the co-organisational support of the Accessible Europe Resource Centre, Accessible Europe: ICT 4 All – 2023 aims to explore the areas where accessibility gaps persist and identify what best practices can be replicated for broader impact.

13–15 December. The Council of Europe’s Octopus Conference 2023 will focus on securing and sharing electronic evidence and capacity building on cybercrime and electronic evidence, specifically the impact the Cybercrime Programme Office has made during the last ten years and the next steps.

14 December. The launch of the UN Institute for Disarmament Research (UNIDIR) report on ‘International Security in 2045: Exploring Futures for Peace, Security and Disarmament’ will be held in a hybrid format on 14 December 2023 at the Palais des Nations in Geneva, Switzerland.


// READING CORNER //

ChatGPT: A year in review
ChatGPT recently turned one – delve into the trends it brought forward, which have shaped both industries and regulatory frameworks.



Geneva Manual on Responsible Behaviour in Cyberspace

The manual, which focuses on the roles and responsibilities of non-state stakeholders in implementing two UN cyber norms related to supply chain security and responsible reporting of ICT vulnerabilities, was launched by the Geneva Dialogue on Responsible Behaviour in Cyberspace last week.



Digital Watch Monthly December issue

In the December issue of the Digital Watch Monthly, we describe the four seasons of AI, summarise EU lawmakers’ negotiations on the AI Act, examine what Q* and Gemini mean for AGI, and delve into the delicate balance between combating online hate and preserving freedom of speech.


Andrijana Gavrilovic – Author
Editor – Digital Watch; Head of Diplomatic & Policy Reporting, DiploFoundation
Virginia Paque – Editor
Senior Editor Digital Policy, DiploFoundation

Digital Watch newsletter – Issue 85 – December 2023


Snapshot: What’s making waves in digital policy?

AI governance

Google and Anthropic have announced an expanded partnership, encompassing joint efforts on AI safety standards, committing to the highest standards of AI security, and using TPU chips for AI inference.

Google has unveiled ‘The AI Opportunity Agenda,’ offering policy guidelines for policymakers, companies, and civil societies to collaborate in embracing AI and capitalising on its benefits.

The OECD launched the AI Incidents Monitor, which offers comprehensive policy analysis and data on AI incidents, shedding light on AI’s impacts to help shape informed AI policies.

US President Joe Biden and Chinese President Xi Jinping, on the sidelines of the Asia-Pacific Economic Cooperation (APEC) Leaders’ Week, agreed on the need ‘to address the risks of advanced AI systems and improve AI safety through USA-China government talks’.

The Italian Data Protection Authority (DPA) initiated a fact-finding inquiry to assess whether online platforms have implemented sufficient measures to stop AI platforms from scraping personal data for training AI algorithms. 

Switzerland’s Federal Council has tasked the Department of the Environment, Transport, Energy, and Communications (DETEC) with providing an overview of potential regulatory approaches for AI by the end of 2024. The council aims to use the analysis as a foundation for an AI regulatory proposal in 2025.

Technologies

Yangtze Memory Technologies Co (YMTC), China’s largest memory chip manufacturer, filed a lawsuit against Micron Technology and its subsidiary for violating eight patents. Under the EU-India Trade and Technology Council (TTC) framework, the EU and India signed a Memorandum of Understanding on working arrangements in the semiconductor ecosystem, its supply chain, and innovation. Air taxi manufacturers Joby Aviation and Volocopter showcased their electric aircraft in New York. Amazon introduced Q, an AI-driven chatbot tailored for its Amazon Web Services (AWS) customers, serving as a versatile solution catering to business intelligence and programming needs.

Security

The UK, the USA, and 16 other partners have released the first global guidelines to enhance cybersecurity throughout the life cycle of an AI system. The guidelines span four key areas within the life cycle of the development of an AI system: secure design, secure development, secure deployment, and secure operation and maintenance.

The EU Parliament and the EU Council reached a political agreement on the Cyber Resilience Act. The agreement will now be subject to formal approval by the parliament and the council.

Infrastructure

The EU’s Gigabit Infrastructure Act (GIA) is undergoing significant alteration as the ‘tacit approval principle,’ designed to expedite the deployment of broadband networks, has been excluded from the latest compromise text circulated by the Spanish presidency of the EU Council. ICANN launched its Registration Data Request Service (RDRS) to simplify requests for access to nonpublic registration data related to generic top-level domains (gTLDs). 

The International Telecommunication Union (ITU) has adopted ITU-R Resolution 65, which aims to guide the development of a 6G standard. This resolution enables studies on the compatibility of current regulations with potential 6th-generation international mobile telecommunications (IMT) radio interface technologies for 2030 and beyond. 

The Indian government has launched its Global Digital Public Infrastructure Repository and created the Social Impact Fund to advance digital public infrastructure in the Global South as part of its G20 initiatives.

The EU Council adopted the Data Act, setting principles of data access, portability, and sharing for users of IoT products. OpenAI has initiated the Copyright Shield, a program specifically covering legal expenses for its business customers who face copyright infringement claims stemming from using OpenAI’s AI technology.

Internet economy

Apple, TikTok, and Meta appealed against their gatekeeper classification under the EU Digital Markets Act (DMA), which aims to enable user mobility between rival services like social media platforms and web browsers. Conversely, Microsoft and Google have opted not to contest the gatekeeper label. The US Treasury reached a record $4.2 billion settlement with Binance, the world’s largest virtual currency exchange, for violating anti-money laundering and sanctions laws, mandating a five-year monitoring period and rigorous compliance measures. Australia’s regulator called for a new competition law for digital platforms due to their growing influence.

Digital rights

The Court of Justice of the EU (CJEU) ruled that data subjects have the right to appeal the decision of the national supervisory authority regarding the processing of their personal data.

Content policy

Nepal decided to ban TikTok, citing the disruption of social harmony caused by the misuse of the popular video app. YouTube introduced a new policy that requires creators to disclose the use of generative AI. OpenAI and Anthropic have joined the Christchurch Call to Action, a project started by French President Emmanuel Macron and then New Zealand Prime Minister Jacinda Ardern to suppress terrorist content. X (formerly Twitter) is on the EU Commission’s radar for having significantly fewer content moderators than its rivals.

Development

ITU’s Facts and Figures 2023 report reveals uneven progress in global internet connectivity, which exacerbates the disparities of the digital divide, particularly in low-income countries. Switzerland announced plans for a new state-run digital identity system, slated for launch in 2026, after voters rejected a private initiative in 2021 due to personal data protection concerns. Indonesia’s Ministry of Communication and Information introduced a new policy on digital identity, which will later require all citizens to have a digital ID.

THE TALK OF THE TOWN – GENEVA

The World Economic Forum (WEF) held its Annual Meeting on Cybersecurity 2023 on 13–15 November, assembling over 150 leading cybersecurity experts. Building on WEF’s Global Cybersecurity Outlook 2023 report released in January 2023, the annual meeting provided a space for experts to address growing cyber risks with strategic, systemic approaches and multistakeholder collaborations.

The 12th UN Forum on Business and Human Rights took place from 27 to 29 November, focusing on the actual changes that have been made  by states and businesses to implement the UN Guiding Principles on Business and Human Rights (UNGPs) standards. Among the discussed topics was the improvement in disability rights implementation via advancements in assistive technologies, AI and digitalisation, and other care and support systems. 

Held in conjunction with the 12th UN Forum on Business and Human Rights, the UN B-Tech Generative Summit on 30 November explored the undertaking of due diligence in human rights when putting AI into practice. The full-day summit presented the B-Tech Project’s papers on human rights and generative AI and provided a platform for all stakeholders to discuss the practical uses of the UN Guiding Principles on Business and Human Rights (UNGP) and other human-rights-based approaches in analysing the impacts of generative AI.


The four seasons of AI

ChatGPT, a revolutionary creation by OpenAI launched on 30 November 2022, has not only captivated the tech world but also shaped the narrative around AI. As ChatGPT marks its first anniversary, it prompts a collective step back to reflect on the journey so far and consider what lies ahead. 

A symbolic journey through the seasons has been a compelling backdrop to AI’s trajectory since last November. The winter of excitement saw rapid user adoption, surpassing even social media giants with its pace. Within 64 days, ChatGPT amassed an astounding 100 million users, a feat that Instagram, for instance, took 75 days to achieve. The sudden surge in interest in generative AI has taken major tech companies by surprise. In addition to ChatGPT, several other notable generative AI models, such as Midjourney, Stable Diffusion, and Google’s Bard, have been released.

The subsequent spring of metaphors ushered in a wave of imaginative comparisons and discussions on AI governance. Anthropomorphic descriptions and doomsday scenarios emerged, reflecting society’s attempts to grapple with the implications of advanced AI.

As ChatGPT entered its contemplative summer of reflection, a period of introspection ensued. Drawing inspiration from ancient philosophies and cultural contexts, the discourse broadened beyond mere technological advancements. The exploration of wisdom from Ancient Greece to Confucius, India, and the Ubuntu concept in Africa sought answers to the complex challenges posed by AI, extending beyond simple technological solutions.

Now, in the autumn of clarity, the initial hype has subsided, making room for precise policy formulations. AI has secured its place on the agendas of national parliaments and international organisations. In policy documents from various groups like the G7, G20, G77, and the UN, the balance between opportunities and risks has shifted towards a greater focus on risks. The long-term existential threats of AI have taken centre stage in conferences like the UK AI Safety Summit, with governance proposals drawing inspiration from entities like the International Atomic Energy Agency (IAEA), CERN, and the Intergovernmental Panel on Climate Change (IPCC).

What lies ahead? We should focus on the two main issues at hand: how to address AI risks and what aspects of AI should be governed.

In managing AI risks, a comprehensive understanding of three categories – immediate knowns, looming unknowns, and long-term unknowns – is crucial for shaping effective regulations. While short-term risks like job loss and data protection are familiar and addressable with existing tools, mid-term risks involve potential monopolies controlling AI knowledge, demanding attention to avoid dystopian scenarios. Long-term risks encompassing existential threats dominate public discourse and policymaking, as seen in the Bletchley Declaration. Navigating the AI governance debate requires transparently addressing risks and prioritising decisions based on societal responses.

Regarding the governance of AI aspects, current discussions revolve around computation, data, algorithms, and applications. Computation aspects involve the race for powerful hardware, with geopolitical implications between the USA and China. The data, often called the oil of AI, demands increased transparency regarding its usage. Algorithmic governance, which is focused on long-term risks, centres on the relevance of weights in AI models. At the apps and tools level, the current shift from algorithmic to application-focused regulations may significantly impact technological progress. Debates often overlook data and app governance, areas detailed in regulation but not aligned with tech companies’ interests.


This text is inspired by Dr Jovan Kurbalija’s Recycling Ideas blog series. It’s a collection of concepts, traditions, and thoughts aimed at constructing a social contract suitable for the AI era.


EU lawmakers warring over the bloc’s AI Act

After more than 22 hours of the initial trilogue negotiations in the EU on 6 and 7 December, encompassing an agenda of 23 items, agreement on the AI Act remains elusive. Here’s what reports point to. 

Foundation models. The negotiations hit a significant snag when France, Germany, and Italy spoke out against the tiered approach initially envisioned in the EU AI Act for foundation models (base models for developers). The tiered approach would mean categorising AI into different risk bands, with more or less regulation depending on the risk level. France, Germany, and Italy want to regulate only the use of AI rather than the technology itself, to ensure that AI innovation in the EU is not stifled. They proposed ‘mandatory self-regulation through codes of conduct’ for foundation models. European Parliament officials walked out of a meeting to signal that leaving foundation models out of the law was not politically acceptable.

According to a compromise document seen by Euractiv, the tiered approach was retained in the text of the act. However, the legislation would not apply to general-purpose AI (GPAI) systems offered under free and open-source licenses. This exemption can be nullified if the open-source model is put into commercial use. At the same time, lawmakers agreed that the codes of conduct would serve as supplementary guidelines until technical standards are harmonised.

According to the preliminary agreement, any model that was trained using computing power greater than 10^25 floating point operations (FLOPs) will be automatically categorised as having systemic risks.

These models would face extensive obligations, including evaluation, risk assessment, cybersecurity, and energy consumption reporting. 
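As a purely illustrative sketch of that compute cut-off in Python (the act defines no code; the function name and the example figures below are our own assumptions):

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold from the preliminary agreement

def has_systemic_risk(training_compute_flops: float) -> bool:
    """Hypothetical check: classify a model as presenting systemic risk
    based on the total compute used to train it."""
    return training_compute_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Example (figures invented for illustration): a model trained with
# ~2 x 10^25 FLOPs would be captured, one with ~5 x 10^24 FLOPs would not.
print(has_systemic_risk(2e25))  # True  -> evaluations, risk assessment, etc.
print(has_systemic_risk(5e24))  # False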

An EU AI office will be established within the commission to enforce foundational model rules, with national authorities overseeing AI systems through the European Artificial Intelligence Board (EAIB) for consistent application of the law. An advisory forum will gather feedback from stakeholders. A scientific panel of independent experts will advise on enforcement, identify systemic risks, and contribute to the classification of AI models.

Contentious issues. While approximately ten issues remain unresolved on the agenda, the primary obstacles revolve around prohibitions, remote biometric identification, and the national security exemption.

Prohibitions. So far, lawmakers have tentatively agreed on prohibiting manipulative techniques, systems exploiting vulnerabilities, social scoring, and indiscriminate scraping of facial images. At the same time, the European Parliament has proposed a much longer list of prohibited practices and is facing a strong pushback from the council.

Remote biometric identification. On the issue of remote biometric identification, including facial recognition in public spaces, members of the European Parliament (MEPs) are pushing for a blanket ban on biometric categorisation systems based on sensitive personal traits, including race, political opinions, and religious beliefs. At the same time, member states are pushing for exemptions to use biometric surveillance when there is a threat to national security. 

National security exemption. France, leading the EU countries, advocates for a broad national security exemption in AI regulations, emphasising member states’ discretion in military, defence, and national security issues. However, this will likely face resistance from progressive lawmakers, who are expected to advocate for an outright ban.

What now? If the EU doesn’t pass the AI Act in 2023, it might lose its chance to establish the gold standard of AI rules. Spain, in particular, is eager to achieve this diplomatic win under its presidency. The Spanish presidency offered MEPs a package deal close to the council’s position, and despite tremendous pressure, the centre-to-left MEPs did not accept it. Negotiations are still ongoing, though. Now we wait.



Higher stakes in the race for AGI? 

The buzz around OpenAI’s November saga has been nothing short of gripping, and we’ve been right in the thick of it, following every twist and turn. 

In summary, OpenAI CEO Sam Altman was ousted from the company because he ‘was not consistently candid in his communications’ with the board. Most of OpenAI’s workforce, approximately 700 out of 750 employees, expressed their intention to resign and join Altman at Microsoft, prompting his reinstatement as CEO. Additionally, OpenAI’s board changed some of its members.

Reports of (and speculation about) Q* swiftly surfaced. Reuters reported that Altman was dismissed partly because of Q*, an AI project allegedly so powerful that it could threaten humanity.

Q* can supposedly solve certain math problems. Although its mathematical prowess is on the level of grade-school students (roughly the first six to eight grades), this could be a potential breakthrough in artificial general intelligence (AGI), as it suggests a higher reasoning capacity. OpenAI sees AGI as AI that aims to surpass human capabilities in economically valuable tasks.

Upon his return as CEO, Altman’s comment about Q* was: ‘No particular comment on that unfortunate leak.’  

The news has caused quite a stir, with many wondering what exactly Q* is, if it even exists. Some savvy observers think Q* might be tied to a project OpenAI showcased in May, which touted ‘process supervision’ – a technique that trains AI models to crack problems step-by-step.

Some theorise the Q* project might blend Q-learning (i.e. a type of reinforcement learning where a model iteratively learns and improves over time by being rewarded for taking the correct action) with A* search, an algorithm for finding an efficient path between two points.

Sidenote: How do you reward AI? By using a reward function, which gives numerical values to an AI agent as a reward or punishment for its actions, Diplo’s AI team explained. For example, if you want an AI agent to learn how to get from point A to point B, you can give it +1 for each step in the right direction, -1 for each step in the wrong direction, and +10 for reaching point B. Since the AI agent tries to maximise the value of the reward function, it will learn to take steps in the right direction.
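To make that concrete, here is a minimal, self-contained Q-learning sketch in Python using exactly this +1/-1/+10 scheme on a one-dimensional walk from point A to point B. Everything in it (the states, parameters, and helper names) is our own illustration of the textbook technique, not anything from OpenAI’s unconfirmed Q*:

import random

# Walk from point A (state 0) to point B (state 5), one step at a time.
GOAL = 5
ACTIONS = [-1, +1]              # step away from B, or step towards B
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

# Q-table: one learned value per (state, action) pair, all starting at 0
Q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}

def step(state, action):
    """Apply an action and return (next_state, reward) per the scheme above."""
    nxt = max(0, min(GOAL, state + action))
    if nxt == GOAL:
        return nxt, 10.0        # +10 for reaching point B
    return nxt, 1.0 if nxt > state else -1.0  # +1 towards B, -1 otherwise

for episode in range(200):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        # Q-learning update: nudge Q towards reward + discounted best future value
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# After training, the greedy policy steps towards B from every state
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)])

Run it, and the printed policy converges to always stepping towards B – exactly the behaviour the reward scheme was designed to teach.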

Others posited that the name Q* might reference the Q* search algorithm, which was developed to control deductive searches in an experimental system. 

Google joins the race. The beginning of December saw the launch of Google’s Gemini, an AI model that, according to Google, has outperformed human experts on massive multitask language understanding (MMLU), a benchmark designed to measure AI’s knowledge of math, history, law, and ethics. This model reportedly can outperform GPT-4 in grade-school math. However, Google has declined to comment on Gemini’s parameter counts.

Is this all really about AGI? Well, it’s hard to tell. On the one hand, AI surpassing human capabilities sounds like a dystopia is ahead (why does no one ever think it might be a utopia?). On the other hand, experts say that even if an AI could solve math equations, it wouldn’t necessarily translate to broader AGI breakthroughs.

What are all these speculations really about? Transparency – and not only at OpenAI and Google. We need to understand who (or what) will shape our future. Are we the leading actors or just audience members waiting to see what happens next?


Balancing online speech: Combating hate while preserving freedom

The ongoing battle to prevent and combat online hate speech while protecting freedom of expression has led the EU Agency for Fundamental Rights (FRA) to call for ‘appropriate and accurate content moderation’.

FRA has published a report on the challenges in detecting online hate speech against people of African descent, Jewish people, Roma, and others on digital platforms, including Telegram, X (formerly known as Twitter), Reddit, and YouTube. Data were collected from Bulgaria, Germany, Italy, and Sweden to provide a comparative analysis based on their current national policies. FRA called on regulators and digital platforms to ensure a safer space for people of African descent, Jews, and Roma, who were found to experience very high levels of hate speech and cyber harassment. Additionally, FRA drew attention to the need for effective content moderation regulation for women, as there are higher levels of incitement to violence against them compared to other groups.


Is the DSA enough to ensure content moderation in the EU? While the Digital Services Act (DSA) is considered a big step in moderating online hate speech, FRA claims its effect is yet to be seen. According to FRA, clarification is needed about what is regarded as hate speech, including training for law enforcement, content moderators, and flaggers on the legal thresholds for identifying hate speech. This training should also ensure that platforms do not over-remove content.

UNESCO’s guidelines. UNESCO’s Director-General, Audrey Azoulay, sounded an alarm about the surge in online disinformation and hate speech, labelling them a ‘major threat to stability and social cohesion’. In response, UNESCO published guidelines for the governance of digital platforms to combat online disinformation and hate speech while protecting freedom of expression. The guidelines include establishing independent public regulators in countries worldwide, ensuring linguistically diverse moderators on digital platforms, prioritising transparency in media financing, and promoting critical thinking.

The importance of civil society. Since the Israeli-Palestinian war began, posts about Palestine and content removals have reached an ‘unprecedented scale’, said Jillian York, the director for international freedom of expression at the Electronic Frontier Foundation (EFF). Thus, several Palestinian human rights advocacy groups initiated the ‘Meta: Let Palestine Speak’ petition, calling on the tech giant to address the unfair removal of Palestinian content.

And, of course, AI. As FRA’s report found, human-based content assessment often relies on biased and discriminatory parameters. AI moderation is not immune to such bias either, as seen in Meta’s auto-translation, which applied the term ‘terrorist’ to Palestinian users who had an Arabic phrase in their bios, for which Meta publicly apologised in October 2023.


Launch of the Geneva Manual on Responsible Behaviour in Cyberspace

The recently launched Geneva Manual focuses on the roles and responsibilities of non-state stakeholders in implementing two UN cyber norms related to supply chain security and responsible reporting of ICT vulnerabilities. 

The manual was drafted by the Geneva Dialogue on Responsible Behaviour in Cyberspace, an initiative established by the Swiss Federal Department of Foreign Affairs and led by DiploFoundation with the support of the Republic and State of Geneva, the Center for Digital Trust (C4DT) at the Swiss Federal Institute of Technology in Lausanne (EPFL), Swisscom, and UBS.

The Geneva Dialogue plans to expand the manual by discussing the implementation of additional norms in the coming years.

The manual is a living document, open for engagement and enrichment. Visit genevadialogue.ch to contribute your thoughts, ideas, and suggestions, as we chart a course toward a more secure and stable cyberspace. 

Cartoon schematic shows a flow from a contentions oral meeting on cyber norms to a more detailed research analysis with exclamations changing to realisation and agreement, to a summary schematic superimposed over a turtle, alongside a report on cyber norms and a book with a bookmark labelled Geneva manual, ending in a drawing of a turtle with a world globe on its back, with the word ‘Secure!’

DW Weekly #138 – 27 November 2023

DigWatch Weekly. Capturing top digital policy news worldwide

Dear all,

Negotiations on the EU AI Act face challenges as France, Germany, and Italy oppose tiered regulation for foundation AI models. OpenAI’s leadership changes and alleged project Q* raise transparency concerns. The UK, USA, and partners released global AI system development guidelines. Italy is investigating AI data collection, while Switzerland is exploring regulatory approaches. India warned social media giants about the spread of deepfakes and misinformation. The US Treasury imposed record penalties on Binance, and the Australian regulator called for regulatory reform of digital platforms.

Let’s get started.

Andrijana and the Digital Watch team


// HIGHLIGHT //

EU warring on AI Act

The negotiations on the EU AI Act have hit a significant snag, as France, Germany, and Italy spoke out against the tiered approach initially envisioned in the EU AI Act for foundation models. These three countries asked the Spanish presidency of the EU Council, which negotiates on behalf of member states in the trilogues, to retreat from the approach.

The tiered approach would mean categorising AI into different risk bands, with more or less regulation depending on the risk level. 

France, Germany, and Italy want to regulate only the use of AI rather than the technology itself, and propose ‘mandatory self-regulation through codes of conduct’ for foundation models.

To implement the use-based approach, developers of foundation models would have to define model cards – documents that provide information about machine learning models, detailing various aspects such as their intended use, performance characteristics, limitations, and potential biases.

An EU AI governance body could contribute to formulating guidelines and overseeing the implementation of model cards that give detailed contextual information.
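For illustration, here is a minimal sketch in Python of what such a model card might contain, together with the kind of completeness check a governance body could run. The field names and values are our own assumptions; the non-paper does not prescribe a concrete schema:

# A minimal, hypothetical model card expressed as a Python dict.
# Every field and value here is illustrative only.
model_card = {
    "model_name": "example-foundation-model",  # hypothetical name
    "intended_use": "General-purpose text generation for business applications",
    "out_of_scope_uses": ["medical diagnosis", "biometric identification"],
    "performance": "Developer-reported evaluation results; not independently verified",
    "limitations": [
        "May produce factually incorrect output",
        "Limited coverage of low-resource languages",
    ],
    "known_biases": "Training data skews towards English-language web text",
}

# A governance body could then verify that the mandatory fields are present.
REQUIRED_FIELDS = {"intended_use", "performance", "limitations", "known_biases"}
missing = REQUIRED_FIELDS - model_card.keys()
print("Model card complete" if not missing else f"Missing fields: {missing}")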

A hard ‘no’ from the European Parliament. However, European Parliament officials walked out of a meeting to signal that leaving foundation models out of the law was not politically acceptable. 

A suggested compromise. The European Commission circulated a possible compromise: Bring back a two-tiered approach, watering down the transparency obligations and introducing a non-binding code of conduct for the models that pose a systemic risk. Further negotiations are expected to centre around this proposal.

Still a no from the European Parliament. The Parliament is not budging: It is not willing to accept self-regulation and only accepts the idea of EU codes of practice as a complementary element to the horizontal transparency requirements for all foundation models.

Chart details the content of five different AI Act proposals: the IT/FR/DE Non-Paper, the White House Executive Order, the Spanish Presidency Compromise Proposal, Parliament’s Adopted Position, and the Council’s Adopted Position grouped under the five broad areas of Required safety obligations, Compute-related monitoring, Governance body oversight, code of conduct, and Information sharing.
A comparison of key AI Act proposals. Source: Future of Life Institute

Why is it relevant? 

The Franco-German-Italian non-paper and the Commission’s proposed compromise have sparked concerns that the largest foundation models will remain underregulated in the EU. Add a time constraint to that: Policymakers hoped to finalise the act at a meeting scheduled for 6 December, but the chances of that are currently looking slim. If the EU doesn’t pass the AI Act in 2023, it may lose its chance to establish the gold standard of AI rules.


Digital policy roundup (20–27 November)

// AI //

OpenAI – Last week’s episode

Much has been written about what transpired at OpenAI last week. We have followed the developments, too.

Here’s the quickest recap of the situation on the internet. OpenAI CEO Sam Altman was ousted from the company because he ‘was not consistently candid in his communications’ with the board. Mira Murati took over as Interim CEO. Altman then joined Microsoft. The OpenAI board proceeded to appoint Twitch co-founder Emmett Shear as interim CEO. Approximately 700 out of 750 OpenAI staff sent a letter to the board claiming they would resign from the company over the debacle and join Altman at Microsoft. Altman came back as CEO and OpenAI’s board changed some of its members.

And here’s the most exciting part. Reuters reported that Altman was dismissed partly because of Q*, an AI project allegedly so powerful that it could threaten humanity. 

Q* can supposedly solve certain math problems, suggesting a higher reasoning capacity. This could be a potential breakthrough in artificial general intelligence (AGI), which OpenAI describes as AI that surpasses human capabilities in economically valuable tasks.

Why is it relevant? The news has caused quite a stir, with many wondering what exactly Q* is, if it even exists. Is this really about AGI? Well, it’s hard to tell. On the one hand, AI surpassing human capabilities sounds like a dystopia ahead (why does no one ever think it might be a utopia?). On the other hand, since the company hasn’t commented so far, it’s best not to buy into the hype yet.

But what this is definitely about is transparency – and not only at OpenAI. We all need to understand who (or what) it is that shapes our future. Are we mere bystanders?

Drawing of a game board with different-coloured marker pieces waiting for the three dice being tossed by a human hand to signal the next move. Some marker pieces have chat bubbles with icons indicating surprise, intellectual thought or AI implications, justice, economics, agreement, and globalisation.
The grand game of addressing AI for the future of humanity. Who holds the dice? Credit: Vladimir Veljašević

UK, USA, and 16 other partners publish guidelines for secure AI system development

In collaboration with 16 other countries, the UK and the USA have released the first global guidelines to enhance cybersecurity throughout the life cycle of an AI system.

The guidelines were developed by the US Cybersecurity and Infrastructure Security Agency (CISA) and the UK National Cyber Security Centre (NCSC) with international partners, while major companies such as Amazon, Anthropic, Google, IBM, Microsoft, and OpenAI contributed.

The guidelines span four key areas within the life cycle of the development of an AI system: secure design, secure development, secure deployment, and secure operation and maintenance.

  1. The section about the secure design stage focuses on understanding risks, threat modelling, and considerations for system and model design. 
  2. The section on the secure development stage includes guidelines for supply chain security, documentation, and managing assets and technical debt. 
  3. The secure deployment stage section emphasises protecting infrastructure and models, developing incident management processes, and ensuring responsible release. 
  4. The secure operation and maintenance stage section provides guidelines for actions relevant after deployment, such as logging, monitoring, update management, and information sharing.

Why is it relevant? The considerable number of signatory institutions from around the world indicates a growing consensus on the importance of securing AI technologies.

Graphic shows a humanoid AI in front of a half-circle world map showing various icons representing technology and networks. Outside the half circle, icons for the sun and clouds are shown, with one of the clouds representing a cloud network.
Image credit: NCSC.

Italy’s DPA launches investigation into data collection for AI training

The Italian Data Protection Authority (DPA) is initiating a fact-finding inquiry to assess whether online platforms have put in place sufficient measures to stop AI platforms from scraping personal data for training AI algorithms. The investigation will cover all public and private entities operating as data controllers, established or providing services in Italy. The DPA has invited trade associations, consumer groups, experts, and academics to offer their input on security measures currently in place and those that could be adopted to prevent the extensive collection of personal data for training purposes.

Why is it relevant? Italy’s DPA takes privacy very seriously: It even imposed a (temporary) limitation on ChatGPT earlier this year. The authority stated it would adopt the necessary measures once the investigation is concluded, and we have no doubt it won’t be pulling any punches.


Switzerland examines regulatory approaches for AI

Switzerland’s Federal Council has tasked the Department of the Environment, Transport, Energy, and Communications (DETEC) with providing an overview of potential regulatory approaches for AI by the end of 2024.

Those approaches must align with existing Swiss law and be compatible with the upcoming EU AI Act and the Council of Europe AI Convention. The council aims to use the analysis as a foundation for an AI regulatory template in 2025.




// CONTENT POLICY //

India’s government issues warning to social media giants on deepfakes and misinformation

The Indian government has issued a warning to social media giants, including Facebook and YouTube, regarding the dissemination of content that violates local laws. The government is particularly concerned about harmful content related to children, obscenity, and impersonation, with a focus on deepfakes. 

The government emphasised the non-negotiable nature of these regulations, stressed the need for continuous user reminders about content restrictions, and warned of potential government directives in case of non-compliance. Social media platforms have reportedly agreed to align their content policies with government regulations in response to these concerns.


// CRYPTO //

US Treasury hits Binance with record-breaking penalties for money laundering and sanctions violations

The US Department of the Treasury, alongside various enforcement agencies, took unprecedented action against Binance Holdings Ltd., the world’s largest virtual currency exchange, for violating anti-money laundering (AML) and sanctions laws. 

Binance admitted to operating as an unregistered money services business, disregarding anti-money laundering protocols, bypassing customer identity verification, failing to report suspicious transactions including those involving terrorist groups, ransomware, child sexual exploitation, and other illicit activities, and facilitating trades between US users and sanctioned jurisdictions. 

Binance reached a settlement with the US government, including a historic $4.3 billion payment, a five-year monitoring period, and stringent compliance obligations. Binance also agreed to exit the US market entirely and to comply with sanctions. Failure to meet these terms could result in further substantial penalties.

Why is it relevant? Because it sends a strong message that the cryptocurrency industry must adhere to the rules of the US financial system or face government action.


// COMPETITION //

Australian regulator calls for new competition laws for digital platforms

The Australian Competition and Consumer Commission (ACCC) has emphasised the urgent need for regulatory reform in response to the expanding influence of major digital platforms, including Alphabet (Google), Amazon, Apple, Meta, and Microsoft. The ACCC’s seventh interim report from the Digital Platform Services Inquiry underscores the risks associated with these platforms extending into various markets and technologies, potentially harming competition and consumers. While acknowledging the benefits of digital platforms, the report highlights concerns about invasive data collection practices, consumer lock-in, and anti-competitive behaviour.

The report further explores the impact of digital platforms on emerging technologies, emphasising the need for adaptable competition laws to address evolving challenges in the digital economy. 

The ACCC suggests updating competition and consumer laws, introducing targeted consumer protections, and implementing service-specific codes to mitigate these risks and ensure effective competition in evolving digital markets. 

Why is it relevant? The concerns raised by the ACCC are not unique to Australia. Regulatory reforms in Australia could set a precedent for other jurisdictions grappling with similar issues.

Cover page of the seventh Digital Platform Services Inquiry interim report, September 2023.
Image credit: ACCC.

The week ahead (27 November–4 December)

27–29 November: The 12th UN Forum on Business and Human Rights is taking place in a hybrid format to discuss effective change in implementing obligations, responsibilities, and remedies.

29–30 November: The inaugural Global Conference on Cyber Capacity Building (GC3B) will be held under the theme of cyber resilience for development and will culminate with the announcement of the Accra Call: a global action framework that supports countries in strengthening their cyber resilience. 

30 November: Held in conjunction with the UN Business and Human Rights Forum, the UN B-Tech Generative AI Summit: Advancing Rights-Based Governance and Business Practice will explore practical applications of the UN Guiding Principles on Business and Human Rights and facilitate discussions on implementing these principles for generative AI and other general-purpose AI.

4–8 December: UNCTAD eWeek 2023 will address pivotal questions about the future of the digital economy: What does the future we want for the digital economy look like? What is required to make that future come true? How can digital partnerships and enhanced cooperation contribute to more inclusive and sustainable outcomes? Bookmark our dedicated eWeek 2023 page on the Digital Watch Observatory or download the app to read reports from the event. In addition to providing just-in-time reporting from the eWeek, Diplo will also be involved in several activities throughout the event.


#ReadingCorner
The cover of Diplo’s AI Seasons, Autumn 2023 edition, highlights the article ‘How can legal wisdom from 19th-century Montenegro and Valtazar Bogišić help AI regulation’ by Jovan Kurbalija. It has the word humAInism in the lower right corner.

How can legal wisdom from 19th-century Montenegro and Valtazar Bogišić help AI regulation?

Jovan Kurbalija explores the implications of the 1888 Montenegrin Civil Code for the AI era. He argues that AI governance, much like the Montenegrin Civil Code, is about integrating tradition with modernity.


Andrijana Gavrilovic – Author
Editor – Digital Watch; Head of Diplomatic & Policy Reporting, DiploFoundation
Virginia Paque – Editor
Senior Editor Digital Policy, DiploFoundation

Digital Watch newsletter – Issue 84 – November 2023

Cover of the November 2023 French-language newsletter, with an illustration of a tree representing artificial intelligence, in a pot, being watered and its branches shaped by one woman and two men.

Observatory

Snapshot: What’s making waves in digital policy

Geopolitics

The Bureau of Industry and Security (BIS) of the US Department of Commerce (DoC) announced tighter restrictions on the export of advanced semiconductors to China and other countries subject to arms embargoes. The decision triggered a sharp reaction from China, which described the measures as unilateral bullying and an abuse of export control mechanisms.

To further complicate the China–US tech landscape, the US government is considering restricting Chinese companies’ access to cloud computing services. If implemented, the measure could have significant consequences for both countries, particularly for major players such as Amazon Web Services and Microsoft. Finally, citing security concerns, Canada banned Chinese and Russian software from government-issued devices.

AI governance

Meanwhile, a leaked draft suggests that Southeast Asian countries, under the umbrella of the Association of Southeast Asian Nations (ASEAN), are taking a business-friendly approach to AI regulation. The draft guide to AI ethics and governance asks companies to take cultural differences into account and does not prescribe categories of unacceptable risk. For its part, Germany has adopted an AI action plan to accelerate its progress at the national and European levels, in order to compete with the dominant AI powers, the USA and China.

Security

The heads of the security agencies of the USA, the UK, Australia, Canada, and New Zealand, collectively known as the Five Eyes, publicly warned against China’s vast espionage campaign to obtain trade secrets. The European Commission announced a comprehensive review of security risks in critical technology areas, including semiconductors, AI, quantum technologies, and biotechnologies. On 8 November, ChatGPT suffered outages, reportedly caused by a distributed denial-of-service (DDoS) attack; the hacking group Anonymous Sudan claimed responsibility. Finally, Microsoft’s latest Digital Defense Report revealed a global increase in cyberattacks, with a rise in government-sponsored espionage and influence operations.

Infrastructure

The US Federal Communications Commission (FCC) voted to restore net neutrality rules. Initially adopted in 2015, the rules were repealed by the previous administration but are now set to be reinstated.

Access Now, the Internet Society, and 22 other organisations and experts jointly sent a letter to the Telecom Regulatory Authority of India (TRAI) opposing the imposition of discriminatory network costs or licensing regimes on online platforms.

Internet economy

Alphabet-owned Google reportedly paid a substantial USD 26.3 billion to other companies in 2021 to ensure that its search engine remained the default option on web browsers and mobile phones. The figure came to light during the antitrust trial brought by the US Department of Justice (DoJ). Over similar anti-competitive practices, the Japan Fair Trade Commission (JFTC) opened an antimonopoly investigation into Google’s dominance in web search.

The European Central Bank (ECB) decided to launch a two-year preparation phase, starting on 1 November 2023, to finalise regulations and select private-sector partners ahead of a possible launch of a digital version of the euro. The next step would be actual implementation, once policymakers give the green light. In parallel, the European Data Protection Board (EDPB) called for the digital euro legislation proposed by the European Commission to strengthen privacy safeguards.

Digital rights

The Council presidency and the European Parliament reached a provisional agreement on a new framework for a European digital identity (eID), intended to provide all Europeans with a trusted and secure digital identity. Under the agreement, member states will provide citizens and businesses with digital wallets that link their national digital identity to other personal attributes, such as driving licences and diplomas.

The European Parliament’s Committee on the Internal Market and Consumer Protection published a report warning of the addictive nature of certain digital services and calling for stricter regulation to combat the addictive design of digital platforms. In the same vein, the European Data Protection Board ordered the Irish data regulator to impose a permanent ban on Meta’s behavioural advertising on Facebook and Instagram.

The European Parliament’s main political groups reached a consensus on draft legislation requiring internet platforms to detect and report child sexual abuse material in order to prevent its dissemination online.

Content policy

Meta, the parent company of Facebook and Instagram, is facing a legal battle launched by more than 30 US states. The plaintiffs claim that Meta intentionally and knowingly deployed addictive features while concealing the potential risks of social media use, thereby violating consumer protection laws and the rules protecting the privacy of children under 13.

The EU formally asked Meta and TikTok for details of their measures against disinformation. Against the backdrop of the conflict in the Middle East, the EU highlighted the risks of the large-scale dissemination of illegal content and disinformation.

The UK’s Online Safety Act, which imposes new responsibilities on social media companies, has entered into force. The law aims to strengthen online safety and holds social media platforms accountable for their content moderation practices.

Development

The Gaza Strip has suffered three internet blackouts since the start of the conflict, prompting Starlink, part of Elon Musk’s SpaceX, to offer internet access to internationally recognised aid organisations in Gaza. Meanwhile, environmental NGOs are urging the EU to take action on e-waste, calling for a revision of the Waste Electrical and Electronic Equipment (WEEE) Directive, according to a communication from the European Environmental Bureau.

THE TALK OF THE TOWN – GENEVA

As agreed at the ordinary session of the ITU Council in July 2023, an additional session dedicated to confirming logistical matters and to organisational planning for 2024–2026 was held in October 2023. It was preceded by the cluster of meetings of the Council Working Groups (CWGs) and Expert Groups (EGs), during which the list of chairs and vice-chairs was established up to the 2026 Plenipotentiary Conference. The next cluster of CWG and EG meetings will take place from 24 January to 2 February 2024.

The third summit of the Geneva Science and Diplomacy Anticipator (GESDA) saw the launch of the Open Quantum Institute (OQI), a partnership between the Swiss Federal Department of Foreign Affairs (FDFA), CERN, and UBS. The OQI aims to make high-performance quantum computers accessible to all users dedicated to finding solutions and accelerating progress towards the sustainable development goals (SDGs). The OQI will be hosted at CERN from March 2024 and will facilitate the exploration of use cases for the technology in health, energy, climate protection, and beyond.

In brief

Defining the global AI landscape

We spent most of 2023 reading and writing about AI governance, month after month, and October was no exception. As the world grapples with the complexities of this technology, the following initiatives illustrate the efforts under way to address ethical, security, and regulatory challenges at both the national and international levels.

Biden’s executive order on AI. The executive order represents the US government’s most significant effort to regulate AI to date. Unveiled ahead of schedule, it provides enforceable directives where possible and calls for bipartisan legislation where necessary, notably on data privacy.

One of its most striking features is its focus on AI safety and security. Developers of the most powerful AI systems are now required to share safety test results and critical information with the US government. In addition, AI systems used in critical infrastructure are subject to rigorous security standards, reflecting a proactive approach to mitigating the potential risks associated with AI deployment.

Unlike some emerging AI laws, such as the EU’s AI Act, Biden’s executive order takes a sectoral approach, directing specific federal agencies to focus on AI applications in their own domains. For example, the Department of Health and Human Services is tasked with promoting the responsible use of AI in healthcare, while the Department of Commerce is tasked with developing guidelines for content authentication and watermarking to clearly label AI-generated content. The DoJ is tasked with tackling algorithmic discrimination, reflecting a nuanced and tailored approach to AI governance.

Beyond regulation, the order aims to strengthen the USA’s technological lead. It makes it easier for highly skilled workers to enter the country, recognising their essential role in advancing AI capabilities. The order also prioritises AI research through funding initiatives, increased access to AI resources and data, and the establishment of new research structures.

The G7’s guiding principles. At the same time, the G7 countries published their guiding principles for advanced AI, accompanied by a detailed code of conduct for the organisations developing it.

G7 leaders. Photo source: Politico

The 11 principles are built around risk-based responsibility. The G7 encourages developers to implement reliable content authentication mechanisms, reflecting a commitment to ensuring the transparency of AI-generated content.

A notable similarity with the EU’s AI Act is the risk-based approach, which gives AI developers the responsibility to assess and manage the risks associated with their systems. The EU was quick to welcome the principles, arguing that they could complement the legally binding rules of the EU AI Act at the international level.

While building on the AI principles of the Organisation for Economic Co-operation and Development (OECD), the G7 principles go further in some respects. They encourage developers to deploy reliable content authentication and provenance mechanisms, such as watermarking, to enable users to identify AI-generated content. At the same time, the G7’s approach preserves a degree of flexibility, allowing jurisdictions to adopt the code in ways that align with their individual approaches.
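
As a rough illustration of how invisible watermarking can work, the sketch below hides a provenance tag in an image’s least significant bits. It is a toy example only: real provenance schemes are far more robust (surviving compression and cropping, which this does not), and the helper function and its parameters are our own assumptions, not any standard’s API.

import numpy as np
from PIL import Image

def embed_tag(img: Image.Image, tag: bytes) -> Image.Image:
    """Hide a provenance tag in the least significant bits of the red channel."""
    bits = np.unpackbits(np.frombuffer(tag, dtype=np.uint8))
    arr = np.array(img.convert("RGB"))
    red = arr[..., 0].flatten()
    if bits.size > red.size:
        raise ValueError("Image too small to hold the tag.")
    red[: bits.size] = (red[: bits.size] & 0xFE) | bits  # overwrite the lowest bit
    arr[..., 0] = red.reshape(arr[..., 0].shape)
    return Image.fromarray(arr)

A verifier that knows the tag length can recover it by reading the same bits back; the change is invisible to the eye, since each pixel value shifts by at most one intensity level.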

The differing views on AI regulation among the G7 countries are acknowledged, ranging from strict enforcement to more innovation-friendly guidelines. However, some provisions, such as those on privacy and copyright, have been criticised as vague, raising questions about their capacity to drive meaningful change.

China’s Global AI Governance Initiative (GAIGI). China unveiled its GAIGI at the third Belt and Road Forum, marking an important step in shaping the global trajectory of AI. China’s GAIGI is expected to bring together the 155 countries participating in the Belt and Road Initiative, creating one of the world’s largest AI governance forums.

This strategic initiative focuses on five aspects, including aligning AI development with human progress, promoting mutual benefit, and opposing ideological divisions. It also establishes a testing and evaluation system to assess and mitigate AI risks, similar to the risk-based approach of the forthcoming EU AI Act. In addition, GAIGI supports consensus-based frameworks and provides essential support to developing countries in building their AI capacity.

China’s proactive approach to regulating its AI industry gives it a first-mover advantage. Despite its deeply ideological approach, China’s interim measures on generative AI, in force since August 2023, were a world first. This advantage positions China as an influential player in shaping global standards for AI regulation.

The AI Safety Summit at Bletchley Park. The UK’s much-anticipated summit produced a historic commitment from the leading AI countries and companies: to test the most advanced AI models before releasing them to the public.

The Bletchley Declaration identifies the dangers of current AI, including bias, threats to privacy, and the production of misleading content. While addressing these immediate concerns, the focus was on frontier AI – advanced models that exceed current capabilities – and its serious potential for harm. The signatories include Australia, Canada, China, France, Germany, India, Korea, Singapore, the UK, and the USA – 28 countries in total, plus the EU.

Governments will now play a more active role in testing AI models. The AI Safety Institute, a new global hub established in the UK, will collaborate with leading AI institutions to assess the safety of emerging AI technologies before and after their public release. This is a significant shift from the traditional model, in which AI companies were solely responsible for the safety of their models.

The summit also produced an agreement to create an international advisory panel on AI risks, inspired by the Intergovernmental Panel on Climate Change (IPCC). Each signatory country will appoint a representative to support a wider group of eminent AI academics in producing State of the Science reports. This collaborative approach aims to foster international consensus on AI risks.

The UN’s high-level advisory body on AI. The UN has taken a distinctive approach by creating a 39-member high-level advisory body on AI. Led by UN Tech Envoy Amandeep Singh Gill, the body will publish its first recommendations by the end of this year, with final recommendations expected next year. The recommendations will be considered at the UN Summit of the Future in September 2024.

Unlike previous initiatives that introduced new principles, the UN advisory body focuses on assessing existing governance initiatives around the world, identifying gaps, and proposing solutions. The Tech Envoy envisions the UN as a platform for governments to discuss and refine AI governance frameworks.

The OECD’s updated definition of AI. The OECD has officially revised its definition of AI: An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment. The definition is expected to be incorporated into the upcoming EU AI Act.

Disinformation threatens to obscure the truth in the Middle East

The quote ‘A lie can travel halfway around the world while the truth is putting on its shoes’, attributed to Mark Twain, is – ironically – apocryphal.

Disinformation is as old as humanity and has existed in its currently recognised form for decades, but social media has amplified its speed and reach. A 2018 MIT study found that lies spread six times faster than the truth – on Twitter, at least. Platforms amplify disinformation to different degrees, depending on how many features they have in place to make posts go viral.

Yet all social media platforms have been confronted with disinformation in recent days, as populations were gripped by the violence in Israel and Gaza. Social media platforms have been flooded with graphic images and videos of the conflict, as well as images and videos that had nothing to do with it.

What’s happening? Misattributed images, doctored documents, and old videos taken out of context are circulating online, making it difficult for anyone seeking information about the conflict to separate the false from the true.


Shaping perceptions. Misleading claims are not confined to the conflict zone; they also affect global perceptions and contribute to divided opinions. Individuals, swayed by bias and emotion, take sides on the basis of information that often lacks accuracy or context.

False narratives on platforms such as X (formerly known as Twitter) can affect political agendas, with examples of false memos circulating about military aid and allegations of fund transfers. Even supposedly trustworthy verified accounts contribute significantly to the spread of false information.

What tech companies are doing. Meta has set up a special operations centre staffed with experts, including fluent Hebrew and Arabic speakers. It is working with fact-checkers, using their assessments to downrank false content in the feed and reduce its visibility. TikTok’s measures are somewhat similar: The company has set up a command centre for its safety team, added moderators fluent in Arabic and Hebrew, and improved its automated detection systems. X has removed hundreds of Hamas-linked accounts and removed or flagged thousands of pieces of content. Google and Apple have reportedly disabled live traffic data on online maps of Israel and Gaza. The social messaging platform Telegram blocked Hamas channels on Android because of violations of Google’s app store guidelines.

The EU reacts. The EU ordered X, Alphabet, Meta, and TikTok to remove false content. European Commissioner Thierry Breton reminded them of their obligations under the new Digital Services Act (DSA) and gave X, Meta, and TikTok 24 hours to respond. X confirmed that it had removed Hamas-linked accounts, but the EU sent a formal request for information, marking the start of an investigation into the platform’s compliance with the DSA.

A complicating factor. Earlier this year, however, Meta, Amazon, Alphabet, and Twitter laid off many members of their disinformation teams as part of post-COVID-19 restructuring aimed at improving their financial performance.

The situation underscores the need for robust action – including effective fact-checking, regulatory oversight, and platform accountability – to mitigate the impact of disinformation on public perception and global discourse.

IGF 2023

The 2023 Internet Governance Forum (IGF) tackled key issues against a backdrop of global tensions, including the conflict in the Middle East. With a record 300 sessions, 15 days’ worth of video content, and 1,240 speakers, discussions ranged from the Global Digital Compact (GDC) and AI policy to data governance and bridging the digital divide.

The following ten questions are drawn from the detailed reports of the hundreds of sessions and workshops held at IGF 2023.

1. How should AI be governed? Sessions explored national and international approaches to AI governance, emphasising transparency and questioning whether to regulate AI applications or AI capabilities.

2. What is the future of the IGF in the context of the Global Digital Compact (GDC) and the WSIS+20 review process? The future of the IGF is closely tied to the GDC and the WSIS+20 review. The 2025 review could decide the IGF’s fate, and the GDC negotiations, expected in 2024, will also shape its trajectory.

3. How can we use the IGF’s wealth of data for an AI-supported, human-centred future? The IGF’s 18 years of data are considered a public good. Discussions focused on using AI to extract insights, improve stakeholder participation, and visually represent discussions through knowledge graphs.

4. How can the risks of internet fragmentation be mitigated? Multidimensional approaches and inclusive dialogue were proposed to prevent unintended consequences.

5. What is at stake in the consultations on the UN cybercrime treaty? Concerns were raised about the treaty’s scope, human rights safeguards, vague definitions of cybercrime, and the role of the private sector in the negotiations. Emphasis was placed on clarity, on distinguishing between cyber-dependent and cyber-enabled crimes, and on international cooperation.

6. Will the new global tax rules be as effective as we all hope? The IGF debated the potential effectiveness of the OECD/G20 two-pillar solution for global tax rules. Concerns remain about profit shifting, tax havens, and power imbalances between the Global North and South.

7. How should disinformation and the protection of digital communications in times of war be addressed? Collaborative efforts between humanitarian organisations, tech companies, and international bodies were deemed essential.

8. How can data governance be strengthened? The forum highlighted the importance of organised and transparent data governance, including clear standards, an enabling environment, and public-private partnerships. The concept of Data Free Flow with Trust (DFFT), introduced by Japan, was discussed as a framework for facilitating global data flows while ensuring security and privacy.

9. How can the digital divide be bridged? The digital divide requires comprehensive strategies that go beyond connectivity, involving regional initiatives, the deployment of low Earth orbit (LEO) satellites, and digital literacy efforts. Public-private partnerships, particularly with the regional internet registries (RIRs), were highlighted as essential for fostering trust and collaboration.

10. What is the impact of digital technologies on the environment? The IGF examined the environmental impact of digital technologies, noting that the sector could cut its emissions by 20% by 2050. Immediate action, collaboration, awareness campaigns, and sustainable policies were advocated to minimise the environmental footprint of digitalisation.

To learn more, read our final report from IGF 2023.

Coming up: UNCTAD eWeek 2023

Organised by the UN Conference on Trade and Development (UNCTAD) in collaboration with eTrade for all partners, UNCTAD eWeek 2023 is scheduled for 4–8 December at the International Conference Centre Geneva (CICG). The central theme of this groundbreaking event is ‘Shaping the future of the digital economy’.

Ministers, senior officials, CEOs, international organisations, academics, and civil society representatives will come together to address key questions about the future of the digital economy: What does the future we want for the digital economy look like? What is required to make that future come true? How can digital partnerships and enhanced cooperation contribute to more inclusive and sustainable outcomes?

Throughout the week, participants will take part in more than 150 sessions on themes such as platform governance, the impact of AI on the digital economy, environmentally friendly digital practices, empowering women through digital entrepreneurship, and accelerating digital readiness in developing countries.

The event will explore key policy areas for building inclusive and sustainable digitalisation at different levels, focusing on innovation, scalable good practices, concrete actions, and achievable measures.

For young people aged 15 to 24, an online consultation has been set up to ensure that their voices are heard in shaping a digital future for all.

Just-in-time reporting from the GIP and Diplo’s sessions at UNCTAD eWeek


The GIP will actively participate in eWeek 2023 by reporting from the event. Our human experts will be joined by DiploAI, which will generate reports from all eWeek sessions. Bookmark our dedicated eWeek 2023 page on the Digital Watch Observatory or download the app to follow the reports.

Diplo, the organisation behind the GIP, will also co-organise a session on future scenarios with youth together with UNCTAD and the Friedrich-Ebert-Stiftung (FES), and a session on digital economy agreements and the future of digital trade rule-making with CUTS International. Diplo’s own session will be entitled ‘Bottom-up AI and the Right to be Humanly Imperfect’. For more details, visit our Diplo @ UNCTAD eWeek page.


News from the Francophonie


Fratel’s francophone telecom regulators meet in Rabat to strengthen users’ interests

The National Telecommunications Regulatory Agency (ANRT) of the Kingdom of Morocco, chair of the Francophone Telecommunications Regulation Network (Fratel) for 2023, hosted the network’s 21st annual meeting in Rabat on 25 and 26 October 2023, on the theme ‘How can the goal of user satisfaction be strengthened in regulation?’. More than 140 participants took part, representing 18 regulatory authorities that are members of Fratel, international institutions (the World Bank), consumer associations from various countries, and industry players.

For its 20th anniversary, Fratel focused this year on taking users’ interests into account. Following the network’s 20th seminar, held in Lausanne in May on the theme ‘Why and how should users be involved in regulation?’, the round tables at the annual meeting allowed speakers to discuss the different types of users on whose behalf regulation is exercised and what is being done to meet their needs and protect them. Participants also discussed how to make information campaigns aimed at these different categories of users more effective, and how to support the general public in the face of technological change.

Photo credit: Fratel

The annual meeting also saw the election of the network’s new coordination committee for 2024. It comprises Mr Marc Sakala, Director General of the ARPCE of the Republic of the Congo (chair), Ms Laure de La Raudière, President of Arcep France, and Mr Az-El-Arabe Hassibi, Director General of the ANRT of Morocco (vice-chairs).

Members warmly thanked Mr Luc Tapella, Director of the ILR of Luxembourg, for serving on the coordination committee for the past three years.

The network’s action plan for 2024 was adopted by its members. The themes addressed in 2024 will revolve around the future of networks and regulation on the one hand, and the regulatory challenges of data markets and digital services on the other.

The next seminar will take place in Togo in the first half of 2024, on the theme ‘The data economy and digital services: Which technical and economic regulatory challenges?’. The network’s annual meeting will be held in the second half of 2024 and will address ‘Which business models and strategies for telecom operators in the future?’.

Learn more: www.fratel.org

The OIF continues to mobilise francophone representatives at ICANN

Following the ICANN77 conference in Washington last June, which marked its return to that arena, the OIF took part in the Annual General Meeting of ICANN (the Internet Corporation for Assigned Names and Numbers) in Hamburg from 21 to 26 October 2023. The meeting was an opportunity to continue mobilising the francophone community in order to strengthen the coordination and presence of francophone voices in this internet governance body. ICANN’s main missions are to administer the internet’s unique identifier resources – allocating internet protocol address space and managing the system of top-level domain names (generic and country-code TLDs) – and to coordinate technical actors. The OIF has observer status in the Governmental Advisory Committee (GAC), one of ICANN’s four advisory committees, which represents the voice of governments and intergovernmental organisations (IGOs) in this multistakeholder structure.

Photo credit: OIF

Through its Directorate for the Economic and Digital Francophonie, the OIF contributes to coordinating and mobilising francophone actors within the GAC so that common francophone priorities are voiced more strongly and with greater impact.

On the sidelines of the sessions, the OIF organised a coordination meeting of the representatives of Francophonie member states in the Governmental Advisory Committee on Tuesday, 24 October 2023. The meeting brought together 18 participants and helped identify common priorities, notably on the domain name system (DNS) and the redelegation of country code top-level domains (ccTLDs – the final part of an internet address, referring to a specific country or region) to state authorities or to national civil society internet actors. ccTLDs remain a recurring problem for some states, particularly in francophone Africa. Beyond the technical questions, redelegation is a genuine matter of national actors’ sovereignty over their country’s domain name. The stakes were illustrated by the Minister of Posts, Telecommunications and the Digital Economy of the Republic of Guinea, Mr Ousmane Gaoual Diallo, who announced at the opening of the GAC sessions that, after many years of work and procedures, his country is finally the manager of ‘.gn’ (the domain having previously been managed by private actors).
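
For readers unfamiliar with the mechanics, a ccTLD is simply the final label of a domain name. A trivial sketch (a hypothetical helper for illustration, not an ICANN tool) makes the point:

def cctld(hostname: str) -> str:
    """Return the top-level domain, i.e. the final label of a domain name."""
    return hostname.rstrip(".").rsplit(".", 1)[-1].lower()

assert cctld("example.gov.gn") == "gn"   # Guinea's national ccTLD
assert cctld("diplomacy.edu") == "edu"   # a generic TLD, not a ccTLD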

Sharing good practices on redelegation, on the management and operation of ccTLDs, and on capacity building are priority areas of attention and modes of intervention for resolving similar situations.

The next ICANN meetings will take place in San Juan, Puerto Rico, from 2 to 7 March 2024 (ICANN79 Community Forum) and, above all, in Kigali, Rwanda, for the ICANN80 Policy Forum from 10 to 13 June 2024, which will be a great opportunity for the francophone community to mobilise.

Learn more: www.francophonie.org

Upcoming events:

  • Symposium of the Francophone Network of Media Regulators (REFRAM) in Nouakchott (Mauritania, 16–17 November 2023): Organised by Mauritania’s High Authority for the Press and Broadcasting, the symposium’s theme is ‘Broadcasting in the digital age: Achievements and challenges’.
  • eWeek 2023 conference (Geneva, 4–8 December 2023): The OIF is contributing to the programme of this international event with three sessions, on the themes ‘Towards a digital vulnerability index’, ‘How to meet the demand for digital skills in francophone Africa?’, and ‘The discoverability of digital content: An imperative for guaranteeing cultural diversity’.
  • Training on the negotiation of the Global Digital Compact (7–8 December 2023, online): Organised by the OIF in partnership with ISOC, the training is aimed at francophone experts in charge of digital affairs within the Permanent Missions to the United Nations in New York.


DW Weekly #137 – 20 November 2023


Dear all,

Last week’s meeting between US President Joe Biden and Chinese President Xi Jinping was momentous not so much for what was said (see outcomes further down), but for the fact that it happened at all. 

Over the weekend, the news of Sam Altman’s ousting from OpenAI caused quite a stir. He didn’t need to wait long to find a new home: Microsoft.

Lots more happened, so let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

Biden-Xi Summit cools tensions after long tech standoff

Last week’s meeting between US President Joe Biden and Chinese President Xi Jinping, in San Francisco on the sidelines of the Asia-Pacific Economic Cooperation’s (APEC) Leaders’ Meeting, marked a significant step towards reducing tensions between the two countries. 

Implications for tech policy. Tensions, especially over technology, have been escalating for years. For instance, in August, the US government issued a new executive order restricting US investments in sensitive Chinese technology sectors. The order was met with some trepidation by tech companies operating in both countries, as it was unclear how it would affect their businesses. But now, after Biden and Xi’s meeting, there is hope that tensions between the two countries will ease and that this softening will extend to many areas, including tech cooperation and policy. At least, so we hope.

Responsible competition. Prior to their closed-door meeting, the two leaders pragmatically acknowledged that the USA and China have contrasting histories, cultures, and social systems. Yet, President Xi said, ‘As long as they respect each other, coexist in peace, and pursue win-win cooperation, they will be fully capable of rising above differences and find the right way for the two major countries to get along with each other’. Biden earlier had said, ‘We have to ensure that competition does not veer into conflict. And we also have to manage it responsibly.’

State meeting with Xi, Biden, and staff. Credit: @POTUS on X.

Cooperation on AI. Among other topics, the two presidents agreed on the need ‘to address the risks of advanced AI systems and improve AI safety through US-China government talks,’ the post-summit White House readout said. It’s unclear what this means exactly, given that both China and the USA have already introduced the first elements of an AI framework. The fact that they brought this up, however, means that the USA certainly wants to stop any trace of AI technology theft in its tracks. But what’s in it for China?

US investment. A high-level diplomat suggested to Bloomberg that Xi’s address asking US executives to invest more in China was a signal that China needs US capital because of mistakes at home that have hurt China’s economic growth. If US Ambassador to Japan Rahm Emanuel is right, that explains why cooperation is a win-win outcome.

Tech exports. There’s a significant ‘but’ to the appearance of a thaw. Cooperation will continue as long as advanced US technologies are not used by China to undermine US national security. The readout continued: ‘The President emphasised that the United States will continue to take necessary actions’ to prevent this from happening, at the same time ‘without unduly limiting trade and investment’.

Unreported? Undoubtedly, there were other undisclosed topics discussed by the two leaders during their private meeting. For instance, what happened to the ‘likely’ deal on banning AI from autonomous weapon systems, including drones, which a Chinese embassy official hinted at before the meeting and on which the USA took a new political stand just two days prior?

Although it’s early days to see any significant positive ripple effects from the meeting, we’ll let the fact that Biden and Xi met face to face sink in a little bit. After all, as International Monetary Fund Managing Director Kristalina Georgieva told Reuters, the meeting was a badly needed signal that the world can cooperate more.


Digital policy roundup (13–20 November)

// AI //

Sam Altman ousted from OpenAI, joins Microsoft

Sam Altman, the CEO of OpenAI, who was fired on Friday in a surprise move by the company’s board, will now be joining Microsoft. Altman will lead a new AI innovation team at Microsoft, CEO Satya Nadella announced today (Monday). Fellow OpenAI co-founder Greg Brockman, who was removed from the board, will also join Microsoft.

Although Twitch co-founder Emmett Shear has been appointed as interim CEO, OpenAI’s future is far from stable: A letter signed by over 700 OpenAI employees has demanded the resignation of the board and the reinstatement of Altman (which might not even be possible at this stage).

Why is it relevant? First, Altman was the driving force behind the company – and its technology – which pushed the boundaries in AI and machine learning in such a short and impactful time. More than that, Altman was OpenAI’s main fundraiser; the new CEO will have big shoes to fill. Second, Microsoft has been a major player in the world of AI for many years; Altman’s move will further increase Microsoft’s already significant influence in this field. Third, tech companies can be as volatile as stock markets.

Sam Altman shows off an OpenAI guest badge, which he said was the last time he would ever wear one.

US Senate’s new AI bill to make risk assessments and AI labels compulsory

A group of US senators has introduced a bill to establish an AI framework for accountability and certification based on two categories of AI systems – high-impact and critical-impact. The AI Research, Innovation, and Accountability Act of 2023 – or AIRIA – would also require internet platforms to implement a notification mechanism to inform users when the platform is using generative AI.

Joint effort. Under the bill, introduced by members of the Senate Commerce Committee, the National Institute of Standards and Technology (NIST) would be tasked with developing risk-based guidelines for high-impact AI systems. Companies using critical-impact AI would be required to conduct detailed risk assessments and comply with a certification framework established by independent organisations and the Commerce Department.

Why is it relevant? The bipartisan AIRIA is the latest US effort to establish AI rules, closely following President Biden’s Executive Order on Safe, Secure, and Trustworthy AI. It’s also the most comprehensive AI legislation introduced in the US Congress to date.


// IPR //

Music publishers seek court order to stop Anthropic’s AI models from training on copyrighted lyrics

A group of music publishers have requested a US federal court judge to block AI company Anthropic from reproducing or distributing their copyrighted song lyrics. The publishers also want the AI company to implement effective measures that would prevent its AI models from using the copyrighted lyrics to train future AI models. 

The publishers’ request is part of a lawsuit they filed on 18 October. The case continues on 29 November.

Why is it relevant? First, although the lawsuit is not new, the music publishers’ request for a preliminary injunction shows how impatient copyright holders are with AI companies allegedly using copyrighted materials. Second, the case raises once more the issue of fair use: In a letter to the US Copyright Office last month, Anthropic argued that its models use copyrighted data only for statistical purposes and not for copying creativity.

Case details: Concord Music Group, Inc. v Anthropic PBC, District Court, M.D. Tennessee, 3:23-cv-01092.



// CONNECTIVITY //

Amazon’s Project Kuiper completes successful Protoflight mission

The team behind Project Kuiper, Amazon’s satellite internet network, has successfully tested its prototype satellites, which were launched on 6 October. Watch this video to see the Project Kuiper team testing a two-way video call from an Amazon site in Texas. The next step is to start mass-producing the satellites for deployment in 2024.




// DMA //

Meta and others challenge DMA gatekeeper status

A number of tech companies are challenging the European Commission’s decision to designate them as digital gatekeepers, which brings them within the scope of the new Digital Markets Act. Among the companies:

  • Meta (Case T-1078/23): The company disagrees with the Commission’s decision to designate its Messenger and Marketplace services under the new law, but does not challenge the inclusion of Facebook, WhatsApp, or Instagram.
  • Apple (Cases T-1079/23 & T-1080/23): Details aren’t public, but media reports said the company was challenging the inclusion of its App Store on the list of gatekeepers.
  • TikTok (Case T-1077/23): The company said its designation risked entrenching the power of dominant tech companies.

Microsoft and Google decided not to challenge their gatekeeper status.

Why is it relevant? The introduction of the Digital Markets Act has far-reaching implications for the operations of tech giants, and these legal challenges are the first attempt to blunt its implementation. The outcomes of these cases could set a precedent for the future regulation of digital markets in the EU.


The week ahead (20–27 November)

20 November–15 December: The ITU’s World Radiocommunication Conference, which starts today (Monday) in Dubai, UAE, will review the international treaty governing the use of the radio-frequency spectrum and the geostationary-satellite and non-geostationary-satellite orbits. Download the agenda and draft resolutions.

21–23 November: The 8th European Cyber Week (ECW) will be held in Rennes, France, and will bring together cybersecurity and cyber defence experts from the public and private sectors.

27–29 November: The 12th UN Forum on Business and Human Rights will be held in a hybrid format next week to discuss effective change in implementing obligations, responsibilities, and remedies.


#ReadingCorner

Copyright lawsuits: Who’s really protected?

Microsoft, OpenAI, and Adobe are all promising to defend their customers against intellectual property lawsuits, but that guarantee doesn’t apply to everyone. Plus, those indemnities are narrower than the announcements suggest. Read the article.

Guarding artistic creations by polluting data

Data poisoning is a technique used to protect copyrighted artwork from being used by generative AI models. It involves imperceptibly changing the pixels of digital artwork in a way that ‘poisons’ any AI model ingesting it for training, degrading the model to the point of being functionally useless. While it has been primarily used by content creators against web scrapers, it has many other uses. However, data poisoning is not as straightforward as it sounds: Polluting the datasets requires a targeted approach. Read the article.
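To make the mechanics concrete, here is a minimal sketch of the ‘imperceptible change’ step. Everything below is illustrative: real poisoning tools optimise the perturbation against a surrogate model so that training on the image teaches the model the wrong associations – that optimisation is the ‘targeted approach’ the article refers to, and plain random noise like this would not, on its own, poison anything.

```python
import numpy as np

def perturb_image(pixels: np.ndarray, strength: int = 2, seed: int = 0) -> np.ndarray:
    """Return a copy of an RGB image with a tiny pixel perturbation (toy example).

    A shift of +/-2 intensity levels out of 255 is invisible to the human eye.
    Real data-poisoning tools replace this random noise with a perturbation
    optimised against a surrogate model.
    """
    rng = np.random.default_rng(seed)
    noise = rng.integers(-strength, strength + 1, size=pixels.shape)
    return np.clip(pixels.astype(int) + noise, 0, 255).astype(np.uint8)

art = np.full((64, 64, 3), 128, dtype=np.uint8)  # stand-in for a digital artwork
poisoned = perturb_image(art)
assert np.abs(poisoned.astype(int) - art.astype(int)).max() <= 2  # imperceptible shift
```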


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior Editor Digital Policy, DiploFoundation

Digital Watch newsletter – Issue 84 – November 2023


Snapshot: What’s making waves in digital policy?

Geopolitics

The US Department of Commerce (DoC) Bureau of Industry and Security (BIS) announced a tightening of export restrictions on advanced semiconductors to China and other nations subject to arms embargoes. The decision has elicited a strong reaction from China, which labelled the measures ‘unilateral bullying’ and an abuse of export control mechanisms.

Further complicating the US-China tech landscape, there are discussions within the US government about imposing restrictions on Chinese companies’ access to cloud services. If implemented, this move could have significant consequences for both nations, particularly impacting major players like Amazon Web Services and Microsoft. Finally, Canada has banned Chinese and Russian software from government-issued devices, citing security concerns.

AI governance

A leaked draft text suggests that Southeast Asian countries, under the umbrella of the Association of Southeast Asian Nations (ASEAN), are adopting a business-friendly approach to AI regulation. The draft guide to AI ethics and governance asks companies to consider cultural differences and doesn’t prescribe categories of unacceptable risk. Meanwhile, Germany has introduced an AI action plan intended to advance AI development at the national and European levels and to compete with the USA and China, the predominant AI powers.

Read more on AI governance below.

Security

The heads of security agencies from the USA, the UK, Australia, Canada, and New Zealand, collectively known as the Five Eyes, have publicly cautioned about China’s widespread espionage campaign to steal commercial secrets. The European Commission has announced a comprehensive review of security risks in vital technology domains, including semiconductors, AI, quantum technologies, and biotechnologies. ChatGPT faced outages on 8 November, believed to be a result of a distributed denial-of-service (DDoS) attack. Hacktivist group Anonymous Sudan claimed responsibility. Finally, Microsoft’s latest Digital Defense Report revealed a global increase in cyberattacks, with government-sponsored spying and influence operations on the rise. 

Infrastructure

The US Federal Communications Commission (FCC) voted to initiate the process of restoring net neutrality rules. Initially adopted in 2015, these rules were repealed under the previous administration but are now poised for reinstatement. 

Access Now, the Internet Society, and 22 other organisations and experts have jointly sent a letter to the Telecom Regulatory Authority of India (TRAI) opposing the enforcement of discriminatory network costs or licensing regimes for online platforms.

Internet economy

Alphabet’s Google reportedly paid a substantial USD 26.3 billion to other companies in 2021 to ensure its search engine remained the default on web browsers and mobile phones, as revealed during the US Department of Justice’s (DoJ) antitrust trial. Over similar anticompetitive concerns, the Japan Fair Trade Commission (JFTC) has opened an antimonopoly investigation into Google’s web search dominance.

The European Central Bank (ECB) has decided to commence a two-year preparation phase starting 1 November 2023 to finalise regulations and select private-sector partners ahead of the possible launch of a digital version of the euro. Implementation would follow only after a green light from policymakers. In parallel, the European Data Protection Board (EDPB) has called for enhanced privacy safeguards in the European Commission’s proposed digital euro legislation.

Digital rights

The council presidency and the European Parliament have reached a provisional agreement on a new framework for a European digital identity (eID) to provide all Europeans with a trusted and secure digital identity. Under the new agreement, member states will provide citizens and businesses with digital wallets that link their national digital identities with other personal attributes, such as driver’s licences and diplomas.

The European Parliament’s Internal Market and Consumer Protection Committee has passed a report warning of the addictive nature of certain digital services, advocating tighter regulations to combat addictive design in digital platforms. Meanwhile, the European data regulator has ordered the Irish data regulator to impose a permanent ban on Meta’s behavioural advertising across Facebook and Instagram.

Key political groups in the European Parliament have reached a consensus on draft legislation compelling internet platforms to detect and report child sexual abuse material (CSAM) to prevent its dissemination on the internet.

Content policy

Meta, the parent company of Facebook and Instagram, is confronting a legal battle initiated by over 30 US states. The lawsuit claims that Meta intentionally and knowingly used addictive features while concealing the potential risks of social media use, violating consumer protection laws, and breaching privacy regulations concerning children under 13. 

The EU has formally requested details on anti-disinformation measures from Meta and TikTok. Against the backdrop of the Middle East conflict, the EU emphasises the risks associated with the widespread dissemination of illegal content and disinformation.

The UK’s Online Safety Act, imposing new responsibilities on social media companies, has come into effect. This law aims to enhance online safety and holds social media platforms accountable for their content moderation practices.

Development

The Gaza Strip has faced three internet blackouts since the start of the conflict, prompting Elon Musk’s SpaceX’s Starlink to offer internet access to internationally recognised aid organisations in Gaza. Meanwhile, environmental NGOs are urging the EU to take action on electronic waste, calling for a revision of the Waste Electrical and Electronic Equipment Directive (WEEE Directive), per the European Environmental Bureau’s communication.

THE TALK OF THE TOWN – GENEVA

As agreed during the regular session of the ITU Council in July 2023, an additional session dedicated to confirming logistical issues and organisational planning for 2024–2026 was held in October 2023. It was preceded by a cluster of Council Working Group (CWG) and Expert Group (EG) meetings, where chairs and vice-chairs were appointed for the period until the 2026 Plenipotentiary Conference. The next cluster of CWG and EG meetings will take place from 24 January to 2 February 2024.

The 3rd Geneva Science and Diplomacy Anticipator (GESDA) Summit saw the launch of the Open Quantum Institute (OQI), a partnership among the Swiss Federal Department of Foreign Affairs (FDFA), CERN and UBS. The OQI aims to make high-performance quantum computers accessible to all users devoted to finding solutions for and accelerating progress in attaining sustainable development goals (SDGs). The OQI will be hosted at CERN beginning in March 2024 and facilitate the exploration of the technology’s use cases in health, energy, climate protection, and more.


Shaping the global AI landscape

Month in, month out, we spent most of 2023 reading and writing about AI governance, and October was no exception. As the world grapples with the complexities of this technology, the following initiatives showcase efforts to navigate its ethical, safety, and regulatory challenges on both national and international fronts.

Biden’s executive order on AI. The order represents the most substantial effort by the US government to regulate AI to date. Unveiled to much anticipation, the order provides actionable directives where possible and calls for bipartisan legislation where necessary, particularly on data privacy.

Image credit: CNBC

One standout feature is the emphasis on AI safety and security. Developers of the most potent AI systems are now mandated to share safety test results and critical information with the US government. Additionally, AI systems utilised in critical infrastructure sectors are subjected to rigorous safety standards, reflecting a proactive approach to mitigating potential risks associated with AI deployment.

Unlike some emerging AI laws, such as the EU’s AI Act, Biden’s order takes a sectoral approach. It directs specific federal agencies to focus on AI applications within their domains. For instance, the Department of Health and Human Services is tasked with advancing responsible AI use in healthcare, while the DoC is directed to develop guidelines for content authentication and watermarking to label AI-generated content clearly. The DoJ is instructed to address algorithmic discrimination, showcasing a nuanced and tailored approach to AI governance.

Beyond regulations, the executive order aims to bolster the US’s technological edge. It facilitates the entry of highly skilled workers into the country, recognising their pivotal role in advancing AI capabilities. The order also prioritises AI research through funding initiatives, increased access to AI resources and data, and the establishment of new research structures.

G7’s guiding principles. Simultaneously, the G7 nations released their guiding principles for advanced AI, accompanied by a detailed code of conduct for organisations developing AI.

Image credit: Politico

The 11 principles centre on risk-based responsibility. The G7 encourages developers to implement reliable content authentication mechanisms, signalling a commitment to transparency in AI-generated content.

A notable similarity with the EU’s AI Act is the risk-based approach, placing responsibility on AI developers to assess and manage the risks associated with their systems. The EU promptly welcomed these principles, citing their potential to complement the legally binding rules under the EU AI Act internationally.

While building on the existing Organisation for Economic Co-operation and Development (OECD) AI Principles, the G7 principles go a step further in certain respects. They encourage developers to deploy reliable content authentication and provenance mechanisms, such as watermarking, to enable users to identify AI-generated content. At the same time, the G7’s approach preserves a degree of flexibility, allowing jurisdictions to adopt the code in ways that align with their individual approaches.
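What a content authentication and provenance mechanism might look like in its simplest form is a signed claim bound to a hash of the content. The sketch below uses a shared-secret HMAC purely for brevity – deployed schemes use public-key signatures and richer metadata, and pixel-level watermarks work differently again, embedding the signal in the content itself. All names and keys are illustrative.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"illustrative-issuer-key"  # real schemes use public-key signatures

def attach_provenance(content: bytes, generator: str) -> dict:
    """Bind a provenance claim (e.g. 'AI-generated') to a hash of the content."""
    claim = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,  # e.g. the model that produced the content
        "ai_generated": True,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_provenance(content: bytes, claim: dict) -> bool:
    """Check the signature and that the content has not been altered."""
    unsigned = {k: v for k, v in claim.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, claim["signature"])
            and unsigned["sha256"] == hashlib.sha256(content).hexdigest())

image = b"...generated image bytes..."
claim = attach_provenance(image, generator="example-model-v1")
assert verify_provenance(image, claim)              # intact content verifies
assert not verify_provenance(image + b"x", claim)   # tampering is detected
```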

Differing viewpoints on AI regulation among G7 countries are acknowledged, ranging from strict enforcement to more innovation-friendly guidelines. However, some provisions, such as those related to privacy and copyright, are criticised for their vagueness, raising questions about their potential to drive tangible change.

China’s Global AI Governance Initiative (GAIGI). China unveiled its GAIGI during the Third Belt and Road Forum, marking a significant stride in shaping the trajectory of AI on a global scale. China’s GAIGI is expected to bring together 155 countries participating in the Belt and Road Initiative, establishing one of the largest global AI governance forums.

This strategic initiative focuses on five aspects, including ensuring AI development aligns with human progress, promoting mutual benefit, and opposing ideological divisions. It also establishes a testing and assessment system to evaluate and mitigate AI-related risks, similar to the risk-based approach of the EU’s upcoming AI Act. Additionally, the GAIGI supports consensus-based frameworks and provides vital support to developing nations in building their AI capacities.

China’s proactive approach to regulating its homegrown AI industry has granted it a first-mover advantage. Despite its deeply ideological approach, China’s interim measures on generative AI, effective since August this year, were a world first. This advantage positions China as a significant influencer in shaping global standards for AI regulation.


AI Safety Summit at Bletchley Park. The UK’s much-anticipated summit resulted in a landmark commitment among leading AI countries and companies to test frontier AI models before public release.

The Bletchley Declaration identifies the dangers of current AI, including bias, threats to privacy, and deceptive content generation. While addressing these immediate concerns, the focus shifted to frontier AI – advanced models that exceed current capabilities – and their potential for serious harm. Signatories include Australia, Canada, China, France, Germany, India, Korea, Singapore, the UK, and the USA for a total of 28 countries plus the EU.


Governments will now play a more active role in testing AI models. The AI Safety Institute, a new global hub established in the UK, will collaborate with leading AI institutions to assess the safety of emerging AI technologies before and after their public release. This marks a significant departure from the traditional model, where AI companies were solely responsible for ensuring the safety of their models.

The summit resulted in an agreement to form an international advisory panel on AI risk, inspired by the Intergovernmental Panel on Climate Change (IPCC). Each signatory country will nominate a representative to support a larger group of leading AI academics, producing State of the Science reports. This collaborative approach aims to foster international consensus on AI risk.

UN’s High-Level Advisory Body on AI. The UN has taken a unique approach by launching a High-Level Advisory Body on AI, comprising 39 members. Led by UN Tech Envoy Amandeep Singh Gill, the body will publish its first recommendations by the end of this year, with final recommendations expected next year. These recommendations will be discussed during the UN’s Summit of the Future in September 2024.

Unlike previous initiatives that introduced new principles, the UN’s advisory body focuses on assessing existing governance initiatives worldwide, identifying gaps, and proposing solutions. The tech envoy envisions the UN as the platform for governments to discuss and refine AI governance frameworks. 

OECD’s updated AI definition. The OECD has officially revised its definition of AI, which now reads: ‘An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that [can] influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.’ It is anticipated that this definition will be incorporated into the EU’s upcoming AI regulation.


Misinformation crowding out the truth in the Middle East

It is said that a lie can travel halfway around the world while the truth is still putting on its shoes. It is also said it was Mark Twain who coined it – which, ironically, is untrue.

Misinformation is as old as humanity and decades old in its current recognisable form, but social media has amplified its speed and scale. An MIT report from 2018 found that lies spread six times faster than the truth – on Twitter, that is. Different platforms amplify misinformation to different degrees, depending on the virality mechanisms each has in place.

Yet all social media platforms have struggled with misinformation in recent days. As people grapple with the violence unfolding in Israel and Gaza, platforms have become inundated with graphic images and videos of the conflict – and with images and videos that have nothing to do with it.

What’s happening? Miscaptioned imagery, altered documents, and old videos taken out of context are circulating online. This makes it hard for anyone looking for information about the conflict to parse falsehood from truth.

Two tiles form FA and then bifurcate to two possible endings: FACT and FAKE.

Shaping perceptions. Misleading claims are not confined to the conflict zone; they also impact global perceptions and contribute to the polarisation of opinions. Individuals, influenced by biases and emotions, take sides based on information that often lacks accuracy or context. 

False narratives on platforms like X (formerly known as Twitter) can influence political agendas, with instances of fake memos circulating about military aid and allegations of fund transfers. Even supposedly reliable verified accounts contribute significantly to the dissemination of misinformation.

What tech companies are doing. Meta has established a special operations centre staffed with experts, including fluent Hebrew and Arabic speakers. It is working with fact-checkers, using their ratings to downrank false content in the feed and reduce its visibility. TikTok’s measures are somewhat similar: The company established a command centre for its safety team, added moderators proficient in Arabic and Hebrew, and enhanced automated detection systems. X removed hundreds of Hamas-linked accounts and took down or flagged thousands of pieces of content. Google and Apple reportedly disabled live traffic data on their online maps for Israel and Gaza. Social messaging platform Telegram blocked Hamas channels on Android due to violations of Google’s app store guidelines.
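How fact-checkers’ ratings translate into reduced visibility is proprietary, but the general mechanism is simple to sketch: a post’s ranking score is discounted by a penalty tied to its fact-check verdict. The labels and numbers below are entirely invented for illustration – real feed-ranking systems are vastly more complex.

```python
# Illustrative penalty table: a 'false' verdict slashes a post's ranking score.
FACT_CHECK_PENALTY = {
    None: 1.0,            # not reviewed by fact-checkers
    "true": 1.0,
    "partly_false": 0.5,
    "false": 0.1,         # heavily downranked, but not removed
}

def ranked_feed(posts: list[dict]) -> list[dict]:
    """Sort posts by engagement score, discounted by fact-check verdicts."""
    def score(post: dict) -> float:
        return post["engagement"] * FACT_CHECK_PENALTY[post.get("verdict")]
    return sorted(posts, key=score, reverse=True)

feed = ranked_feed([
    {"id": 1, "engagement": 90, "verdict": "false"},
    {"id": 2, "engagement": 40, "verdict": None},
    {"id": 3, "engagement": 55, "verdict": "true"},
])
print([p["id"] for p in feed])  # [3, 2, 1] – the viral-but-false post sinks
```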

The EU reacts. The EU ordered X, Alphabet, Meta, and TikTok to remove fake content. European Commissioner Thierry Breton reminded them of their obligations under the new Digital Services Act (DSA), giving X, Meta, and TikTok 24 hours to respond. X confirmed removing Hamas-linked accounts, but the EU sent a formal request for information, marking the beginning of an investigation into compliance with the DSA.

Complicating matters. Earlier this year, however, Meta, Amazon, Alphabet, and Twitter laid off many of the team members focusing on misinformation, as part of a post-pandemic restructuring aimed at improving financial efficiency.

The situation underscores the need for robust measures, including effective fact-checking, regulatory oversight, and platform accountability, to mitigate the impact of misinformation on public perception and global discourse.


IGF 2023

The Internet Governance Forum (IGF) 2023 addressed pressing issues amid global tensions, including the Middle East conflict. With a record-breaking 300 sessions, 15 days of video content, and 1,240 speakers, debates covered topics from the Global Digital Compact (GDC) and AI policy to data governance and narrowing the digital divide.

The following ten questions are derived from detailed reports from hundreds of workshops and sessions at IGF 2023.


1. How can AI be governed? Sessions explored national and international AI governance options, emphasising transparency and questioning the regulation of AI applications or capabilities.

2. What will be the future of the IGF in the context of the Global Digital Compact (GDC) and the WSIS+20 Review Process? The future of the IGF is closely tied to the GDC and the WSIS+20 Review Process. The 2025 review may decide the IGF’s fate, and negotiations on the GDC, expected in 2024, will also impact the IGF’s trajectory.

3. How can we use the IGF’s wealth of data for an AI-supported, human-centred future? The IGF’s 18 years of data is considered a public good. Discussions explored using AI to gain insights, enhance multistakeholder participation, and visually represent discussions through knowledge graphs.

4. How can risks of internet fragmentation be mitigated? Multidimensional approaches and inclusive dialogue were proposed to prevent unintended consequences.

5. What challenges arise from the negotiations on the UN treaty on cybercrime? Concerns were raised about the scope, human rights safeguards, undefined cybercrime definitions, and the role of the private sector in the UN treaty on cybercrime negotiations. Clarity, separation of cyber-dependent and cyber-enabled crimes, and international cooperation were emphasised.

6. Will the new global tax rules be as effective as everyone hopes for? The IGF discussed the potential effectiveness of the OECD/G20’s two-pillar solution for global tax rules. Concerns lingered about profit-shifting, tax havens, and power imbalances between Global North and South nations.

7. How can misinformation and protection of digital communication be addressed during times of war? Collaborative efforts between humanitarian organisations, tech companies, and international bodies were deemed essential.

8. How can data governance be strengthened? The discussion emphasised the importance of organised and transparent data governance, including clear standards, an enabling environment, and public-private partnerships. The Data Free Flow with Trust (DFFT) concept, introduced by Japan, was discussed as a framework to facilitate global data flows while ensuring security and privacy.

9. How can the digital divide be bridged? The digital divide requires comprehensive strategies beyond connectivity, involving regional initiatives, the deployment of LEO satellites, and digital literacy efforts. Public-private partnerships, especially with RIRs, were highlighted as crucial for fostering trust and collaboration.

10. How do digital technologies impact the environment? The IGF explored the environmental impact of digital technologies, highlighting the potential to cut emissions by 20% by 2050. Immediate actions, collaborative efforts, awareness campaigns, and sustainable policies were advocated to minimise the environmental footprint of digitalisation.

Read more in our IGF 2023 Final report.


Upcoming: UNCTAD eWeek 2023

Organised by the UN Conference on Trade and Development (UNCTAD) in collaboration with eTrade for all partners, the UNCTAD eWeek 2023 is scheduled from 4 to 8 December at the prestigious International Conference Center Geneva (CICG). The central theme of this transformative event is ‘Shaping the future of the digital economy’.

Ministers, senior government officials, CEOs, international organisations, academia, and civil society will convene to address pivotal questions about the future of the digital economy: What does the future we want for the digital economy look like? What is required to make that future come true? How can digital partnerships and enhanced cooperation contribute to more inclusive and sustainable outcomes?

Over the week, participants will join more than 150 sessions addressing themes including platform governance, the impact of AI on the digital economy, eco-friendly digital practices, the empowerment of women through digital entrepreneurship, and the acceleration of digital readiness in developing countries. 

The event will explore key policy areas for building inclusive and sustainable digitalisation at various levels, focusing on innovation, scalable good practices, concrete actions and actionable steps. 

For youth aged 15–24, there’s a dedicated online consultation to ensure their voices are heard in shaping the digital future for all.

Stay up-to-date with GIP reporting!

The GIP will be actively involved in eWeek 2023 by providing reports from the event. Our human experts will be joined by DiploAI, which will generate reports from all eWeek sessions. Bookmark our dedicated eWeek 2023 page on the Digital Watch Observatory or download the app to follow the reports.

Diplo, the organisation behind the GIP, will also co-organise two sessions: ‘Scenario of the Future with the Youth’ with UNCTAD and Friedrich-Ebert-Stiftung (FES), and ‘Digital Economy Agreements and the Future of Digital Trade Rulemaking’ with CUTS International. Diplo will additionally hold its own session, titled ‘Bottom-up AI and the Right to be Humanly Imperfect’. For more details, visit our Diplo @ UNCTAD eWeek page.



DW Weekly #136 – 13 November 2023


Dear all,

The ongoing Middle East conflict has made us realise how dangerous and divisive hate speech can be. With illegal content on the rise, governments are putting on pressure and launching new initiatives to help curb the spread. But can these initiatives truly succeed, or are they just another drop in the ocean?

In other news, policymakers are working towards semantic alignment in AI rules, while tech companies are offering indemnity for legal expenses related to copyright infringement claims originating from AI technology.

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

Governments ramp up pressure on tech companies to tackle fake news and hate speech

Rarely have we witnessed a week quite like the last one, where so much scrutiny was levelled at social media platforms over the rampant spread of disinformation and hate speech. You can tell that leaders are worried about AI’s misuse by terrorists and violent extremists for propaganda, recruitment, and the orchestration of attacks. The fact that so many elections are around the corner raises the stakes even more.

Christchurch Call. In a week dominated by high-stakes discussions, global leaders, including French President Emmanuel Macron and former New Zealand leader Jacinda Ardern, gathered in Paris for the annual Christchurch Call meeting. The focal point was a more concerted effort to combat online extremism and hate speech, a battle that has gained momentum since the far-right shooting at a New Zealand mosque in 2019.

Moderation mismatch. In Paris, Macron seized the opportunity to criticise social media giants. In an interview with the BBC, he slammed Meta and Google for what he termed a failure to moderate terrorist content online. The revelation that Elon Musk’s X platform had only 2,294 content moderators, significantly fewer than its counterparts, fuelled concerns about the platform’s ability to moderate effectively.

UNESCO’s battle cry. Meanwhile, UNESCO’s Director-General, Audrey Azoulay, sounded an alarm about the surge in online disinformation and hate speech, labelling it a ‘major threat to stability and social cohesion’. UNESCO unveiled an action plan (in the form of guidelines), backed by global consultations and a public opinion survey, emphasising the urgent need for coordinated action against this digital scourge. But while the plan is ambitious, its success hinges on adherence to non-binding recommendations. 

Political ads. On another front, EU co-legislators reached a deal on the transparency and targeting of political advertising. Stricter rules will now prohibit targeted ad-delivery techniques involving the processing of personal data in political communications. A public repository for all online political advertising in the EU is set to be managed by an EU Commission-established authority. ‘The new rules will make it harder for foreign actors to spread disinformation and interfere in our free and democratic processes. We also secured a favourable environment for transnational campaigning in time for the next European Parliament elections,’ lead MEP Sandro Gozi said. In the EU’s case, success hinges not on adherence, but on effective enforcement. 

Use of AI. Simultaneously, Meta, the parent company of Facebook and Instagram, published a new policy in response to the growing impact of AI on political advertising (after it was disclosed by the press). Starting next year, Meta will require organisations placing political ads to disclose when they use AI software to generate part or all of those ads. Meta will also prohibit advertisers from using AI tools built into Meta’s ad platform to generate ads under a variety of categories, including housing, credit, financial services, and employment. Although we’ve come to look at self-regulation with mixed feelings, the new policy – which will apply globally – is ‘one of the industry’s most significant AI policy choices to come to light to date’, to quote Reuters.

Crack-down in India. Even India joined the fray, with its Ministry of Electronics and Information Technology issuing a stern statement on the handling of misinformation. Significant social media platforms with over 5 million users must comply with strict timeframes for identifying and deleting false content.

As policymakers and tech giants grapple with the surge of online extremism and disinformation, it’s clear that much more needs to happen. The scale of the problem demands a tectonic change, one that goes beyond incremental measures. The much-needed epiphany could lie in the shared understanding and acknowledgement of the severity of the problem. While it might not bring about an instant solution, collective recognition of the problem could serve as a catalyst for a significant breakthrough.


Digital policy roundup (6–13 November)

// AI //

OECD updates its definition of AI system

The OECD’s council has agreed to a new definition of AI system, which reads: ‘An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that [can] influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.’

Compared with the 2019 version, the new definition adds content to the list of possible outputs – a nod to generative AI systems.
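To see what the new output category captures, consider a deliberately minimal generative system: it infers, from the input text it receives, how to generate new content, rather than a prediction, recommendation, or decision. The sketch is ours, not the OECD’s – a toy bigram generator standing in for vastly more capable models.

```python
import random
from collections import defaultdict

class TinyTextGenerator:
    """A machine-based system that infers, from the input it receives,
    how to generate *content* – the output category added in 2023."""

    def __init__(self) -> None:
        self.next_words: defaultdict[str, list[str]] = defaultdict(list)

    def train(self, text: str) -> None:
        words = text.split()
        for current, following in zip(words, words[1:]):
            self.next_words[current].append(following)

    def generate(self, start: str, length: int = 6) -> str:
        output = [start]
        for _ in range(length):
            candidates = self.next_words.get(output[-1])
            if not candidates:
                break
            output.append(random.choice(candidates))
        return " ".join(output)

generator = TinyTextGenerator()
generator.train("the cat sat on the mat and the cat slept on the sofa")
print(generator.generate("the"))  # e.g. 'the cat sat on the mat and'
```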

Why is it relevant? First, the EU, which aligned its AI Act with the OECD’s 2019 definition, is expected to integrate the revised definition into its draft law, presently at trilogue stage. As yet, no documents reflecting the new definition have been published. Second, the EU’s push towards semantic alignment extends further. The EU and USA are currently working on a common taxonomy, or classification system, for key concepts, as part of the EU-US Trade and Technology Council’s work. The council is seeking public input on the draft taxonomy and other work areas until 24 November.


Hollywood actors and studios reach agreement over use of AI 

Hollywood actors have finally reached a (tentative) deal with studios, bringing an end to a months-old strike. One of the disagreements was on the use of AI: Under the new deal, producers will be required to get consent and compensate actors for the creation and use of digital replicas of actors, whether created on set or licensed for use. 

The film and television industry faced significant disruptions due to a strike that began in May. The underlying rationale was this: While it’s impossible to halt the progress of AI, actors and writers could fight for more equitable compensation and fairer terms. Hollywood’s film and television writers reached an agreement in October, but negotiations between studios and actors were at an impasse until last week’s deal.

Why is it relevant? First, it’s a prime example of how AI has been disrupting creative industries and drawing concerns from actors and writers, despite earlier scepticism. Second, as The Economist argues, AI could make a handful of actors omnipresent – and, eventually, boring for audiences. But we think fans just want a good storyline, regardless of whether the well-loved artist is merely a product of AI.


OpenAI’s ChatGPT hit by DDoS attack

OpenAI was hit by a cyberattack last week, resulting in a major outage of ChatGPT and its API. The attack was suspected to be a distributed denial-of-service (DDoS) attack, which is meant to disrupt access to an online service by flooding it with more traffic than it can handle. When the outage first happened, OpenAI reported that the problem had been identified and a fix deployed. But the outage continued the next day, with the company confirming that it was ‘dealing with periodic outages due to an abnormal traffic pattern reflective of a DDoS attack’.
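For readers wondering what defending against a traffic flood involves at its most basic, the textbook building block is a sliding-window rate limiter: count requests per source and refuse the excess. The sketch below is a toy – real mitigation happens upstream at network and CDN level, and the ‘distributed’ in DDoS (traffic from countless sources at once) is precisely what makes a simple per-source limiter insufficient.

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per client per `window` seconds."""

    def __init__(self, limit: int = 100, window: float = 60.0) -> None:
        self.limit, self.window = limit, window
        self.hits: defaultdict[str, deque] = defaultdict(deque)

    def allow(self, client: str, now: float | None = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.hits[client]
        while q and now - q[0] > self.window:  # drop timestamps outside the window
            q.popleft()
        if len(q) >= self.limit:
            return False                        # over budget: reject (e.g. HTTP 429)
        q.append(now)
        return True

limiter = SlidingWindowLimiter(limit=3, window=1.0)
burst = [limiter.allow("203.0.113.7", now=t) for t in (0.0, 0.1, 0.2, 0.3, 1.5)]
print(burst)  # [True, True, True, False, True] – the fourth burst request is refused
```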

Responsible. Anonymous Sudan claimed responsibility for the attack, which the group said was in response to OpenAI’s collaboration with Israel and the OpenAI CEO’s willingness to invest more in the country.

Screenshot of a message from Anonymous Sudan entitled ‘Some reasons why we targeted OpenAI and ChatGPT’ lists four reasons: (1) OpenAI’s cooperation with the state of Israel, (2) use of AI for weapons and oppression, (3) it is an American company, and (4) it has a bias toward Israel (summary of the list).



// COMPETITION //

G7 ready to tackle AI-driven competition risks; more discussion on genAI needed

Competition authorities from G7 countries believe they already have the legal authority to address AI-driven competitive harm – authority that could be further complemented by AI-specific policies – according to a communiqué published at the end of last week’s summit in Tokyo.

When it comes to emerging technologies such as generative AI, however, the G7 competition authorities say that ‘further discussions among us are needed on competition and contestability issues raised by those technologies and how current and new tools can address these adequately.’

Why is it relevant? Unlike other areas of AI governance, competition issues are not a matter of which new laws to enact, but rather how to interpret existing legal frameworks. How could this be done? Competition authorities have suggested that government departments, authorities, and regulators should (a) give proper consideration to the role of effective competition alongside other issues and (b) collaborate closely with each other to tackle systemic problems consistently.


// COPYRIGHT //

OpenAI launches Copyright Shield to cover customers’ legal fees for copyright infringement claims

Sam Altman, the CEO of OpenAI, has announced that the company will cover the legal expenses of business customers faced with copyright infringement claims stemming from using OpenAI’s AI technology. The decision responds to the escalating concern that industry-wide AI technology is being trained on protected content without the authors’ consent. 

This initiative, called Copyright Shield, was announced together with a host of other improvements to ChatGPT. Here’s the announcement: ‘OpenAI is committed to protecting our customers with built-in copyright safeguards in our systems. Today, we’re going one step further and introducing Copyright Shield – we will now step in and defend our customers, and pay the costs incurred, if you face legal claims around copyright infringement. This applies to generally available features of ChatGPT Enterprise and our developer platform.’

Why is it relevant? Covering customers’ legal costs has become a trend: In September, Microsoft announced legal protection for users of its Copilot AI services facing copyright infringement lawsuits, and Google followed suit a month later, adding a second layer of indemnity to also cover AI-generated output. Details of how these indemnities will be implemented are not yet entirely clear.


Screenshot of Meta’s notice to European users: Laws are changing in the region, so Meta is introducing a choice – subscribe to use Instagram without ads, starting at EUR 12.99/month, with no user information used for ads, or continue using the products for free with personalised ads, agreeing to Meta using account information and cookies for advertising.

// PRIVACY //

Meta tells Europeans: Pay or Okay

Meta has rolled out a new policy for European users: Allow Facebook and Instagram to show personalised ads based on user data, or pay a subscription fee to remove ads. But there’s a catch – even if users pay to remove ads, the company will still gather their data; it just won’t use that data to show them ads. Privacy experts saw this coming, and a legal fight is definitely on the horizon.


// TAXATION //

Apple suffers setback over sweetheart tax case involving Ireland

The Apple-Ireland state aid case, which has been ongoing for almost a decade, is set to be decided by the EU’s Court of Justice, and things don’t look too good for Apple. The current chapter of the case involves a decision by the European Commission, which found that Apple owed Ireland EUR 13 billion (USD 13.8 billion) in unpaid taxes over an alleged tax arrangement granted to Apple, which amounted to illegal state aid. In 2020, the General Court annulled that decision, and the European Commission appealed.

Last week, the Court of Justice’s advocate general said the General Court made legal errors, and the annulment should be set aside. Advocate General Giovanni Pitruzzella advises the court to refer the case back to the lower court for a new decision.

Why is it relevant? First, the new opinion confirms the initial reaction of the European Commission, which at the time had said that the General Court made legal errors. Second, although the advocate general’s opinion is non-binding, it is usually given considerable weight by the court. 

Case details: Commission v Ireland and Others, C-465/20 P


The week ahead (13–20 November)

13–16 November: Cape Town, South Africa, will host the Africa Tech Festival, a four-day event that is expected to bring together around 12,000 participants from the policy and technology sectors. There are three tracks: AfricaCom is dedicated to telecoms, connectivity, and digital infrastructure; AfricaTech explores innovative and disruptive technologies; AfricaIgnite is dedicated to entrepreneurs.

15 November: The much-anticipated meeting between US President Joe Biden and Chinese President Xi Jinping will take place on the sidelines of the Asia-Pacific Economic Cooperation (APEC) leaders’ meeting in San Francisco. Both sides will be looking for a way to smooth relations, not least on technology issues.

20 November–15 December: The ITU’s World Radiocommunication Conference, taking place in Dubai, UAE, will review the international treaty governing the use of the radio-frequency spectrum and the geostationary-satellite and non-geostationary-satellite orbits. Download the agenda and draft resolutions.


#ReadingCorner

The scourge of disinformation and hate speech during elections

There is no doubt that the use of social media as a daily source of information has grown a lot over the past 15 years. But did you know that it has now surpassed print media, radio, and TV? This leaves citizens particularly exposed to disinformation and hate speech, which are highly prevalent on social media. The Ipsos UNESCO survey on the impact of online disinformation and hate speech sheds light on the growing problem, especially during elections.


Screenshot of a Telegeography submarine cable map

One world, two networks? Not yet…

One of the biggest fears among experts is that the tensions between the USA and China could fragment the internet. Telegeography research director Alan Mauldin assesses the impact on the submarine cable industry. If you’re into slide decks, download Mauldin’s presentation.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior Editor Digital Policy, DiploFoundation

DW Weekly #135 – 06 November 2023


Dear readers,

Last week’s AI Safety Summit, hosted by the UK government, was on everyone’s radar. Despite coming just days after the US President’s Executive Order on AI and the G7’s guiding principles on AI, the summit served to initiate a global process on establishing AI safety standards. The week saw a flurry of other AI policy developments, making it one of the busiest weeks of the year for AI.

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

Landmark agreement on AI safety-by-design reached by UK, USA, EU, and others

The UK has secured a landmark commitment with leading AI countries and companies to test frontier AI models before releasing them for public use. That’s just one of the initiatives agreed on during last week’s AI Safety Summit, hosted by the UK at Bletchley Park.

Delicate timing. The summit came just after US President Joe Biden announced his executive order on AI, the G7 released its guiding principles, and China’s President Xi Jinping announced the Global AI Governance Initiative. With such a diverse line-up of developments, there was a risk that the UK’s summit would be outshone and its initiatives overshadowed. But judging by how the UK successfully avoided turning the summit into a marketplace (at least publicly), it managed to launch not just a product but a process.

Signing the Bletchley Declaration. The group of countries signing the communiqué on Day 1 of the summit included Australia, Canada, China, France, Germany, India, Korea, Singapore, the UK, and the USA – 28 countries in total, plus the EU.

Yes, China too. We’ve got to hand it to Prime Minister Rishi Sunak for bringing everyone around the table, including China: ‘Some said, we shouldn’t even invite China… others that we could never get an agreement with them. Both were wrong. A serious strategy for AI safety has to begin with engaging all the world’s leading AI powers.’ And he’s right. On his part, Wu Zhaohui, China’s vice minister of science and technology, told the opening session that Beijing was ready to increase collaboration on AI safety. ‘Countries regardless of their size and scale have equal rights to develop and use AI’, he added, possibly referring to China’s latest efforts to help developing nations build their AI capacities.

Like-minded countries testing AI models. The countries agreeing on the plan to test frontier AI models were actually a smaller group of like-minded countries – Australia, Canada, the EU, France, Germany, Italy, Japan, Korea, Singapore, the USA, and the UK – and ten leading AI companies – Amazon Web Services, Anthropic, Google, Google DeepMind, Inflection AI, Meta, Microsoft, Mistral AI, OpenAI, and xAI.

No China this time. China (and others) were not part of this smaller group, even though China’s representative reportedly attended Day 2. Why China did not sign the AI testing plan remains a mystery (we do have a theory or two, though).

AI Safety Summit
UK Prime Minister Rishi Sunak addressing the AI Safety Summit (1–2 November 2023)

Outcome 1: Shared consensus on AI risks

Current risks. For starters, countries agreed on the dangers of current AI, as outlined in the Bletchley Declaration, which they signed on Day 1 of the summit. Those include bias, threats to privacy and data protection, and risks arising from the ability to generate deceptive content. 

A more significant focus: Frontier AI. Though current risks need to be mitigated, the focus was predominantly on frontier AI – that is, advanced models that exceed the capabilities of what we’re seeing today – and their ‘potential for serious, even catastrophic, harm’. It’s not difficult to see why governments have come to fear what’s around the corner: There have been plenty of stark warnings about tomorrow’s superintelligent systems and the risk of extinction. But as long as governments don’t let the dangers of tomorrow divert them from addressing immediate concerns, they’re on track.

Outcome 2: Governments to test AI models 

Shared responsibility. Gone are the days when AI companies were solely responsible for ensuring the safety of their models. Or as Sunak said on Day 2, ‘we shouldn’t rely on them to mark their own homework’. Governments (the like-minded ones) will soon be able to see for themselves whether next-generation AI models are safe enough to be released to the public, or whether they pose threats to critical national security.

How it will work. A new global hub, called the AI Safety Institute (an evolution of the existing Frontier AI Taskforce), will be established in the UK, and will be tasked with testing the safety of emerging AI technologies before and after their public release. It will work closely with the UK’s Alan Turing Institute and the USA’s AI Safety Institute, among others.

Outcome 3: An IPCC for AI 

Panel of experts. A third major highlight of the summit is that countries agreed to form an international advisory panel on AI risk. Prime Minister Sunak said the panel was ‘inspired by how the Intergovernmental Panel on Climate Change (IPCC) was set up to reach international science consensus.’

How it will work. Each country that signed the Bletchley Declaration will nominate a representative to support a larger group of leading AI academics, tasked with producing State of the Science reports. Turing Award winner Yoshua Bengio will lead the first report as chair of the drafting group. The chair’s secretariat will be housed within the AI Safety Institute.

So what’s next? As far as gatherings go, it looks like the UK’s AI Safety Summit is the first of many. The second summit will be held online, co-hosted by Korea, in six months; an in-person meeting in France will follow a year later. As for the first State of the Science report, we can expect it to be published ahead of the Korea summit.


Digital policy roundup (30 October–6 November)

// AI //

Big Tech accused of exaggerating AI risks to eliminate competition

In today’s AI landscape, a few dominant Big Tech companies exist alongside a vibrant open-source community that is driving significant advances in AI. According to Google Brain founder Andrew Ng, this community poses a serious competitive challenge to Big Tech, leading the giants to exaggerate the risks of AI in the hope of triggering strict regulation that would stymie the open-source community.

‘It’s been a weapon for lobbyists to argue for legislation that would be very damaging to the open-source community,’ Ng said.

Why is it relevant? First, this statement echoes the cautionary note in a leaked internal Google document from last May, which warned that open-source AI would outcompete Google and OpenAI. Second, given Big Tech’s control over data and knowledge, such exaggeration hampers the open-source community’s ability to address AI governance issues.

UN advisory body to tackle gaps in AI governance initiatives  

The UN’s newly formed High-Level Advisory Body on AI, comprising 39 members, will assess governance initiatives worldwide, identify existing gaps, and find out how to bridge them, according to UN Tech Envoy Amandeep Singh Gill. He said the UN provides ‘the avenue’ for governments to discuss AI governance frameworks.

The advisory body will publish its first recommendations by the end of this year, and final recommendations next year. They will be discussed during the UN’s Summit of the Future, to be held in September next year.

Why is it relevant? It appears that the advisory body will not release yet another set of AI principles; instead, it will focus on closing gaps in existing governance initiatives.


Tweet from @netblocks says: Confirmed: Live network data show a new collapse in connectivity in the #Gaza Strip with high impact to Paltel, the last remaining major operator serving the territory; the incident will be experienced as the third telecommunications blackout since the start of the conflict. A line graph shows connectivity declining in percentages from October 2–30 in 2023.

// MIDDLE EAST //

Third internet blackout in Gaza

The Gaza Strip was disconnected from internet, mobile, and telephone networks over the weekend – the third time since the start of the conflict. NetBlocks, a global internet monitoring service, said: ‘We’ve tracked the gradual decline of connectivity, which has corresponded to a few different factors: power cuts, airstrikes, as well as some amount of connectivity decline due to population movement.’




// DATA PROTECTION //

Facebook and Instagram banned from running behavioural advertising in EU

The European data regulator has ordered the Irish data regulator to impose a permanent ban on Meta’s behavioural advertising across Facebook and Instagram. According to the EU’s GDPR, companies need to have a good reason for collecting and using someone’s personal information; Meta had none.

Ireland is where Meta’s headquarters are located. The ban imposed on the company, which owns Facebook and Instagram, covers all EU countries and those in the European Economic Area.

Why is it relevant? There are six different reasons, or legal bases, that a company can use to process data. One of them, based on consent (meaning that a person has given their clear and specific agreement for their information to be used), is Meta’s least favourite, as the chance of users refusing consent is high. Yet, it may soon be the only basis Meta can actually use – a development which will surely make Austria-based NGO noyb quite happy.


The week ahead (6–13 November)

7–8 November: The 2023 Conference on International Cyber Security takes place in The Hague, the Netherlands, under the theme ‘War and Peace. Conflict, Behaviour and Diplomacy in Cyberspace’.

8 November: The International AI Summit, organised by ForumEurope and EuroNews in Brussels and online, will ask whether a global approach to AI regulation is possible.

10–11 November: The annual Paris Peace Forum will tackle trust and safety in the digital world, among other topics.

13–16 November: The Web Summit, dubbed Europe’s biggest tech conference, meets in Lisbon.


#ReadingCorner

A new chapter in IPR: The age of AI-generated content

Intellectual property authorities worldwide face a major challenge: How to approach inventions created not by human ingenuity, but by AI. This issue has sparked significant debate within the intellectual property community, and many lawsuits. Read part one of a three-part series that delves into the impact of AI on intellectual property rights.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior Editor Digital Policy, DiploFoundation