DW Weekly #168 – 12 July 2024


Dear readers, 

Welcome to another issue of the Digital Watch weekly! 

We will spotlight another round of the China vs the USA tech competition. Last week, WIPO published a report on the landscape of patents in generative AI, or GenAI (think AI that can generate images, videos, text, music, code, etc.). One of the main takeaways from this report, which has been all over the headlines, is that China has a substantial lead in AI patents, with six times more GenAI inventions than the second country on the list, the USA. The Republic of Korea, Japan, and India round out the top five.

Image titled 'Number of GenAI patent applications': a stepped award stage shows flags representing the top five contenders: (1) China, (2) the USA, (3) South Korea, (4) Japan, and (5) India.

This week, a survey of business decision-makers has shown that 83% of Chinese organisations are using GenAI.

Does all this recent data show that China is winning the AI race? Not quite. While the country is ahead in some areas, the USA is a strong contender. The USA produced more notable machine-learning models in 2023 (61 compared to China’s 15) and leads in AI foundation models, variational autoencoders (VAEs), and private AI investment. Plus, we should remember that the number of patents doesn’t correlate with their quality or impact – the mere registration of patents doesn’t automatically translate into apps or services being put on the market.

(Sidebar: foundation models are machine learning models trained on broad and diverse data that can be adapted to a wide range of tasks across different domains; they serve as a base for more specialised models dedicated to specific tasks or areas. A variational autoencoder (VAE) is a generative AI algorithm that uses deep learning to generate new content based on the structure of the input data.)

The USA nudges ahead with 24% of organisations fully implementing GenAI compared to 19% in China. In addition, since 2018, the USA has been embroiled in a chip war with China, blacklisting Chinese chipmakers and tightening controls on the export of its most advanced chip-related technologies. This is particularly important because there are almost no semiconductors without some kind of US-origin technology in their design or production processes, making things tricky for Chinese chipmakers and for industries reliant on such technology, including AI.

Stepping away from the US-China competition, it’s important to acknowledge that there are other actors out there to watch. One is India, which had the highest growth rates in GenAI patent family publications (56% per year). 

Interestingly, another survey shows that most (polled) Americans view winning the AI race against China as secondary to a cautious approach to AI development to prevent its misuse by adversaries. Only 23% believe the USA should rapidly build powerful AI to outpace China and gain a decisive advantage.

While the USA and China are grappling with each other, and others are rushing to catch up, the contender closest to the crown right now is AI itself.

The discussions at the 8th substantive session of the Open-ended Working Group (OEWG) on security of and in the use of information and communications technologies 2021–2025 are still ongoing, today being the last day. AI-generated reports and transcripts are available on our dedicated web page, with a human-written analysis planned for next week.

Andrijana and the Digital Watch team


Highlights from the week of 5-12 July 2024


The decision comes amid regulatory scrutiny from antitrust watchdogs in Europe, the UK, and the US concerning Microsoft’s potential influence over OpenAI.


President Joe Biden has formed a team of experts to create standards for AI training and deployment across industries.


The new AI Development Strategy 2024-2030 builds on this solid foundation, aiming to nurture a vibrant AI ecosystem in Serbia.


The European Commission’s AI Act will classify AI-based cybersecurity and emergency services in connected devices as high-risk, requiring extensive testing and security measures, significantly impacting various sectors like medical devices…


OpenAI announced it would block Chinese users from accessing its services on 9 July, amid rising US-China tensions, affecting developers who relied on OpenAI tools.


The semiconductor industry, now leading the S&P 500, is experiencing explosive demand driven by AI advancements.


Based in Silicon Valley, the consortium will focus on developing advanced back-end technologies for semiconductor packaging, aiming to be fully operational next year.


The initiative comes amid rising EU-China tensions, exemplified by tariffs of up to 37.6% on Chinese electric vehicles.


US authorities have disrupted a sophisticated Russian disinformation campaign, Meliorator, which uses AI to create fake social media personas and spread false information in the US and internationally.


China’s embassy in Australia dismissed the allegations as ‘political manoeuvring’.


The move, unveiled during the 2024 NATO Summit in Washington, DC, marks NATO’s 75th anniversary and addresses the increasing cyber threats, especially following the Russian invasion of Ukraine in 2022.


Hackers leaked nearly 39,000 print-at-home tickets for major events in an extortion scheme against Ticketmaster.

New Zealand

The shift aims to ensure long-term success and maintain a multistakeholder approach involving governments, tech companies, and civil society.

Taiwan

Despite this slowdown, Taiwan’s central bank plans to conduct public hearings next year to inform the public about digital currency.


Amazon is under pressure from the European Commission to enhance transparency and ensure compliance with new regulations to tackle illegal and harmful content online.

Sri Lanka

The legislative change introduces three new types of licences for satellite internet service providers, one of which allows Starlink to apply as a licensed service provider pending regulatory approval.



ICYMI

The Detroit Police Department has agreed to new rules limiting how it can use facial recognition technology, prompted by a lawsuit over a wrongful arrest in 2020. The EU and France flex their antitrust powers against Microsoft, Apple and Nvidia. Watch for details!


Upcoming


The UN OEWG 2021–2025 8th substantive session will focus on adopting the group’s annual progress report (APR), taking stock of the group’s discussions over the previous year and charting the way forward for the group. The GIP follows the event; just-in-time reports and transcripts are available on our dedicated web page.


In the July issue of our monthly newsletter, we look at how the UN can navigate the tightrope between innovation and impartiality when integrating AI in its operations, explore how AI chatbots master language using principles rooted in the linguistic theories of Ferdinand de Saussure, look at recent governments’ actions on digital antitrust oversight, and explain how social media giants won in a free speech showdown at the US Supreme Court.

Digital Watch newsletter – Issue 91 – July 2024

Front page of the Newsletter

Snapshot: The developments that made waves

AI governance

The UN General Assembly has adopted a non-binding resolution on AI capacity building, led by China, to enhance developing countries’ AI capabilities through international cooperation. It also calls for support from international organisations and financial institutions. African ICT and communications ministers have endorsed the Continental AI Strategy and the African Digital Compact to boost the continent’s digital transformation. The G7 Leaders’ Communiqué emphasised a coordinated strategy for handling AI’s opportunities and challenges, introducing an action plan for workplace AI adoption and underlining initiatives such as advancing the Hiroshima Process International Code of Conduct, supporting SMEs, and promoting digital inclusion and lifelong learning.

The International Monetary Fund has recommended fiscal policies for governments grappling with the economic impacts of AI, including taxes on excess profits and a carbon levy.

China leads the world in generative AI patent requests, significantly outpacing the USA. At the same time, US tech companies dominate in producing cutting-edge AI systems, according to the World Intellectual Property Organization (WIPO). A European Commission report shows the EU lags behind its 2030 AI targets, with only 11% of enterprises using designated AI technologies, far short of the 75% target. The Japanese Defence Ministry has introduced its first AI policy to enhance defence operations. Brazil is partnering with OpenAI to modernise legal processes, reduce court costs, and improve efficiency in the solicitor general’s office.

Technologies

The USA has introduced draft rules to regulate investments in China, focusing on AI and advanced technology sectors that may pose national security threats. The USA plans to expand sanctions on semiconductor chips and other goods sold to Russia, targeting Chinese third-party sellers. Discussions are ongoing with the Netherlands and Japan to restrict 11 Chinese chipmaking factories and extend equipment export controls. The USA faces a projected shortage of 90,000 semiconductor technicians by 2030, prompting the Biden administration to launch a workforce development program.

The European Commission is seeking industry views on China’s increased production of older-generation computer chips.

China will develop standards for brain-computer interfaces (BCI) through a new technical committee, focusing on data encoding, communication, visualisation, electroencephalogram data collection, and applications in various fields.

Infrastructure

Telecommunications companies from Kazakhstan and Azerbaijan will invest over USD 50 million in laying 370 kilometres of fibre optic cables under the Caspian Sea. Meanwhile, Senegal’s new digital chief announced plans to enhance digital infrastructure, coordinate government programs, foster collaborations, and build on previous achievements to increase the digital economy’s GDP contribution.

Cybersecurity

The UN Security Council held an open debate on cybersecurity, focusing on evolving cyber threats and the need for positive digital advancements.

A recent cyberattack on the cloud storage company Snowflake is shaping up to be one of the largest data breaches ever, impacting hundreds of Snowflake business customers and millions of individual users. Indonesia’s national data centre was hit by a variant of LockBit 3.0 ransomware, disrupting immigration checks and public services. The hackers have since apologised and offered to release the keys to the stolen data. The University Hospital Centre in Zagreb, Croatia, also suffered a cyberattack by LockBit. Despite rising ransomware attacks, a Howden report indicates that global cyber insurance premiums are decreasing as businesses improve their loss mitigation capabilities. Additionally, nearly ten billion unique passwords were leaked in a collection named RockYou2024, heightening risks for users who reuse passwords.

Australia has mandated internet companies to create enforceable codes within six months to prevent children from accessing inappropriate content. New Zealand transitioned the Christchurch Call to Action against online terrorist content into an NGO, now funded by tech companies like Meta and Microsoft.

Digital rights

The EU’s proposed law mandating AI scans of messaging app content to detect child sexual abuse material (CSAM) faces criticism over privacy threats and potential false positives. EU regulators charged Meta with breaching tech rules via a ‘pay or consent’ ad model on Facebook and Instagram, alleging it forced users to consent to data tracking. The US Department of Justice (DOJ) plans a lawsuit against TikTok for alleged children’s privacy violations. Google is accused by European data protection advocacy group NOYB (none of your business) of tracking users without their informed consent through its Privacy Sandbox feature.

Legal

The International Criminal Court is investigating alleged Russian cyberattacks on Ukrainian infrastructure as potential war crimes. In Australia, legal action has been initiated against Medibank for a data breach affecting 9.7 million individuals. ByteDance and TikTok are challenging a US law aiming to ban the app, citing concerns about free speech. Global streaming companies are contesting new Canadian regulations mandating 5% of revenues be used for local news, questioning the legality of the government’s actions.

Internet economy

China’s Ministry of Commerce has introduced draft rules to bolster cross-border e-commerce by promoting the establishment of overseas warehouses and improving data management and export supervision. Nvidia is facing potential charges in France over allegations of anti-competitive behaviour. The first half of 2024 saw a significant surge in cryptocurrency theft, with over USD 1.38 billion stolen by 24 June.

Development

The first part of the Broadband Commission’s annual State of Broadband report ‘Leveraging AI for Universal Connectivity’ explores AI’s impact on e-government, education, healthcare, finance, and environmental management, and its potential to bridge or widen the digital divide. The second part will provide further insights into AI’s development and propose strategies for equitable digital advancement. India will require USB-C as the standard charging port for smartphones and tablets starting in June 2025, aligning with the EU’s efforts to enhance user convenience and reduce electronic waste.

Sociocultural

New York state lawmakers passed a law restricting social media platforms from displaying addictive algorithmic content to users under 18 without parental consent. The European Commission has asked Amazon for details on how it complies with Digital Services Act rules, focusing on transparency in its recommender systems. Google Translate is significantly expanding, adding 110 languages, driven by AI advancements.

THE TALK OF THE TOWN – GENEVA

From 4 to 14 June, the Council of the International Telecommunication Union (ITU) made key decisions on space development, green digital action, and global digital cooperation. The council reviewed the ITU Secretary-General’s report on the implementation of the Space 2030 Agenda, focusing on leveraging space technology for sustainability. Resolutions were drafted to highlight ITU’s role in using digital technologies for sustainability, with a report on current green digital initiatives. ITU will continue engaging with the Global Digital Compact (GDC) to enhance global digital cooperation.

On 14 June, the first UN Virtual Worlds Day showcased technologies like virtual and augmented reality, the metaverse, and spatial computing to advance SDGs. The event included a high-level segment, real-world applications, discussions on policy, and the launch of the Global Initiative on Virtual Worlds – Discovering the CitiVerse, a platform to develop frameworks, raise awareness, share best practices, and test metaverse solutions in cities.


AI@UN: Navigating the tightrope between innovation and impartiality

The UN is not short on risks, but AI adds novel ones for the organisation. Off-the-shelf proprietary AI systems carry the biases of the data and algorithms on which they are developed, and come with limitations and challenges for transparency; reliance on proprietary AI will therefore open inevitable questions about the impartiality of such systems.

Why is impartiality important for the UN? The principle of impartiality is the linchpin of the UN’s credibility, ensuring that policy advice remains objective, grounded in evidence, and sensitive to diverse perspectives. This impartiality will be tested as the UN reacts to the inevitable need to automate reporting, drafting, and other core activities central to its operation. 

Ensuring impartiality would require transparency and explainability of the full AI cycle, from the data on which foundational models are based to assigning weights to different segments of AI systems.

An inclusive approach to AI development is key for upholding the principle of impartiality. ‘We the peoples’, the first three words of the UN Charter, should guide the development of AI at the UN. Contributions of countries, companies, and communities worldwide to AI@UN could bolster the high potential of AI to support the UN’s missions of upholding global peace, advancing development, and protecting human rights.

AI@UN has two main goals:

  • support policy discussions on the sustainable AI transformation of the UN ecosystem
  • inspire the contributions of AI models and agents by member states and other actors
Emblem of the UN in white on a blue disk superimposed on an AI circuit with connectors radiating outwards from the centred disc.

As a starting point, the following guiding principles are proposed for the development and deployment of AI models, modules, and agents at the UN: 

1. Open source: Abiding by the open-source community’s principles, traditions, and practices. Openness and transparency should apply to all phases and aspects of the AI life-cycle, including curating data and knowledge for AI systems, selecting parameters and assigning weights to develop foundational models, vector databases, knowledge graphs, and other segments of AI systems. 

2. Modularity: Developing self-contained modules according to shared standards and parameters. AI@UN should start with AI agents and modules for core UN activities and operations.

3. Public good: Walking the talk of public good by using AI to codify UN knowledge as a public good to be used by countries, communities, and citizens worldwide. By doing so, the UN would inspire the AI-enabled codification of various knowledge sources, including ancient texts and oral culture, as the common heritage of humankind.   

4. Inclusivity: Enabling member states, companies, and academia to contribute, according to their capacities and resources, to the technical, knowledge, and usability aspects of AI@UN.

5. Multilingualism: Representing a wide range of linguistic and cultural traditions. A special focus should be on harvesting the knowledge and wisdom available in oral traditions that are not available in the written corpus of books and publications.

6. Diversity: Ensuring inputs from a wide range of professional, generational, cultural, and religious perspectives. While AI@UN should aim to identify convergences between different views and approaches, diversity should not be suppressed by the lowest-common-denominator approach inherent in AI inference. Diversity should be built in through the transparent traceability of sources behind AI-generated outputs.

7. Accessibility: Adhering to the highest standards for accessibility, in particular for people with disabilities. AI@UN must increase the participation of people with disabilities in UN activities, from meetings to practical projects. Simple solutions and low-bandwidth demand should make the system affordable for all. 

8. Interoperability: Addressing the problem of organisational silos in managing knowledge and data within the UN system. Interoperability should be facilitated by knowledge ontologies and taxonomies, data curation, and shared technical standards.

9. Professionalism: Following the highest industry and ethical standards in planning, coding, and deploying software applications. This will be achieved by testing, evaluating, and submitting AI solutions to a peer-review process. The main focus will be on the reliable development of AI solutions that directly impact human lives and well-being.

10. Explainability: Tracing every AI-generated artefact, such as a report or analysis, to the sources used by AI inference, including texts, images, and sound recordings. Explainability and traceability would ensure the transparency and impartiality of AI@UN systems.

11. Protection of data and knowledge: Achieving the highest level in protecting data, knowledge and other inputs into AI systems. 

12. Security: Guaranteeing the highest possible level of security and reliability of AI@UN. Open source, red-teaming, and other approaches will ensure that the systems are protected by having as many critical eyes as possible to test and evaluate AI code and algorithms. AI communities will be encouraged to contribute to red-teaming and other tests of the AI@UN system.

13. Sustainability: Realising the SDGs and Agenda 2030 through three main approaches: firstly, ensuring that SDGs receive higher weights in developing AI models and tools; secondly, making the AI systems themselves sustainable through, for example, sharing the code, building resources, and providing proper documentation and development trails; thirdly, developing and deploying AI solutions with environmental sustainability in mind.

14. Capacity: By developing an AI system, the UN should develop its own and wider AI capacities. Capacity development should be: (a) holistic, involving the UN Secretariat, representatives of member states, and other communities involved in UN activities; and (b) comprehensive, covering a wide range of AI capacities from a basic understanding of AI to high-end technical skills. 

15. Future-proofing: Planning and deploying systems dealing with future technological trends. Experience and expertise gathered around AI@UN should be used to deal with other emerging technologies, such as augmented/virtual reality and quantum computing. 

Opportunities in crises. AI transformation will inevitably trigger tensions due to its impact on deeper layers of how the UN functions. Likely opposition based on human fear and attachments to the status quo should be openly addressed and reframed around opportunities that AI transformation will open on individual and institutional levels. 

For instance, AI can help small and developing countries to participate in more informed and impactful ways in the work of the UN. AI can help compensate for the smaller size of their diplomatic missions and services, which must follow the same diplomatic dynamics as larger systems. An emphasis on AI will reduce current AI asymmetry.

AI can also help the UN Secretariat to refocus time and resources and spend less time on traditional paperwork, like preparing reports, to allow more work on the ground in member states where their help is critical.

Next steps. Embarking on this journey towards integrating AI into the UN’s operations is not merely a step but a leap into the future – one that demands boldness, a cooperative spirit, and an unwavering dedication to the ideals that have anchored the UN since its inception. The potential for AI to bolster the UN’s mission to uphold global peace, advance development, and champion human rights is immense. In fact, the case for adopting an open-source AI framework goes beyond technological innovation: by taking an open approach to AI, the UN will be able to evolve, take the lead, and remain relevant in a rapidly changing global landscape.

By leveraging the transformative power of AI, the UN can turn a looming challenge into a watershed moment, ensuring the organisation’s relevance and leadership in charting the course of human progress for all.

This text was adapted from AI@UN: Navigating the tightrope between innovation and impartiality, first published on Diplo’s blogroll.

www.diplomacy.edu

The UN faces the challenge of integrating AI in a way that maintains its impartiality and credibility, advocating for an open-source AI platform contributed to by countries, companies, and citizens to ensure transparency, inclusivity, and adherence to its core principles.


How AI chatbots master language: Insights from Saussure’s linguistics

Linguistics, intertwined with modern technology, prompts questions about how chatbots function and respond articulately to diverse inputs. Chatbots, powered by large language models (LLMs) like ChatGPT, acquire digital cognition and articulate responses using principles rooted in the linguistic theories of Ferdinand de Saussure.

Saussure’s early 20th-century work laid the groundwork for understanding language through syntax and semantics. Syntax refers to the rules governing the arrangement of words to form meaningful sentences. Saussure saw syntax as a system of conventions within a language community, interlinked with other linguistic elements like semantics. Semantics involves the study of meaning in language. Saussure introduced the concept of the sign, consisting of the signifier (sound/image) and the signified (concept), which is crucial for understanding how LLMs process and interpret word meanings.

Two humanoid robots drawn in cubism style talk as though in conversation.

How LLMs process language. LLMs like ChatGPT process and understand language through several core mechanisms:

  1. Training on vast amounts of textual data from the internet to predict the next word in a sequence
  2. Tokenisation to divide the text into smaller units
  3. Learning relationships between words and phrases for semantic understanding
  4. Using vector representations to recognise similarities and generate contextually relevant responses
  5. Leveraging transformer architecture to efficiently process long contexts and complex linguistic structures
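To make step 1 concrete, here is a toy next-word predictor built from bigram counts. This is a deliberately minimal sketch: the corpus and the simple counting approach are illustrative assumptions, whereas real LLMs learn vastly richer statistics over trillions of tokens.

```python
from collections import Counter, defaultdict

# A toy corpus; real LLMs train on vast amounts of internet text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word after `word` — a crude stand-in
    for what an LLM does with far richer context."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' follows 'the' most often in this corpus
```

Even this crude model captures the core idea: prediction of the next token emerges purely from statistical patterns in the training text.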

LLMs transform text into tokenised units (signifiers) and map these to embeddings that capture their meanings (signified). The model learns these embeddings by processing vast amounts of text, identifying patterns and relationships analogous to Saussure’s linguistic structures.
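The mapping from signifier to signified can be sketched with toy embedding vectors compared by cosine similarity. The words and 3-dimensional vectors below are invented for illustration; real models learn embeddings with hundreds or thousands of dimensions.

```python
import math

# Toy "embeddings": each signifier (token) maps to a small vector
# standing in for its signified (meaning). Vectors are invented.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Similarity of two vectors: close to 1.0 = similar direction/meaning."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Semantically related tokens sit closer together in the vector space.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low
```

The geometry is the point: relationships between meanings become measurable distances, which is how the model "recognises similarities" mentioned in step 4 above.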

Semantics and syntax in LLMs. Understanding and generating text in LLMs involves both semantic and syntactic processing. 

LLMs process semantics through (a) contextual word embeddings that capture word meanings in different contexts based on usage, (b) an attention mechanism to prioritise important words, and (c) layered contextual understanding that handles words that have multiple related meanings (polysemy) and different words with the same meaning (synonymy). The model is pre-trained on general language patterns and fine-tuned on specific datasets for enhanced semantic comprehension. 
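The attention mechanism in (b) can be sketched in a few lines: a simplified, single-query version of scaled dot-product attention over toy 2-dimensional vectors (the numbers are illustrative assumptions, not real model weights).

```python
import math

def softmax(scores):
    """Turn raw scores into weights that sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query over a toy sequence:
    score each position, normalise, then mix the value vectors."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

# Three toy "tokens", each a 2-d vector (invented for illustration).
keys = values = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
query = [1.0, 0.0]  # attends most to keys pointing in a similar direction
out = attention(query, keys, values)
```

The weighting step is what lets the model "prioritise important words": positions whose keys align with the query contribute more to the output.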

For syntax, LLMs use (a) positional encoding to understand word order, (b) attention mechanisms to maintain sentence structure, (c) layered processing to build complex sentences, and (d) probabilistic grammar learned from large amounts of text. Tokenisation and sequence modelling help track relationships between words, and the transformer model integrates both sentence structure and meaning at each stage, ensuring responses are both meaningful and grammatically correct. Training on diverse datasets further enhances its ability to generalise across various ways of using language, making the chatbot a powerful natural language processing tool.
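Positional encoding from (a) can be illustrated with the sinusoidal scheme introduced in the original transformer architecture. This is a minimal sketch with a toy 4-dimensional encoding; real models use far larger dimensions and add these vectors to the token embeddings.

```python
import math

def positional_encoding(position, d_model=4):
    """Sinusoidal positional encoding: gives each token position a unique
    vector, so word order survives the transformer's parallel processing."""
    enc = []
    for i in range(d_model):
        angle = position / (10000 ** (2 * (i // 2) / d_model))
        enc.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
    return enc

# Distinct vectors per position mean 'dog bites man' != 'man bites dog'.
p0 = positional_encoding(0)  # [0.0, 1.0, 0.0, 1.0]
p1 = positional_encoding(1)
```

Because each position gets a distinguishable vector, the model can tell identical words apart by where they occur, which is the basis of its syntactic awareness.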

Integrating Saussure’s linguistic theories with the cognitive mechanisms of large language models illuminates the inner workings of contemporary AI systems and also reinforces the enduring relevance of classical linguistic theories in the age of AI.

This text was adapted from In the beginning was the word, and the word was with the chatbot, and the word was the chatbot, first published on the Digital Watch Observatory.

dig.watch

Given the profound importance of language and its various disciplines in technological development, it is crucial to consider how chatbots function as products of advanced technology. Saussure’s framework, in particular, helps explain how chatbots learn through algorithmic cognition and how they respond effectively and accurately to diverse user queries.



Social media giants win in free speech showdown at US Supreme Court

Social media platforms play an imperative role in people’s lives, not only in communication but also in receiving and disseminating information. At the same time, some risks may come with social media content, such as hate speech, the spread of mis- and disinformation, and harassment. This has raised questions about the liability of social media platforms in regulating such content, as well as the role of governments in taking action. Do social media platforms have free speech rights? Can governments regulate how platforms enforce their own content policies? The US Supreme Court tackled those questions in its decision in Moody v. NetChoice and NetChoice, LLC v. Paxton.

NetChoice and the Computer and Communications Industry Association (CCIA), a coalition of social media companies and internet platforms, challenged the laws of two US states, Florida and Texas. These laws were enacted in 2021 amidst growing Republican party criticism of social media companies’ enforcement of their own policies.

NetChoice and CCIA claimed that the Florida and Texas laws violate private companies’ first amendment rights and that governments should not be allowed to intervene in private companies’ speech policies. A group of political scientists filed an amicus brief arguing that the two laws fail to define what counts as hate speech or dangerous and violent election-related speech, and could prevent social media platforms from moderating threats against election officials. On the other hand, officials from Texas and Florida argued that these laws aim to regulate the liability of social media platforms rather than restrict speech online, stressing that the first amendment does not apply to private businesses. One US federal appeals court invalidated Florida’s statute, while another upheld the Texas law. However, both laws were suspended pending the US Supreme Court’s final decision.

The hand of a black-robed figure holds a gavel striking its wooden base, on a desk, with the scales of justice in the background.

The Supreme Court decided that the lower courts’ analyses of the laws’ implications for free speech rights under the first amendment were inadequate, and sent the cases back for further review. In its decision, the Supreme Court confirmed that social media platforms are protected by the first amendment when they moderate and curate content, and ruled that presenting a curated collection of others’ speech counts as expressive activity.

Essentially, this sets a precedent establishing first amendment free speech protections for social media platforms and private businesses in the USA: US states cannot implement policies restricting platforms’ ability to regulate the content disseminated on their platforms. This could prevent governments from enacting laws that would strip social media platforms of their independence in regulating their content.


Governments steam forward with digital antitrust oversight 

In 1996, John Perry Barlow penned ‘A Declaration of the Independence of Cyberspace’. This seminal document, which reflected the libertarian internet culture of the time, was a push-back against governmental intervention in and regulation of the blooming technology sector. Accordingly, governments around the world adopted a hands-off approach, under the assumption that regulation could stifle innovation.

Almost three decades later, this understanding has radically changed. In recent years, reports published by several organisations, such as the World Bank, the Internet Society, and UNCTAD have shown a growing concentration of wealth and power in the digital economy. Data divides are particularly relevant in this context, as they lead to concentration upstream, in data-intensive technology sectors, such as AI. Against this backdrop, investigations into the potentially anti-competitive behaviour adopted by tech companies are proliferating.

A human hand holds a magnifying glass over four blocks. The first, third, and fourth blocks have green checkmarks on them. The second has a red triangle with an exclamation point inside of it.

In the EU, recent investigations have led to the first charge brought by the European Commission against a tech company under the Digital Markets Act (DMA), a law designed to curb Big Tech’s dominance and foster fair competition. According to the preliminary findings of an investigation launched in March, Apple is in breach of the DMA. Apple’s App Store allegedly squeezes out rival marketplaces by making it more difficult for users to download apps from alternative stores, and by not allowing app developers to communicate freely with, and conclude contracts with, their end users. Apple has been given the opportunity to review the preliminary findings, and it can still avoid a fine if it presents a satisfactory proposal to address the problem. 

Other countries are also hardening their laws on competition. The ‘Brussels effect’ and the influence of the DMA can be seen in the Digital Competition Bill, proposed by the government of India to complement existing antitrust laws. Similarly to the DMA, the law would target large companies and could introduce similarly heavy fines. In particular, tech giants would be prohibited from exploiting non-public user data and from favouring their own products or services on their platforms. They would also be barred from restricting users’ ability to download, install, or use third-party apps.

The bill is raising concern among tech companies. A US lobbying group has opposed the move, fearing its impact on business. Echoing the belief that dominated the tech sector in the 1990s, technology companies claim that India’s bill could stifle innovation. However, the argument seems unlikely to prevail. 

Concerns about competition in the tech sector are also rising in the USA, traditionally an advocate for minimal regulation. The USA is tightening controls on the AI industry, with the Department of Justice (DOJ) and the Federal Trade Commission (FTC) dividing oversight: the FTC will scrutinise OpenAI and Microsoft, while the DOJ oversees Nvidia. Although less active than the EU in antitrust regulation, the USA closely monitors mergers and acquisitions, and this recent agreement between the two governmental bodies paved the way for antitrust investigations to be launched.

Competition is increasingly a field of significant governmental activity and oversight. As countries reassert their jurisdiction, declarations of the independence of cyberspace seem a distant echo from the past.


DW Weekly #167 – 5 July 2024


Dear readers, 

Welcome to another issue of the Digital Watch weekly! 

‘We have won the war on floppy disks on June 28!’ Japan’s Minister for Digital Transformation, Taro Kono, cheerfully announced a few days ago. That sentence may make you think ‘Good riddance!’ or ‘Oh wow, I haven’t used one of those in ages’ (if you’re not Japanese). Or it might make you think: ‘What’s a floppy disk?’

A floppy disk, or just floppy, is a flexible removable magnetic disk (typically encased in a plastic envelope or a hard plastic shell) for storing data. Here’s the kicker: floppies can only hold between 800 KB and 2.8 MB of data, with 1.44 MB being the standard. 
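To put the floppy’s capacity in perspective, here’s a minimal back-of-the-envelope sketch (the modern file sizes are illustrative assumptions, not figures from the article):

```python
import math

FLOPPY_MB = 1.44  # standard 3.5-inch high-density floppy capacity

def floppies_needed(size_mb: float) -> int:
    # Round up: a partially filled disk still occupies a whole floppy.
    return math.ceil(size_mb / FLOPPY_MB)

print(floppies_needed(5))     # a typical ~5 MB smartphone photo -> 4 disks
print(floppies_needed(4700))  # a ~4.7 GB single-layer DVD -> 3264 disks
```

In other words, a single DVD’s worth of data would fill a stack of over three thousand floppies.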

Until recently, floppies were widely used in Japanese public administration. However, two years ago, Kono declared war on them, vowing that the digital agency he heads would change over 1,000 government procedures that required floppies so that online services could be used instead. They did, and they ‘won the war’.

But why were floppies in use in Japan until recently? Various explanations have been posited, from the conservative nature of bureaucracy to the floppy’s reliability. Not only do physical media offer a higher degree of authenticity of information, but floppies almost never break or lose data. Maybe it can all be summed up as: if it ain’t broke, don’t fix it.

The next frontier? Fax machines, Kono said at a conference in June.

Drawing of Japan’s Minister for Digital Transformation Taro Kono standing in front of a large red circle, smiling and holding up a paper decree in front of multiple microphones. At the side is a yellow garbage container overflowing with floppy disks.

In other news, a World Intellectual Property Organization (WIPO) report shows China leads in AI patents, though US firms dominate in advanced AI systems. In another major event, 10 billion passwords were leaked in the largest password compilation published to date.

Andrijana and the Digital Watch team


Highlights from the week of 28 June – 5 July 2024

united nations headquarters in new york city

The SDOs argue that these proposals promote centralisation, which they believe would harm the internet and global economies and societies.

flag of the united nations

The non-binding resolution, initiated by China and co-sponsored by several other countries, seeks to foster international cooperation and urge organisations and financial institutions to support this cause.

abstract technology ai computing

Over the past decade, 54,000 GenAI inventions were recorded, with a significant surge occurring in the last year. China contributed over 38,200 patents, compared to nearly 6,300 from the USA.

ai brain intelligent ai technology digital graphic design electronics ai machine learning robot human brain science artificial intelligence technology innovation futuristic

The EU is far behind its 2030 AI targets, with only 11% of firms using AI. Optimistic leaders call for more investments, cooperation, and a completed Digital Single Market.

flag of japan

AI integration in Japan’s defence aims to improve combat speed and operational efficiency.

Apple

Apple plans to integrate ChatGPT into its devices, aiming to boost AI capabilities and meet consumer demand.

gpu chipset workforce

The programme is part of a broader $39 billion effort to enhance US chipmaking capabilities and reduce reliance on foreign suppliers.

chinese flag

The initiative, which includes considerations for ethics and safety, aims to unify Chinese research efforts and position the country as a leader in international BCI standards.

massive password leak

The RockYou2024 data will be included in Cybernews’ Leaked Password Checker to help users identify if their credentials were compromised.

croatia flag close up

Authorities have been notified, and a criminal investigation is underway.

woman using laptop wood desk with cyberattack warning screen cyber security concept

The incident compromised the personal data of customers, including sensitive information such as Social Security Numbers and medical records, with unauthorised activity traced back to late October 2023.

law and justice in united states of america

The justices unanimously overturned lower court decisions due to inadequate consideration of First Amendment implications and directed further analysis.

japan banknotes

Despite the rise in cashless transactions, cash remains significant, and the redesign aims to stimulate consumer spending and productivity amid inflationary pressures.



ICYMI

What is digital immortality? And (how) can it be achieved? Watch to find out!


Upcoming

un flag blue

The UN OEWG 2021-2025 8th substantive session will focus on adopting the group’s annual progress report (APR), taking stock of the group’s discussions on threats to information security, rules, norms, and principles of responsible behaviour of states, international law, confidence-building measures (CBMs), capacity building efforts, and a regular open-ended institutional dialogue under the auspices of the UN.

hlpf

The High-level Political Forum on Sustainable Development (HLPF) 2024 will take place from 8 to 17 July under the theme ‘Reinforcing the 2030 Agenda and eradicating poverty in times of multiple crises: the effective delivery of sustainable, resilient and innovative solutions’. 

DW Weekly #166 – 28 June 2024


Dear readers, 

Welcome to another issue of the Digital Watch weekly! 

In 2022, a team of Japanese scientists invented a dermis, or skin, equivalent: a living layer composed of cells and an extracellular matrix, created in an attempt to cover a robot’s outer layer to make it look more human. Now, they are pushing forward with another invention: a way to attach that dermis equivalent seamlessly onto the rigid surfaces of a robot.

These perforation-type anchors were inspired by the structure of skin ligaments. The researchers constructed a robotic face covered with dermis equivalent and a silicone layer. They then used perforation-type anchors to move the silicone around the corners of the robot’s mouth and, ultimately, to make a robot smile.

Image credit: Kawai et al., Perforation-type anchors inspired by skin ligament for robotic face covered with living skin, Cell Reports Physical Science

The skin equivalent’s primary advantage, researchers say, is that it is capable of self-healing. It can regenerate missing skin through cellular proliferation without any triggers. Researchers tested this previously, in 2022, on a robotic finger, but have not tested it since inventing perforation-type anchors. They noted that ‘the facial shape, with its intricate unevenness, differs significantly from the simpler, convex shape of a finger’, acknowledging that this remains a future challenge.

The fun part? The research might be of use to humans in the future: Researchers will study how the face moves to ensure the skin equivalent is applied with sufficient thickness over a robot’s face. As forming expressions such as smiles is closely linked to the development of wrinkles, researchers expect that the knowledge they gain about wrinkle formation could find applications in the cosmetics and orthopaedic surgery industries.

During this process, researchers controlled mechanical actuators beneath the dermis equivalent. The scientists note that replacing mechanical actuators with cultured muscle tissue could advance the understanding of emotions and aid treatments like facial paralysis surgery.

While it looks quite unsettling now, we can expect the skin-equivalent-covered robot to look better, more realistic, and even more akin to humans.


Monotheistic religions such as Christianity and Islam hold that God made man in his image. Christianity, in particular, holds that God is a benevolent creator who loves his creations. One might ask: Are humans determined to play God with robots?

Andrijana and the Digital Watch team


Highlights from the week of 21–28 June 2024

apple

Criticism also arose over Apple’s delay in launching AI-powered features in the EU, citing DMA compliance issues.

building

Microsoft has indicated its willingness to collaborate with EU regulators to find solutions.

wikileaks

US authorities have pursued Assange since 2010 following WikiLeaks’ significant disclosure of confidential files.

closeup male hand holding smartphone with blank screen background blurred flag india

India is set to mandate USB-C as the standard charging port for smartphones and tablets from June 2025, aligning with the EU’s efforts to enhance user convenience and reduce electronic waste.

flag of indonesia 1

Hackers using a variant of LockBit 3.0 ransomware have disrupted Indonesia’s national data centre, affecting immigration checks and over 200 public services, with the government refusing to pay the $8 million ransom demand.

eu flags in front of european commission

Tech companies like WhatsApp, privacy advocates, legal and security experts, and numerous EU lawmakers argue the plan threatens privacy and encryption, potentially leading to mass surveillance.

flag of usa and china on cracked concrete wall background

The regulations initially target China, Macau, and Hong Kong, with possible expansion to other regions, and include various exceptions to address national interests and existing commitments.


The company expands its AI chip offering to the Middle East through a new partnership with Qatar’s Ooredoo amid US export restrictions.

circuit board and ai micro processor artificial intelligence of digital human 3d render

Manufacturing will be outsourced to Taiwan’s TSMC, with production expected to start later this year.

flag india computer keyboard technology concept

The decline signifies a lack of organic demand, with most current transactions stemming from banks disbursing benefits to employees.



Upcoming

UN Cybercrime Convention 1920x1080px 2

The event is aimed at identifying expectations (if any) from the concluding session of the Ad Hoc Committee to Elaborate a Comprehensive International Convention on Countering the Use of Information and Communications Technologies for Criminal Purposes. It will highlight the elements that define a positive outcome of this process, based on the insights of different experts and stakeholders.

DW Weekly #165 – 21 June 2024


Dear readers, 

Welcome to another issue of the Digital Watch weekly! 

Chinese scientists created an AI military commander that mimics human military leaders in various ways, including experience, thought patterns, and personality traits, even incorporating their flaws. The catch is that this AI military commander is strictly confined to a laboratory at the Joint Operations College of the National Defence University in Shijiazhuang, Hebei and used only for virtual war games.

In these simulations, the AI is given significant control, acting as the main decision maker. This allows the People’s Liberation Army (PLA) to conduct extensive and varied war games far beyond what would be possible with the limited availability of senior human commanders. The AI’s knowledge base is capped to simulate human forgetfulness. As its memory reaches its limit, older knowledge is discarded, ensuring that only the most relevant information is retained for decision-making.

News coverage claims that AI is forbidden to lead the armed forces in China. This seems to allude to China’s military principle: ‘The Party commands the gun.’ What would stop the Party from commanding AI to command the gun in battle?

International discussions about the use of AI in the military tend to revolve around (lethal) autonomous weapon systems (LAWS) – think missiles, drones, submarines – able to make their own decisions on the battlefield. The UN Group of Governmental Experts (GGE) on LAWS, for instance, has been discussing issues related to human responsibility for decisions on the use of weapons systems and human–machine interaction. However, the use of AI to completely replace military commanders is, as of yet, uncharted territory.

Caricature drawing shows six identical soldiers standing in a present arms position in front of a Chinese flag. They are facing a laptop computer which has a chat bubble with the text: '>attention; >present arms; >at ease; > ... .

AI was also prevalent on the agenda of the Group of Seven (G7) summit in Italy, where Pope Francis made an unprecedented appearance, stressing the importance of human control over AI, particularly in life-or-death decisions, and calling for a ban on autonomous weapons. He called for ‘algor-ethics’, a set of global and pluralistic ethical principles, to guide AI development globally.

The G7 Leaders’ Communiqué, published last Friday, also tackled AI. The G7 leaders emphasised the need for a unified approach to manage AI’s benefits and risks. A new action plan to effectively harness AI in the workplace was announced. A credential that can be used to identify organisations supporting the implementation of the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems is also in the works. Key initiatives include advancing the Hiroshima AI Process, supporting SMEs, and fostering digital inclusion and lifelong learning.

The G7 outlined a strategy to combat cyber threats, focusing on responsible state behaviour, cybersecurity enhancement across sectors, deterrence tools, and building cyber capacities through initiatives like the Cybersecurity Working Group and the Ise-Shima Cyber Group. The G7 reaffirmed support for the UN Program of Action (PoA) on ICTs as the future format for regular institutional dialogue on cybersecurity.

Additionally, the G7 established a Semiconductors G7 Point of Contact Group to coordinate semiconductor industry issues.

Another regional gathering that caught attention was EuroDIG 2024, the European version of the Internet Governance Forum (IGF), held on 17–19 June in Vilnius, Lithuania. If you didn’t follow the discussions in real time, we’ve got you covered with our hybrid reports.

And just yesterday, the Republic of Korea convened a high-level Security Council open debate on cybersecurity as part of its presidency of the Security Council. The debate centred on how the Security Council could address the harmful use of ICT in order to maintain international peace and security.

Andrijana and the Digital Watch team


Highlights from the week of 14–21 June 2024

africa ai digital continent

The AI Strategy aims to harness AI for development, ensuring ethical use while minimising risks and maximising opportunities.

broadband commission

The report aims to guide policymakers on AI advancements to ensure equitable digital access.

IMF

AI’s impact on jobs could widen economic inequality, affecting both white-collar and blue-collar professions.

A photorealistic image representing a conceptual conflict in semiconductor technology between China and the United States.

Washington aims to add 11 more Chinese chipmaking factories to a restricted list and extend controls on additional equipment.

TikTok

The company claims the divestiture is technologically, commercially, and legally unfeasible and that the law violates free speech rights while unfairly targeting TikTok. However, the US Justice Department defends the legislation, still citing national security concerns.

american law symbol with hammer and us flag

Kaspersky’s proposed mitigating measures were deemed insufficient, and violations of the new rules will result in fines or criminal charges.

robber wearing black hoodie against digitally generated russian national flag

The ongoing ICC investigation could potentially set a legal precedent for cyberwarfare.

hacker working code network

The breach, initially blamed on a third-party contractor and a misconfigured firewall, was later traced to an IT service desk operator at Medibank who inadvertently provided a hacker access to the system.

hacker

The incident underscores the increasing cyber threats facing the healthcare industry and the vulnerabilities within its infrastructure.

google

Alphabet’s Google faces a complaint from Austrian group NOYB for allegedly tracking Chrome users without proper consent, despite promoting privacy safeguards.

senegal

To bolster Senegal’s digital transformation, Senegal Numérique SA has entered a partnership with the African Digital Development Agency (ADD) to share best practices and enhance the interoperability of government information systems and services.



ICYMI

EuroDIG 2024 banner

EuroDIG 2024 was held in Vilnius, Lithuania, from June 17–19. If you couldn’t make it to the conference, don’t fret. DiploAI-generated reports are ready for you to dive into.


Reading corner

Digital letters and algorithms depicting how chatbots acquire knowledge by being fed words and sentences.

Given the profound importance of language and its various disciplines in technological developments, it is crucial to consider how chatbots function as products of advanced technology. Specifically, the analysis contributes to understanding how chatbots learn through algorithmic cognition and how they respond effectively and accurately to diverse user queries, drawing on insights from linguistics.

g7

The Leaders of the G7 issued a communiqué after their summit in Apulia, Italy, tackling AI, chips and cybersecurity. Read the communiqué in full.

G7 Italy 2024

Pope Francis spoke during the second day of the G7 summit in Borgo Egnazia, Italy, on 14 June, focusing on the ethical implications and potential dangers of AI. Read his speech in full.

DW Weekly #164 – 14 June 2024


Dear readers, 

Welcome to another issue of the Digital Watch weekly! 

The Apple–OpenAI partnership is now official! Apple has announced that OpenAI’s ChatGPT will be integrated into Siri for free later this year on iOS 18 and macOS Sequoia. The feature will be optional: Apple users can ask Siri a question, and if Siri doesn’t fully understand or can’t complete the request, it might suggest asking ChatGPT. For instance, Siri might ask: ‘Hey, do you want to ask ChatGPT?’ This means Apple users will be able to tap into ChatGPT’s advanced capabilities and get answers. This raises the question of whether Apple is dropping out of the AI race. Certainly not, as the company just introduced ‘Apple Intelligence’, a new AI system designed to be deeply integrated into Apple’s ecosystem. At least for now, Apple’s AI efforts will prioritise wide usability, while leaving chatbot development to companies like OpenAI and Google.

Craig Federighi, Apple’s head of software engineering, acknowledged that ChatGPT is one of the best large language models (LLMs) currently available, which is why Apple chose to integrate it into Siri. 

Apple had also been negotiating with Google, ultimately choosing OpenAI. According to Federighi, Apple is open to other partnerships as well. So, while the initial collaboration is with ChatGPT, we might see other models, too, with Apple’s approval and the deals they strike.

One person made sure this partnership would get even more attention. He is a tech mogul, billionaire, and well-known for his online trolling—yes, of course, it’s Elon Musk. The CEO of Tesla, SpaceX, and the social media company X (formerly known as Twitter) has announced (on X) a ban on Apple devices in his companies and that visitors ‘will have to check their Apple devices at the door, where they will be stored in a Faraday cage’. Why? Musk contends that the Apple-OpenAI partnership is an  ‘unacceptable security violation’. Specifically, Musk is worried that integrating ChatGPT into Apple’s products could compromise users’ privacy and security by allowing OpenAI to access and use user data without explicit permission. Although Apple said that user queries sent to ChatGPT will not be stored by OpenAI, it seems Musk is not buying it. Can we expect this ban to become a reality? Given Musk’s history with OpenAI, it’s unclear if he’s serious or just trolling once again.


Staying with Elon Musk and OpenAI, Musk’s lawyers moved to dismiss his lawsuit against OpenAI and CEO Sam Altman, ending a months-long legal battle. Interestingly, this move came just a day after Musk criticised OpenAI’s and Apple’s partnership.

Musk’s lawsuit alleged that OpenAI strayed from its founding principle of developing AI solely for the betterment of humanity, not for profit. Musk claimed that OpenAI had breached its founding agreement by prioritising commercial interests over the public good, particularly with its partnership with Microsoft. In response, OpenAI argued that the lawsuit was a baseless attempt by Musk to further his own interests in the AI sector.

Why the sudden change? This is unclear, as Musk’s attorneys filed the dismissal request in California state court without specifying a reason. Notably, Musk dismissed the case without prejudice, leaving the door open for potential future legal action.

In other news, the G7 summit kicked off on Thursday in Italy. One of the main topics of discussion is AI. And here’s a historic first: Pope Francis will join the discussions on AI ethics. So, what might happen? Pope Francis is expected to advocate for international cooperation in regulating AI, highlighting the need to address global technology inequalities and threats like AI weapons and misinformation. Pope Francis’s engagement at the G7 reflects his active involvement in contemporary issues, as he aims to shape AI development ethically for the benefit of humanity.

Next week, don’t miss the European Dialogue on Internet Governance (EuroDIG) 2024, 17–19 June, in Vilnius, Lithuania. We’ll bring you reports on a selection of sessions, so bookmark the hyperlinked pages.

To wrap up this week’s run-through, here are two rather bizarre stories: A Turkish student has been jailed for cheating on an exam by using AI, and Berlin is set to launch the world’s first cyber brothel.

Bojana, Boris and the Digital Watch team


Highlights from the week of 7–14 June 2024

brazil flag against city blurred background sunrise backlight scaled

The government anticipates that AI will be crucial in managing and analysing lawsuits more efficiently, alerting them to potential actions before final decisions are made, thereby mitigating significant financial losses.

usa flag and russia flag

The measures will broaden existing export controls to cover US-branded goods manufactured globally, addressing loopholes that have allowed Russia to adapt economically.

chinese flag

The rules will target e-commerce imports and exports, reflecting changes in markets and in the international regulation of e-commerce.


The legislation seeks to enforce fair practices, prohibit the exploitation of user data, and ensure users can choose third-party apps and default settings.

ICANN DomainNames

ICANN has named Kurt Erik Lindqvist as its new President and CEO, effective 5 December 2024. Kurt will be based in both the Geneva office and the Los Angeles headquarters.

flag of turkiye

The competition authority emphasised that the fine was imposed because Google did not adequately address concerns regarding fair competition.

google

The lawsuit aims to break up Google’s digital advertising business to foster more competition.

rsf logo

The self-regulation permitted by this framework convention will not ensure that AI serves the general interest and protects the right to information.

child safety on social media

Under the new laws, platforms will be barred from exposing ‘addictive’ algorithmic content to users under 18 without parental consent.

the flag of switzerland flying on a banner

Switzerland ramps up security measures amidst a rise in cyberattacks and disinformation campaigns ahead of the Ukraine peace summit, pointing to potential Russian involvement.



ICYMI

Is our software secure? Watch the latest Geneva Dialogue Podcast [ep 3] with Bert Hubert as we discuss cyber norms, stakeholder responsibilities, and the role of open-source software in enhancing security.

Upcoming

EuroDIG 2024

EuroDIG 2024 will take place in Vilnius, Lithuania, from 17 to 19 June. The conference is a multistakeholder platform, created in 2008, that discusses internet governance annually in various European cities.


Reading corner

Encyclopedie Diderots Tree of Knowledge
www.diplomacy.edu

There has been a steady shift from using mind maps to relying on external maps. What impact does this have on human cognition and our decision-making processes? Aldo Matteucci analyses.

Digital Watch newsletter – Issue 90 – June 2024


Observatory

Snapshot: the developments making waves

AI governance

Chile presented an updated national AI policy and new legislation aimed at ensuring ethical AI development, addressing privacy concerns, and promoting innovation within its technology ecosystem. In South Africa, the government announced the creation of an AI expert advisory council, tasked with conducting research, making recommendations, and overseeing the implementation of AI policies in the country. Meanwhile, Zambia finalised a comprehensive AI strategy aimed at putting modern technologies at the service of the country’s development.

The much-anticipated second global AI summit, held in Seoul, secured safety pledges from leading companies, underscoring the importance of collaborative efforts to address AI-related risks.

In the USA, lawmakers introduced a bill to regulate AI exports. The bill aims to control the export of AI technologies that could be used for malicious purposes or pose a threat to national security. In addition, the US Department of Commerce is considering new export controls on AI software sold to China. The announcement came as the USA and China met in Geneva to discuss AI-related risks. 

Concerns about AI safety and transparency were highlighted by a group of current and former OpenAI employees, who published an open letter warning that leading AI companies lack the transparency and accountability needed to prevent potential risks.

Technologies 

The USA is expected to triple its semiconductor production by 2032, widening the chipmaking gap with China. Chinese AI chip companies, including industry flagships such as MetaX and Enflame, are downgrading their chip designs to comply with the supply chain security protocols and stringent regulatory requirements of the Taiwan Semiconductor Manufacturing Company (TSMC), raising concerns about long-term innovation and competitiveness. South Korea unveiled a substantial 26 trillion won (USD 19 billion) support programme for its chip industry, while the European Chips Act will fund a new chip pilot line to the tune of EUR 2.5 billion. Japan is considering new legislation to support the commercial production of advanced semiconductors. 

Neuralink’s first human trial of its brain implant ran into major problems, as the device’s threads detached from the brain, compromising its ability to decode brain signals.

Cybersecurity

High-level talks between the USA and China drew attention to cyber threats such as Volt Typhoon, reflecting escalating tensions. US and UK officials stressed that China poses a formidable cybersecurity threat. Meanwhile, the Open-Ended Working Group (OEWG) on ICT security established a global directory of points of contact (POC) to strengthen the international response to cyber incidents. Read more on page 7.

A massive cyberattack on the UK Ministry of Defence raised suspicions, with China implicated. The Qilin group claimed responsibility for a cyberattack on Synnovis laboratories, which disrupted key services in London hospitals. A data breach claimed by IntelBroker targeted Europol, amplifying concerns about the security of law enforcement data. In addition, Ticketmaster suffered a data breach that compromised the personal data of 560 million users and is facing a class action lawsuit.

Infrastructure

US authorities warned telecoms companies that a Chinese state-controlled company responsible for repairing international undersea cables could tamper with them. Google announced it will build Umoja, the first undersea cable connecting Africa and Australia. Zimbabwe granted Elon Musk's Starlink a licence to operate in the country.

Legal

In a legal battle, TikTok and its creators sued the US government over a law requiring the app to sever ties with its Chinese parent company, ByteDance, or face a ban in the USA. This prompted the District of Columbia Court of Appeals to fast-track its review of the TikTok ban law.

The legal battle between Elon Musk's X and Australia's online safety regulator over the removal of 65 posts showing a video of an Assyrian Christian bishop being stabbed has come to an end. In April, the Federal Court of Australia, acting at the request of the eSafety Commissioner, issued a temporary global order requiring X to hide the video content. However, in May, the court rejected the regulator's request to extend the order, leading the regulator to drop its legal proceedings against X.

The EU launched an investigation into Facebook and Instagram over child safety concerns. Meanwhile, Italy's regulator fined Meta for misusing user data.

Internet economy

Microsoft, OpenAI, and Nvidia came under the antitrust microscope in the USA over their perceived dominance in the AI sector. Rwanda announced plans to launch a digital currency by 2026, while the Philippines approved a stablecoin pilot programme.

Digital rights

The European Court of Human Rights ruled that Poland's surveillance law violated the right to privacy for lack of safeguards and effective oversight. Bermuda paused its facial recognition plans over privacy concerns and project delays, reflecting global hesitancy about the technology's impact on civil liberties.

OpenAI came under fire for using Scarlett Johansson's voice in ChatGPT without her consent, spotlighting privacy and intellectual property issues. A GLAAD report found that major social media platforms are failing to manage the safety, privacy, and expression of the LGBTQ community online.

Google launched its 'Results about you' service in Australia to help users remove search results containing their personal information. Major global internet companies are working closely with EU regulators to ensure their AI products comply with the bloc's data protection laws, the Irish Data Protection Commission said. However, OpenAI is at odds with the European Data Protection Board over the accuracy of ChatGPT's outputs.

Development

The EU formally adopted the Net-Zero Industry Act (NZIA) to encourage clean technology manufacturing within the EU.

South Africa pledged to bridge the country's digital divide and expand internet access for all. Morocco launched a programme to extend high-speed internet access to 1,800 rural areas. The Generation Connected (GenSi) programme was launched to provide digital skills to young people and women in rural Indonesia.

Sociocultural

The EU launched an investigation into disinformation on X after the shooting of Slovakia's prime minister. Meanwhile, X officially began allowing adult content.

Morocco announced its Digital 2030 strategy, which aims to digitise public services and strengthen the digital economy in order to encourage local digital solutions, create jobs, and add value. Zambia reached a major milestone in digital identity transformation, digitising 81% of its paper ID cards in three months.

OpenAI announced that it had dismantled five covert operations that used its AI models for deceptive activities online, targeting issues such as Russia's invasion of Ukraine, the Gaza conflict, India's elections, and politics in Europe and the USA. A survey revealed widespread concern about possible misuse of AI in the upcoming US presidential election. The European elections were a hot topic in early June: the EU accused Russia of spreading disinformation ahead of the vote, and a study found that TikTok had failed to effectively curb the spread of disinformation before the elections. Interestingly, Microsoft reported that AI had only a minimal impact on election-related disinformation.

New York lawmakers are set to ban social media companies from using algorithms to control the content young people see without parental consent. Meanwhile, Australia announced a trial of age verification technologies to improve online safety for minors.

In brief

Digital governance in focus at WSIS+20 and the AI for Good Global Summit 2024

The last week of May featured two digital events that drew considerable interest: WSIS+20 and the AI for Good Global Summit. The former was high on the agenda of digital policy officials, given the upcoming 20-year review of progress in implementing the WSIS outcomes as set out in the Geneva Declaration and the Tunis Agenda. 2024 has also been marked by negotiations on the Global Digital Compact (GDC), through which UN member states are to reaffirm and strengthen their commitment to digital development and effective governance. Naturally, one question hung over this year's WSIS+20: how relevant are the WSIS outcomes in light of the GDC?

According to the first revision of the GDC, UN member states should 'remain faithful to the outcomes of the [WSIS]'. Stakeholders gathered at WSIS+20 to reflect on and compare the GDC and WSIS+20 review processes. Speakers stressed the need to align the two processes, and in particular to build on the inclusive multistakeholder model defined by the WSIS process. Some reiterated the importance of the Internet Governance Forum (IGF), a key digital policy platform for multistakeholder discussions born of the Tunis Agenda, which should be leveraged in implementing and aligning the WSIS action lines and the GDC principles. Others stressed the need for concrete GDC follow-up mechanisms and suggested that the WSIS+20 review process would be a crucial moment of reflection. Finally, several other speakers addressed the regional dimension, stressing that local needs, multiculturalism, and multilingualism are essential considerations in both processes.

A more nuanced discussion also took place on the relevance of the overall WSIS process for digital governance. Ahead of the 20-year review in 2025, experts reflected on the WSIS process's achievements so far, particularly in fostering digital cooperation among civil society groups, the private sector, and other stakeholders. Some called for greater collaboration within the UN system to make more substantial progress in implementing the WSIS action lines; others encouraged the technical community to become more involved in the WSIS+20 review process.

The AI for Good Global Summit, for its part, is relevant to digital governance in two ways. On the one hand, it serves as a platform for AI actors to meet, exchange, network, and seek development opportunities. The ITU initiative comprises not only a global summit but also year-round workshops and initiatives that encourage AI developers and researchers to find creative solutions to global challenges. In the first revision of the GDC, the AI for Good initiative is also mentioned for its role as an AI capacity-building mechanism.

On the other hand, the AI for Good Global Summit offers a platform for business leaders, policymakers, AI researchers, and others to openly discuss AI governance issues, exchange high-potential use cases for advancing the SDGs, and build cross-sectoral partnerships that outlast the three-day event. This year's summit included an AI governance day during which high-level policymakers and leading AI developers debated the main AI governance processes, the UN system's role in advancing them, and the difficulty governments face in balancing the risks and benefits of AI development.

The summit's conversations covered more technical topics than conventional policy discussions. Experts weighed the pros and cons of open-source versus proprietary language models, as well as the possibility of establishing standards for harmonising high-tech industries or for the responsible and equitable development of technology. Linguistic and cultural diversity in AI development was also highlighted as a key element, particularly as LLMs take centre stage.

The GIP provided just-in-time reporting from WSIS+20 and the AI for Good Global Summit.

Analysis

Do we need a digital social contract?

You may be familiar with the concept of the 'social contract'. The idea is that individuals want to leave the state of nature, where there is no political order, and form a society, agreeing to be governed by an authority in exchange for security or civil rights.

The digital era raises the following questions: can the state uphold its side of the social contract? Do we need a new social contract for the online era, one that would restore trust between citizens and the state? Is it enough to bring citizens and the state to the same table? Or should today's social contract also specifically involve the technical community and the private sector, which run most of the online world?

Modern society may need a new social contract between users, internet companies, and governments, in the tradition of Thomas Hobbes's Leviathan (trading freedom for security) or Rousseau's more benign Social Contract (individual will versus commercial/political will). The new deal between citizens, governments, and businesses should answer the following questions: what should the respective roles of governments and the private sector be in protecting our digital interests and assets? Would a carefully designed, more transparent system of checks and balances suffice? Should the new social contract be global, or can regional and national contracts work?

A social contract could resolve the main problems and lay the foundations for a more trustworthy internet. Is such a solution feasible? There are reasons for cautious optimism, based on shared interests in preserving the internet. For internet companies, the more trusting users they have, the more profit they can make. For many governments, the internet is an enabler of social and economic growth. Even governments that see the internet as a subversive tool have to think twice before disrupting or banning any of its services. Our daily habits and personal lives are so intertwined with the internet that any disruption to it could catalyse a disruption of society as a whole. A trustworthy internet is therefore in the interest of the majority.

From a rational point of view, a compromise around a new social contract for a trusted internet is achievable. We should remain only cautiously optimistic, however, because politics (especially global politics), like trust (and global trust), is not necessarily rational.


AI governance milestones in Europe

Within a single week, Europe saw two major AI governance developments: the Council of Europe's Committee of Ministers adopted a convention on AI and human rights, while the European Council gave its final approval to the EU's AI Act. Sounds a little confusing at first glance, doesn't it?

First, these are different bodies: the Council of Europe (CoE) is a European human rights organisation, while the European Council is one of the EU's executive bodies. Second, it follows that the two documents in question are also different, though they share many similarities.

Both documents define an AI system as a machine-based system that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

Both documents aim to ensure that AI systems support, and do not undermine, human rights, democracy, and the rule of law. However, the CoE's framework convention offers a general, overarching structure applicable to all stages of the AI life cycle. The EU's AI Act is more specific to the EU single market and also aims to ensure a high level of protection of health, safety, and the environment.

Both documents take a risk-based approach, meaning that the higher the potential harm to society, the stricter the rules. They stress that natural persons must be informed when they are interacting with an AI system rather than with another natural person.

Neither document covers activities related to national security and defence, nor AI systems or models developed for scientific research and development.

States parties to the CoE convention may assess whether a moratorium, a ban, or other appropriate measures are needed for certain uses of AI systems where a state party considers such uses incompatible with respect for human rights, the functioning of democracy, or the rule of law.

The CoE convention is international in character and is open for signature by CoE member states, non-member states that participated in drafting it, and the EU. It will be legally binding on signatory states. The EU's AI Act, by contrast, applies only in EU member states.

In addition, the measures set out in the two documents are applied differently. In the case of the CoE framework convention, each party adopts or maintains appropriate legislative, administrative, or other measures to give effect to its provisions. Its scope is layered: while signatories must apply the convention to 'activities within the lifecycle of AI systems undertaken by public authorities, or private actors acting on their behalf', they may decide whether to also apply it to the activities of other private actors, or to take other measures to meet the convention's standards in the case of those actors.

By contrast, the EU's AI Act is directly applicable in EU member states. It offers no options: providers placing AI systems or general-purpose AI models on the EU market must comply with its provisions, as must providers and deployers of AI systems whose outputs are used within the EU.

Our conclusion? The CoE convention has a broader outlook and is less detailed than the EU act, both in its provisions and in matters of enforcement. While they overlap somewhat, the two documents are distinct but complementary. Parties that are EU member states must apply EU rules when dealing with each other on matters covered by the CoE convention. They must nevertheless respect the convention's objectives and apply it fully when interacting with non-EU parties.


In brief

Cyber's red telephone: the POC directory

The concept of hotlines between states, serving as immediate and secure lines of communication in times of crisis or emergency, was established many years ago. Perhaps the most famous example is the 1963 Washington–Moscow hotline, created in the wake of the Cuban Missile Crisis as a confidence-building measure to avert nuclear war.


In today's cyber world, marked by growing uncertainty and interconnection between states, and where conflicts are not uncommon, direct communication channels are indispensable to maintaining stability and peace. In 2013, states first discussed points of contact for crisis management in the context of ICT security.

After a decade of discussions under UN auspices, states finally agreed on the elements needed to establish a global, intergovernmental points of contact directory, as part of the annual progress report of the UN Open-Ended Working Group (OEWG).

What is the POC directory, and why does it matter? The POC directory is an online directory of points of contact that aims to facilitate interaction and cooperation between states in order to promote an open, secure, stable, accessible, and peaceful ICT environment. It is meant to be voluntary, practical, and neutral, and aims to increase information sharing between states and to support the prevention, detection, and response to serious or urgent ICT incidents through capacity-building efforts.

In January 2024, the UN Office for Disarmament Affairs (UNODA) invited all states to nominate, where possible, diplomatic and technical POCs. In May 2024, UNODA announced that 92 states had already done so. UNODA also announced the launch of the global POC directory and its online portal, marking the operationalisation of this confidence-building measure in the ICT security sphere.

The first ping test is scheduled for 10 June 2024, with further tests to follow every six months to keep the information up to date. Work on the directory will complement that of the computer emergency response team (CERT) and computer security incident response team (CSIRT) networks.

Each state decides how to respond to communications received through the directory. An initial acknowledgement of receipt does not imply agreement with the information shared, and all information exchanged between POCs must remain confidential.

Before the global POC directory was created at the UN level, some states already operated POCs bilaterally or regionally, such as the ASEAN–Japan cybersecurity POCs, the OSCE's network of policy and technical POCs, the Council of Europe POCs established under the Budapest Convention, and INTERPOL's cybercrime POCs. While these channels may overlap, not all states are members of these regional or subregional organisations, making the global directory an essential complement.

The directory also comes with capacity-building efforts, arguably one of its most important elements. For example, the OEWG chair is tasked with organising simulation exercises that use basic scenarios to let state representatives rehearse the practicalities of participating in a POC directory and better understand their diplomatic and technical roles.

In addition, regular in-person and virtual POC meetings will be held. One dedicated meeting this year will focus on the directory's implementation and the review of any needed improvements, so we should expect further updates on how it works in practice.

Cutting through the terminology fog: is it digital governance, internet governance, or AI governance?

The terms 'digital' and 'internet' are used almost interchangeably in governance discussions. While most uses are casual, the choice sometimes signals different approaches to governance.

The term 'digital' refers to the use of a binary representation – '0's and '1's – of elements of our social reality. The term 'internet' refers to any digital communication carried over the transmission control protocol/internet protocol (TCP/IP). You are probably reading this text thanks to TCP/IP, which carries digital signals (0s and 1s) representing letters, words, and sentences. Used precisely, the two terms describe what happens online – to introduce a third term – as well as in digitised information.
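To make the distinction concrete, here is a minimal illustrative sketch (ours, not the newsletter's) showing how a word becomes the stream of 0s and 1s that 'digital' refers to; TCP/IP is simply the protocol that moves such bits between machines:

```python
# 'Digital' = binary representation: every character is stored as bytes,
# and every byte is eight 0s and 1s. TCP/IP carries these bits in packets.
text = "internet"
bits = "".join(f"{byte:08b}" for byte in text.encode("utf-8"))

print(bits[:16])  # the first two bytes, i.e. the letters 'i' and 'n'
```

Running this prints `0110100101101110`: 'i' is byte 0x69 (01101001) and 'n' is 0x6E (01101110). Whether those bits sit on a disk or travel over TCP/IP is precisely the digital/internet distinction the article draws.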

Should we use the term internet governance in a more specific sense than digital governance? The answer is both yes and no.

'YES': one could argue that all digital phenomena relevant to governance are communicated via TCP/IP, from content to e-commerce to cybercrime. Most aspects of AI that require governance involve using the internet to interact with ChatGPT or to generate images and videos on AI platforms.

For example, regulating AI-generated deepfakes is an internet governance issue, because the harm to society comes from their spread via the internet and social media running on TCP/IP, the internet's protocol. If deepfakes merely sit on our computers, there is no call to regulate them, as they cause no harm to society.

The 'NO' answer relates to the growing tendency to govern AI beyond its uses, by regulating the algorithms and hardware on which AI runs. This approach of regulating how AI works, under the pretext of long-term risks, is problematic because it opens the door to deeper intrusion into innovation, in the name of misuses and risks that could affect the foundations of human society.

As AI governance proposals grow in popularity, we should bear in mind two lessons from the history of technology and internet governance.

First, the internet has flourished over recent decades precisely because it was regulated at the level of uses (applications). Second, whenever exceptional governance reached deeper under the technology's hood, it did so with full openness and transparency, as was the case in establishing TCP/IP, HTML, and other internet standards.

In short, if AI governance operates at the level of uses, as has been the case for most technologies throughout history, it is no different from internet governance. Although it may sound heretical given the current AI hype, one might even ask whether we need AI governance at all. Perhaps AI should be governed by existing rules on intellectual property, content, commerce, and so on.

Stepping back from the whirlwind of digital debates could help us revisit terminology and concepts we may have taken for granted. Such reflections, including on how we use the terms 'internet' and 'digital', should sharpen our thinking about the digital/internet/AI developments ahead.

This text was first published on Diplo's blog. Read the original version.


News from the Francophonie


The OIF at the World Summit on the Information Society (WSIS), 27–31 May in Geneva

One of the major events on the international digital agenda gave the OIF an opportunity to highlight its priorities and to raise decision-makers' awareness of important issues in the context of the Global Digital Compact negotiations.

The OIF thus joined the Wallonia-Brussels Federation's initiative on 'Artificial intelligence and disinformation: technical and policy solutions'. Mr Bertrand Levant, head of the OIF's Information Integrity unit, presented the organisation's initiatives in this area, notably the ODIL programme, a francophone platform for initiatives countering disinformation.

For its part, the OIF's Permanent Representation to the United Nations in Geneva and Vienna (RPGV), the organisation's lead on international digital and AI governance issues, organised two sessions. The first, held in partnership with UNESCO's Information for All Programme (IFAP) and entitled 'Understanding and overcoming cultural and linguistic biases in AI', welcomed two industry professionals to discuss concrete methods of implementing the discoverability of francophone content on the internet.


The RPGV then turned more specifically to the Global Digital Compact (GDC), organising a dialogue between Ms Renata Dwan, special adviser at the Office of the UN Secretary-General's Envoy on Technology, Ms Sorina Teleanu, Director of Knowledge at the Diplo Foundation, and Mr Henri Eli Monceau, OIF Permanent Representative in Geneva. The dialogue, closely followed by Geneva's digital-policy diplomats, addressed, among other things, the division of roles between Geneva and New York in implementing and following up on the GDC, and areas for improvement in the current versions of the document.


Francophone civic tech in the service of democratic governance

On the margins of WSIS, the RPGV organised an event on 30 May at the Graduate Institute in Geneva, in partnership with the organisation ICT4Peace, to showcase the work of francophone civic tech.

Four social entrepreneurs (Open Terms Archive, Sayna, Siren Analytics, and Fondation Hirondelle) had the opportunity to present their activities, in which AI tools serve 'public service' goals: democratic oversight of digital platforms' terms of service, the fight against disinformation, and digital skills training for disadvantaged populations. The OIF actively supports these projects and wanted to bring their work to a wider audience, convinced that many partnerships are possible with the organisations of International Geneva.


OIF assistance in the Global Digital Compact negotiations

The OIF continues its multifaceted support for francophone delegations engaged in the GDC negotiations. Beyond advocacy and awareness-raising, the OIF's Permanent Representation to the United Nations in Geneva and Vienna (RPGV) provided delegations with a detailed analysis of the changes made to the draft text in the versions following the zero draft.

Proposed wording on OIF priorities (promoting the cultural and linguistic diversity of digital content, and strengthening digital skills) was also submitted to francophone diplomats so that they could echo it in the negotiations.

In addition, a workshop entitled 'Cultural and linguistic diversity in the era of digital and emerging technologies' was organised by the OIF's Permanent Representations in New York and Geneva, in collaboration with the Group of Friends of Spanish and the Community of Portuguese Language Countries. The discussion collectively reaffirmed the importance of broadening and strengthening the Global Digital Compact's language on cultural and linguistic diversity, and reviewed concrete wording that could be introduced into the text.

The OIF at ICANN80

The Secretary-General of La Francophonie, invited by the Rwandan authorities, asked Mr Henri Monceau, OIF Permanent Representative in Geneva and Vienna, to represent her at the ICANN80 conference in Kigali (8–14 June).

Mr Monceau thus took part in numerous round tables during the conference, as well as in side events such as the meeting convened by Smart Africa.

The mission also made it possible to hold numerous bilateral meetings with the technical actors of international digital governance, in order to raise their awareness of the priorities of the OIF and of the countries of the francophone world.


DW Weekly #163 – 7 June 2024


Dear readers, 

Welcome to another issue of the Digital Watch weekly! 

AI in healthcare is not a new concept. What is new is the concept of an AI hospital town. Researchers from China’s Tsinghua University developed a virtual environment called ‘Agent Hospital’ where the entire process of diagnosing and treating patients is simulated. Virtual doctors, nurses, and patients are powered by large language models (LLMs). The environment has 14 doctors and 4 nurses and could reportedly treat as many patients in a few days as human doctors would in 2 years.

Its strength is in simulations. Because the patients aren’t ‘real’ and their roles are generated by AI – more specifically ChatGPT 3.5 – inexperienced doctors can practice diagnosis and treatment without risks. Another example researchers gave is simulating the spread and control of infectious diseases in a region. 

The town is scheduled to be operational by the second half of 2024. Researchers say it will bring high-quality, affordable and convenient healthcare services to the public.

Some questions beg for an answer. Firstly, what data were the AI agents trained on? Secondly, if a real patient inputs their medical information, wherever they are supposed to input that data (that's not clear either), how secure is that data? Thirdly, will there be a hardware component that could allow more detailed tests to be carried out? And if not, and patients have to go somewhere and get physical tests done, is using an AI doctor still affordable and convenient? Fourthly, will AI be able to issue prescriptions? If so, how will prescription abuse be prevented? These are just the questions off the top of our heads. There are six more months to go before the AI hospital town becomes operational – we're still unsure what the 'town' part is about – so more details will surely surface.

We all know that entering your symptoms in a search engine and pressing the enter key is less than advisable, since some of us will end up self-diagnosing with deadly eradicated diseases (or, possibly worse, a rational-sounding mistake), thus bringing more stress and anxiety upon ourselves. Will the AI doctor truly be more efficient? Would you entrust your healthcare to an AI agent, or would you follow up with a human doctor, just to make sure?

A caricature drawing shows a nude person sitting on a stool with a sheet covering their private parts. A mechanical AI probes and sends readings to its computer screen that says 'Patient No. 98,762. Condition: Processing ETA 0.00. It can take a few minutes or hours...'

In other news, the EU elections are in full swing. One of the most important issues of our time is disinformation, and this week, we've seen the EU accuse Russia of spreading disinformation ahead of these EU elections, and a study found TikTok failed to address disinformation effectively before the elections. Interestingly, Microsoft reported that AI had minimal impact on disinformation surrounding the EU elections. We also point you to our analysis of the impact of digital technologies on elections.

Andrijana and the Digital Watch team


Highlights from the week of 31 May-7 June 2024


The Department of Justice will lead the investigation into Nvidia, while the FTC will examine OpenAI and Microsoft.


Prominent figures in the AI community endorsed the letter, criticising the inadequate preparations made by AI companies for the potential dangers of AI technology.


Downgrading chip designs raises concerns about long-term innovation and competitiveness.


The plan aims to strengthen the chip supply chain through various measures such as promoting domestic production sites, investing in human resources, and enhancing research and development activities.


Elon Musk and Australian officials, including Prime Minister Anthony Albanese, engaged in heated exchanges over the issue.


Users can mark their posts as containing sensitive media, and access to such posts is restricted for underage users.


The massive breach of users' sensitive personal data has led a law firm to file a lawsuit against the company.


The attack has compromised blood transfusion IT systems, endangering patient health and eroding public trust in healthcare institutions.


Rwanda follows examples from other African countries, incorporating public consultations and international testing.



Reading corner

Temporal clauses

In her third post in the series ‘Speaking of Futures,’ Dr Biljana Scott delves into presuppositions and their influence on how we perceive the future, especially regarding AI.


The UN faces the challenge of integrating AI in a way that maintains its impartiality and credibility. The piece advocates for an open-source AI platform, contributed to by countries, companies, and citizens, to ensure transparency, inclusivity, and adherence to the UN's core principles.


In our June issue, we discuss the need for a digital social contract, look at discussions from WSIS+20 and AI for Good, analyse the newest AI governance developments in Europe, explain the significance of the OEWG POC directory, and attempt to reduce terminological confusion over the terms digital, internet and AI governance.

Digital Watch newsletter – Issue 90 – June 2024


Snapshot: The developments that made waves

AI governance

Chile introduced an updated national AI policy along with new legislation to ensure ethical AI development, address privacy concerns, and promote innovation within its tech ecosystem. In South Africa, the government announced the formation of an AI expert advisory council. The council will be responsible for conducting research, providing recommendations, and overseeing the implementation of AI policies in the country. Meanwhile, Zambia finalised a comprehensive AI strategy aimed at leveraging modern technologies for the country’s development.

The highly anticipated second global AI summit, held in Seoul, secured safety commitments from leading companies, emphasising the importance of collaborative efforts to address AI-related risks.

In the USA, lawmakers introduced a bill to regulate AI exports. This bill aims to control the export of AI technologies that could be used for malicious purposes or pose a threat to national security. Additionally, the US Department of Commerce is considering new export controls on AI software sold to China. This comes as the USA and China met in Geneva for discussions on AI risks. 

Concerns about AI safety and transparency were highlighted by a group of current and former OpenAI employees who issued an open letter warning that leading AI companies lack the necessary transparency and accountability to prevent potential risks.

Technologies

The USA is set to triple its semiconductor production by 2032, widening the chipmaking gap with China. Chinese AI chip firms, including industry leaders such as MetaX and Enflame, are downgrading their chip designs to comply with the Taiwan Semiconductor Manufacturing Company's (TSMC) stringent supply chain security protocols and regulatory requirements, raising concerns about long-term innovation and competitiveness. South Korea has unveiled a substantial 26 trillion won (USD 19 billion) support package for its chip industry, while the EU Chips Act will fund a new chip pilot line with EUR 2.5 billion. Japan is considering new legislation to support the commercial production of advanced semiconductors. 

Neuralink’s first human trial of its brain implant faced significant challenges as the device’s wires retracted from the brain, affecting its ability to decode brain signals.

Cybersecurity

High-level talks between the US and China brought attention to cyber threats like the Volt Typhoon, reflecting escalating tensions. US and British officials underscored that China poses a formidable cybersecurity threat. Meanwhile, the Open-ended Working Group (OEWG) on ICT Security established a Global POC Directory, aiming to bolster international response to cyber incidents. Read more below.

Suspicions arose over a massive cyberattack on the UK’s Ministry of Defence, with China being implicated. The Qilin group claimed responsibility for a cyberattack on Synnovis labs, disrupting key services at London hospitals. A data breach claimed by IntelBroker targeted Europol, amplifying concerns over law enforcement data security. Additionally, Ticketmaster suffered a data breach which compromised 560 million users’ personal data, and is facing a class action lawsuit.

Infrastructure

US officials warned telecom companies that a state-controlled Chinese company that repairs international undersea cables might be tampering with them. Google announced it will build Umoja, the first undersea cable connecting Africa and Australia. Zimbabwe has granted Elon Musk’s Starlink a license to operate in the country.

Legal

In a major legal battle, TikTok and creators on TikTok have sued the US government over a law that requires the app to sever ties with its Chinese parent company, ByteDance, or face a ban in the USA. This prompted the US Court of Appeals for the District of Columbia to expedite its review of the TikTok ban law.

A legal battle between Elon Musk’s X and the Australian cyber safety regulator over the removal of 65 posts showing a video of an Assyrian Christian bishop being stabbed has come to an end. In April, the Federal Court of Australia, acting upon the eSafety Commissioner’s application, issued a temporary worldwide order mandating X to hide the video content. However, in May, the court rejected the regulator’s motion to extend this order, leading the regulator to drop its legal proceedings against X.

The EU launched an investigation into Facebook and Instagram over concerns about child safety. Meanwhile, Italy’s regulatory body fined Meta for misuse of user data.

Internet economy

Microsoft, OpenAI, and Nvidia found themselves under the antitrust microscope in the USA for their perceived dominance in the AI industry. Rwanda announced plans for a digital currency by 2026, while the Philippines approved a stablecoin pilot program.

Digital rights

The European Court of Human Rights ruled that Poland’s surveillance law violates the right to privacy, lacking safeguards and effective review. Bermuda halted its facial recognition technology plans due to privacy concerns and project delays, reflecting global hesitations about the technology’s impact on civil liberties.

OpenAI was criticised for using Scarlett Johansson’s voice likeness in ChatGPT without her consent, highlighting issues of privacy and intellectual property rights. GLAAD’s report found major social media platforms fail to handle the safety, privacy, and expression of the LGBTQ community online.

Google launched its Results about you tool in Australia to help users remove search results that contain their personal information. Leading global internet companies are working closely with EU regulators to ensure their AI products comply with the bloc's data protection laws, Ireland's Data Protection Commission stated. However, OpenAI is in hot water with the European Data Protection Board over the accuracy of ChatGPT output.

Development

The EU has officially enacted the Net-Zero Industry Act (NZIA) to bolster clean technology manufacturing within the EU.

South Africa pledged to bridge the digital divide in the country and expand internet access for all. Morocco launched a programme to expand high-speed internet to 1,800 rural areas. The Connected Generation (GenSi) programme was launched to provide digital skills to youth and women in rural Indonesia.

Sociocultural

The EU launched an investigation into disinformation on X after the Slovakian Prime Minister's shooting. X also officially began allowing adult content.

Morocco announced its Digital Strategy 2030, which aims to digitise public services and enhance the digital economy to foster local digital solutions, create jobs, and add value. Zambia reached a key milestone in digital ID transformation, digitising 81% of its paper ID cards in 3 months.

OpenAI announced that it had disrupted five covert influence operations that misused its AI models for deceptive activities online, targeting issues such as Russia’s invasion of Ukraine, the Gaza conflict, Indian elections, and politics in Europe and the USA. A survey revealed widespread concerns about potential AI abuse in the upcoming US presidential election. EU elections are a hot topic at the beginning of June. The EU accused Russia of spreading disinformation ahead of these elections, and a study found TikTok failed to address disinformation effectively before the elections. Interestingly, Microsoft reported that AI had minimal impact on disinformation surrounding the elections.

New York lawmakers are preparing to ban social media companies from using algorithms to control content seen by youth without parental consent. Meanwhile, Australia announced a trial for age verification technologies to improve online safety for minors. 


Digital governance in focus at WSIS+20 Summit and AI for Good Global Summit 2024

The last week of May saw two attention-grabbing digital events: the WSIS+20 Forum and the AI for Good Global Summit. The former was featured heavily on the agenda of digital policymakers, given the upcoming 20-year review of the implementation progress of the WSIS outcomes as defined in the Geneva Declaration and the Tunis Agenda. Meanwhile, 2024 also saw the negotiation of the Global Digital Compact (GDC), where UN member states are to reaffirm and strengthen their commitment to digital development and effective governance. It is only natural that during this year’s WSIS+20 Forum, a question lingered on the tip of everyone’s tongue: What is the relevance of the WSIS outcomes in light of the GDC? 


According to the first revision of the GDC, UN member states are to ‘remain committed to the outcomes of the [WSIS]’. Stakeholders gathered at the WSIS+20 Forum to reflect on and compare the GDC and the WSIS+20 review processes. Among participants, there was an evident concern about duplicating existing frameworks for digital governance, which increases the complexity and burden for stakeholders to follow and implement both processes; speakers emphasised the need to align the two processes, and especially to leverage the inclusive multistakeholder model laid out by the WSIS process. Some reiterated the importance of the Internet Governance Forum (IGF), a key digital policy platform for multistakeholder discussion born from the Tunis Agenda, to be harnessed in the implementation and alignment of both the WSIS Action Lines (ALs) and the GDC principles. Some stressed the necessity of concrete follow-up mechanisms to the GDC and suggested that the WSIS+20 review process be a critical time of reflection. Still others looked at the regional dimension, underscoring local needs, multiculturalism, and multilingualism as essential to be reflected in both processes. 

There was also a more nuanced discussion around the relevancy of the overall WSIS process to digital governance. Ahead of the 20-year review in 2025, experts reflected on the achievements of the WSIS process so far, especially in fostering digital cooperation among civil society groups, the private sector, and other stakeholders. Some called for increased collaboration among the UN system to make more substantive progress in the implementation of the WSIS ALs; others encouraged the technical community to be further integrated into the WSIS+20 review process. 

The AI for Good Global Summit, on the other hand, is relevant to digital governance in two ways. For one, it serves as a platform for AI actors to convene, exchange, network, and seek scaling opportunities. Not only does the ITU initiative feature a global summit, but it also hosts year-round workshops and initiatives that encourage AI developers and researchers to innovate creative solutions to global challenges. In the first revision of the GDC, AI for Good is also mentioned for its role as a mechanism for AI capacity building. 

Second, the AI for Good Global Summit provides a platform for business leaders, policymakers, AI researchers, and others to openly discuss AI governance issues, exchange high-potential use cases in advancing the SDGs, and establish cross-sectoral partnerships that go beyond the 3-day event. This year’s summit featured an AI governance day where high-level policymakers and top AI developers can deliberate on major AI governance processes, the role of the UN system in advancing such processes, and the conundrum for governments to balance risks and gains from AI development.

The conversations at the summit featured more technical topics than conventional policy discussions. Experts evaluated the pros and cons of open source vs proprietary large language models (LLMs) and the potential to set standards for the harmonisation of high-tech industries or responsible and equitable development of the technology. Linguistic and cultural diversity in AI development was also highlighted as key, especially as LLMs are taking centre stage.

GIP provided just-in-time reports from WSIS+20 and AI for Good Global Summit.

dig.watch

The WSIS+20 Forum High-Level Event, part of the World Summit on the Information Society process, was held 27–31 May. The meetings reviewed progress related to information and knowledge societies, shared best practices, and built partnerships.

dig.watch

The AI for Good Global Summit was held 30–31 May, in Geneva, Switzerland. This event, part of the AI for Good platform, focused on identifying practical AI applications to advance the SDGs globally.


Do we need a digital social contract?

You might be familiar with the concept of a social contract. The idea is that individuals want to leave behind the state of nature, where there is no political order, and so they form a society, consenting to be governed by an authority in exchange for security or civil rights. 

The digital age raises the question: Can the state deliver on its part of the social contract? Do we need a new social contract for the online era, which will re-establish the relationship of trust between citizens and the state? Is it enough to bring citizens and the state to the same table? Or should today’s social contract also specifically involve the technical community and the private sector, which manage most of the online world? 

Modern society may need a new social contract between users, internet companies, and governments, in the tradition of Thomas Hobbes’s Leviathan (exchange freedom for security) or Rousseau’s more enabling Social Contract (individual vs commercial/political will). The new agreement between citizens, governments, and businesses should address the following questions: What should the respective roles of governments and the private sector be in protecting our interests and digital assets? Would a carefully designed checks-and-balances system with more transparency be sufficient? Should the new social contract be global, or would regional and national contracts work?

Drawing captures the thought-bubble discussion of philosophers including Thomas Hobbes, David Hume, John Locke, and Hugo Grotius among a diverse group of government, academic, technical community, civil society and international organisation representatives at a large conference table covered with papers and computers.

A social contract could address the principal issues and lay the foundation for the development of a more trustworthy internet. Is this a feasible solution? Well, there is reason for cautious optimism based on shared interests in preserving the internet. For internet companies, the more trusting users they have, the more profit they can make. For many governments, the internet is a facilitator of social and economic growth. Even governments who see the internet as a subversive tool have to think twice before they interrupt or prohibit any of its services. Our daily routines and personal lives are so intertwined with the internet that any disruption to it could catalyse a disruption for our broader society. Thus, a trustworthy internet is in the interests of the majority.

Rationally speaking, there is a possibility of reaching a compromise around a new social contract for a trusted internet. Still, our optimism should remain cautious, since politics (especially global politics), like trust (and global trust), is not necessarily rational.

This text was adapted from the opinion pieces 'The Internet and trust' and 'In the Internet we trust: Is there a need for an Internet social contract?'.

www.diplomacy.edu

The blog explores the critical role trust plays in the fabric of internet governance and digital ecosystems. It argues for strengthened trust mechanisms to enhance security and cooperation in the digital age.

www.diplomacy.edu

The author examines the necessity of an internet social contract to foster trust and cooperation online, highlighting the importance of defining digital rights and responsibilities for a harmonious cyberspace. This contemplation stresses the need for a collective agreement to guide internet behaviour and governance.


AI governance milestones in Europe

In the span of a week, two significant developments in AI governance happened in Europe: the Council of Europe’s Committee of Ministers adopted a convention on AI and human rights, while the European Council gave final approval to the EU AI Act. Reads just a bit confusing at first glance, doesn’t it? 

Firstly, these are different bodies: The Council of Europe (CoE) is a European human rights organisation, while the European Council is one of the executive bodies of the EU. Secondly, logically, the two documents in question are also different, yet they share many similarities.

Both documents define an AI system as a machine-based system that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. 

Both documents aim to ensure AI systems support and do not undermine human rights, democracy, and the rule of law. However, the CoE’s framework convention provides a broad, overarching structure applicable to all stages of the AI lifecycle. The EU’s AI Act is more specific to the EU’s single market and also aims to ensure a high level of protection of health, safety, and environmental protection. 

A risk-based approach is adopted in both documents, meaning the higher the potential harm to society, the stricter the regulations are. Both documents highlight that natural persons should be notified if they are interacting with an AI system, and not another natural person. 

Neither of the documents extends to national security and defence activities. Neither document covers AI systems or models developed for scientific research and development.

Parties to the CoE convention can assess whether a moratorium, a ban or other appropriate measures are needed with respect to certain uses of AI systems if the party considers such uses incompatible with the respect for human rights, the functioning of democracy or the rule of law.

The CoE convention is international in nature, and it is open for signature by the member states of the CoE, the non-member states that have participated in its drafting, and the EU. It will be legally binding for signatory states. The EU AI Act is, on the other hand, applicable only in EU member states. 

Further, the measures enumerated in these two documents are applied differently. In the case of the CoE framework convention, each party to the convention shall adopt or maintain appropriate legislative, administrative or other measures to give effect to the provisions set out in the convention. Its scope is layered: While signatories are to apply the convention to 'activities undertaken within the lifecycle of AI systems undertaken by public authorities or private actors acting on their behalf', they may decide whether to apply it to the activities of private actors as well, or to take other measures to meet the convention's standards in the case of such actors. 

On the other hand, the EU AI Act is directly applicable in the EU member states. It does not offer options: AI providers who place AI systems or general-purpose AI models on the EU market must comply with the provisions, and so must providers and deployers of AI systems whose outputs are used within the EU. 

Our conclusion? The CoE convention has a broader perspective, and it is less detailed than the EU AI Act, both in provisions and in enforcement matters. While there is some overlap between them, these two documents are distinct yet complementary. Parties that are EU member states must apply EU rules when dealing with each other on matters covered by the CoE convention. However, they must still respect the CoE convention's goals and fully apply the convention when interacting with non-EU parties.

A humanoid AI sits at a table discussing with two humans holding papers. The large words 'OBEY' and 'SERVE' and smaller charts show on the wall. Smaller indicators name the EU AI ACT and the CoE Convention on AI.

The red telephone of cyber: The POC Directory

The concept of hotlines between states, serving as direct communication lines for urgent and secure communication during crises or emergencies, was established many years ago. Perhaps the most famous example is the 1963 Washington-Moscow Red Phone, created during the Cuban Missile Crisis as a confidence-building measure (CBM) to prevent nuclear war. 

A coloured drawing, divided in two, depicts an old-fashioned red telephone held by a person in a classic war room with a map and a model battlefield game board, and another, on the right, in a differently decorated but comparable war room.

In today's cyberworld, where uncertainty and interconnectedness between states are ever greater, and conflicts are not unheard of, direct communication channels are indispensable for maintaining stability and peace. In 2013, states discussed points of contact (POCs) for crisis management in the context of ICT security for the first time. 

After a decade of discussions under the auspices of the UN, states finally agreed on elements for the operationalisation of a global, intergovernmental POC directory as a part of the UN Open-ended working group (OEWG) Annual Progress Report (APR).

What is the POC directory and why is it important? The POC directory is an online repository of POCs which aims to facilitate interaction and cooperation between states to promote an open, secure, stable, accessible, and peaceful ICT environment. The directory is intended to be voluntary, practical, and neutral, aiming to increase information sharing between states and further the prevention, detection, and response to urgent or significant ICT incidents through related capacity-building efforts.

In January 2024, the UN Office for Disarmament Affairs (UNODA) invited all states to nominate, where possible, both diplomatic and technical POCs. In May 2024, UNODA announced that 92 states had already done so. UNODA also announced the launch of the global POC Directory and its online portal, marking the operationalisation of these CBMs in the sphere of ICT security.

The first ping test is planned for 10 June 2024, and further such tests will be conducted every six months to keep the information up-to-date. The work on the directory will complement the work of the Computer Emergency Response Teams (CERTs) and Computer Security Incident Response Teams (CSIRTs) networks.

Each state decides how to respond to communications received via the directory. Initial acknowledgement of receipt does not imply agreement with the information shared, and all information exchanged between POCs is to remain confidential.

Before the establishment of the global POC directory at the UN level, some states already used POCs at bilateral or regional levels, such as the ASEAN-Japan cybersecurity POCs, OSCE network of policy and technical POCs, CoE POCs established by the Budapest Convention, and INTERPOL POCs for cybercrime. While these channels may overlap, not all states are members of such regional or sub-regional organisations, making the global directory a crucial addition. 

Capacity-building efforts also accompany the directory, and they are probably among its most important elements. For example, the OEWG chair is tasked with convening simulation exercises that use basic scenarios to allow representatives from states to simulate the practical aspects of participating in a POC directory and better understand the roles of diplomatic and technical POCs. Additionally, regular in-person and virtual meetings of POCs will be convened. A dedicated meeting will be held this year to implement the directory and consider necessary improvements, so we should expect further updates on the directory's practical implementation.



Reducing terminological confusion: Is it digital or internet or AI governance?

Digital and internet are used almost interchangeably in governance discussions. While most uses are casual, the choice sometimes signals different governance approaches. 

The term digital refers to the binary representation – via '0' and '1' – of artefacts in our social reality. The term internet refers to any digital communication conducted via the Transmission Control Protocol/Internet Protocol (TCP/IP). You're probably reading this text thanks to TCP/IP, which carries digital signals (0s and 1s) that represent letters, words, and sentences. Using both terms correctly describes what is going on online – to introduce a third term – and in digitised information. 
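The binary representation described above can be made concrete with a minimal, illustrative Python sketch (the helper names are our own, not from any cited tool): text becomes bytes, bytes become binary digits, and protocols such as TCP/IP then carry those digits across the network.

```python
def to_bits(text: str) -> str:
    """Encode text as UTF-8 bytes, then render each byte as 8 binary digits."""
    return " ".join(f"{byte:08b}" for byte in text.encode("utf-8"))

def from_bits(bits: str) -> str:
    """Reverse the encoding: binary digits -> bytes -> text."""
    data = bytes(int(chunk, 2) for chunk in bits.split())
    return data.decode("utf-8")

message = "internet"
encoded = to_bits(message)
print(encoded)                      # '01101001 01101110 ...'
assert from_bits(encoded) == message  # the round trip recovers the text
```

Everything online, from a newsletter to a deepfake video, ultimately travels as such sequences of 0s and 1s.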

Should we use the term internet governance in a more specific sense than digital governance? The answer is both yes and no.

YES, one could say that all digital phenomena with relevance for governance are communicated via TCP/IP, from content to e-commerce and cybercrime. Most AI aspects that require governance are about using the internet to interact with ChatGPT or to generate images and videos on AI platforms.

For instance, the regulation of AI-generated deepfakes is an internet governance issue, as the harm to society is caused by their distribution through the internet and social media, powered by TCP/IP. Deepfakes merely stored on our computers require no governance, as they cause no harm to society.

The answer NO relates to the increasing push to govern AI beyond its uses, through the regulation of the algorithms and hardware on which AI operates. This approach of regulating how AI works, justified mainly by long-term risks, is problematic, as it opens the door to deeper intrusion into innovation, with its own misuses and risks that can affect the fabric of human society.

As proposals for AI governance become more popular, we should keep in mind two lessons from tech history and internet governance. 

First, the internet has grown over the last few decades precisely because it has been regulated at the uses (applications) level. Second, whenever exceptional management went deeper under the bonnet of technology, it was done with full openness and transparency, as has been the case with setting TCP/IP, HTML, and other internet standards. 

In sum, if AI governance takes place at the uses level, as has been done for most technologies in history, it is not different from internet governance. Although it may sound heretical, given the current AI hype, one might even question whether we need AI governance at all. Perhaps AI should be governed by existing rules on intellectual property, content, commerce, etc.

Stepping back from the whirlpool of digital debates could help us revisit terminology and concepts that we might have taken for granted. Such reflections, including how we use the terms internet and digital, should increase clarity of thinking on future digital/internet/AI developments.

This text was first published on Diplo’s blogroll. Read the original version.

www.diplomacy.edu

The blog discusses the importance of distinguishing between digital and internet governance. It emphasises the need for precision in terminology to accurately describe online activities and the governance required. 


DW Weekly #162 – 31 May 2024


Dear readers, 

Welcome to another issue of the Digital Watch weekly! 

How much is too much? Everything in moderation, the saying goes, and Indonesians surely have it on their minds these days. The country is grappling with a huge excess of government digital public service apps, as new applications were created with each change in leadership. This is bound to cause confusion among citizens and drain resources from the ministries and regional administrations that operate the apps. The proposed solution? A hard stop on making new apps and a green light on integrating the existing ones – all 27,000 of them – into a single platform, INA Digital. Obviously, this will be a huge undertaking: 400 local digital talents are working on it under GovTech Indonesia. By September, the platform is expected to integrate services from at least 15 ministries. One drawback, as with any centralised system, is cybersecurity: data will still be stored by individual ministries, which will need to safeguard it under shared protocols.

Drawing of a person reaching out for help from beneath a mountain of apps. The pile of apps has a sinister shadow.

Things are moving along in the TikTok vs USA case: The US Court of Appeals for the District of Columbia has agreed to speed up the review of the ‘TikTok ban law’ that mandates that ByteDance divest TikTok’s US assets. Oral arguments are to be presented in September. However, reports suggest that TikTok is developing a new recommendation algorithm for its US-based users, separate from its current one, which might pave the way for the divestiture. Historically speaking, no remedy TikTok or ByteDance has offered has managed to reassure the USA that the companies are not beholden to China. Will this be enough?

This week, Geneva hosted WSIS+20 and the AI for Good Global Summit 2024. Discussions are almost finished at WSIS – even as we’re sending this email, ITU Secretary-General Doreen Bogdan-Martin is delivering closing remarks. Just a few more hours are left of the AI for Good programme. One of the most exciting points of the discussions was Sam Altman’s interview, where he touched upon OpenAI’s alleged use of Scarlett Johansson’s voice likeness in its chatbot, ChatGPT. Altman claimed the company didn’t intend to make the voice sound like Johansson. 

And speaking of ChatGPT, a survey has revealed that up to 30% of respondents have no knowledge of generative AI tools and their implications.

In other news this week, the EU established its AI Office, which is set to play a key role in the implementation of the AI Act. International law enforcement agencies took down 911 S5, the biggest botnet ever, Zambia joined the list of countries with national AI strategies, and more.

Andrijana and the Digital Watch team


Highlights from the week of 24-31 May 2024


By integrating public services into the INA Digital platform, the Indonesian government hopes to achieve cost savings and enhance service delivery.


The AI Office is structured into different units, each with specific responsibilities. Led by the Head of the AI Office and advised by a Lead Scientific Adviser and an international…


The research shows that 20-30% of respondents in six countries have not heard of ChatGPT or other popular AI tools.


The Zambian ministry is also actively training its workforce in AI, indicating a commitment to building the necessary human capital to fully leverage AI technologies.


The companies have sought the European Commission’s input on their new AI products, particularly those in the large language model space.


National privacy watchdogs have raised concerns about the widely used AI service, leading to ongoing investigations.


Chinese chip shares have seen an increase lately as seventeen investors, including five major Chinese banks, contributed to the fund, each adding around 6% to the total capital.


The bill received significant support in Congress due to concerns among lawmakers about Chinese access to Americans’ data and potential espionage through the app.


TikTok and ByteDance have filed a lawsuit in US federal court to challenge the law forcing a sale or ban of the app.


The net-zero industry act is one of the three key legislative initiatives of the Green Deal Industrial Plan.


Musicians are increasingly concerned about the misuse of their identities in AI-generated content.


The arrest was part of a multiagency operation involving law enforcement from the US, Singapore, Thailand, and Germany.


The operation was conducted across Europe and North Asia by Europol and its partners. Four individuals were arrested.


The discussion highlighted key challenges such as limited CSO involvement in digital policy, financial barriers, and overrepresentation of Global North organisations. The CADE project aims to address these by offering tailored training to CSOs to enhance their internet governance skills and foster constructive dialogue with policymakers.



ICYMI

Flags help us uncover the basics of AI. Watch the video to find out what they teach us about pattern recognition and probability.


Check out our dedicated WSIS+20 and AI for Good Global Summit 2024 webpages for reports from selected sessions. 


The WSIS+20 Forum High-Level Event, part of the World Summit on the Information Society process, was held 27–31 May. The meetings reviewed progress related to information and knowledge societies, shared best practices, and built partnerships.


The AI for Good Global Summit was held 30–31 May in Geneva, Switzerland. This event, part of the AI for Good platform, focused on identifying practical AI applications to advance the SDGs globally.


Reading corner

dig.watch

Continuing the three-part series on AI’s influence on intellectual property (IP), this final section touches upon the approaches applied to safeguard IP in the AI Age.

www.diplomacy.edu

In Scott’s first blog post, ‘Speaking of Futures (1): Story-capsules’, she looked at how story-capsules found in connotations and metaphors can subliminally influence the way we think about the future. This week, Scott looks at the framing devices in broader narratives about AI, from science fiction and cautionary tales to logical fallacies.

www.diplomacy.edu

In 2015, Aldo Matteucci commented on the ambitious plan for a big data-driven encyclopedia of religious cultural history, highlighting issues with Western biases and the difficulty of quantifying religious experiences.