Making generative AI safe is still the talk of the tech world, while TikTok continues to run into hurdles, and the US-China chips war keeps getting more heated. The UK has revealed details of its cyber operations, and law enforcement won a significant battle against cybercrime when it took down the dark web marketplace Genesis Market. We round off the digital policy updates of this issue with the EU initiative to shape its vision of virtual worlds.
Andrijana and the Digital Watch team
// HIGHLIGHT //
Geneva SDOs chime in: Standards are the answer to safe AI development
The safe development of AI seems to be on everyone’s minds these days. If you’ve been reading any tech-related news, you’re probably aware of Pause Giant AI Experiments: An Open Letter. (Sidebar: if you haven’t read it, tech experts, including Elon Musk, Steve Wozniak, and Yuval Harari, are asking tech giants for a six-month pause in the training of AI systems more powerful than GPT-4, until we can ensure that their effects will be positive and their risks manageable.) The letter has encountered criticism; for example, Bill Gates believes that pausing AI development won’t solve the challenges and would be difficult to enforce. Ex-Google CEO Eric Schmidt commented that such a pause ‘will simply benefit China’.
Three key Geneva-based international standards-developing organisations (SDOs) chimed in. In their reply to the open letter, the International Electrotechnical Commission (IEC), the International Organization for Standardization (ISO), and the International Telecommunication Union (ITU) highlight the role that standards can have in safe AI development. Standards underpin regulatory frameworks, provide ‘appropriate guardrails for responsible, safe, and trustworthy AI development’, and ‘can help mitigate the risks associated with AI systems and ensure that they are aligned with societal values and expectations’. The three SDOs invited interested stakeholders to join the work of developing consensus-based international standards and encourage their adoption.
But international standards take time to develop, and their uptake by the industry is very much a voluntary matter (with some exceptions, where regulations may require compliance with specific standards). So some countries have started taking matters into their own hands, at least to alleviate data privacy concerns: After Italy’s much-talked-about (temporary) ban of ChatGPT, privacy regulators in France, Ireland, and Switzerland reached out to their Italian counterparts to find out more about the basis of the ban, and Germany is considering a ban as well.
The company behind ChatGPT, OpenAI, has since offered remedies in Italy, committing to enhance transparency in its use of personal data, strengthen the mechanisms through which data subjects can exercise their rights, and introduce safeguards for children.
But the company may also have to offer remedial measures in Canada, where the Office of the Privacy Commissioner of Canada will investigate a complaint alleging that OpenAI collected, used and disclosed personal information without consent.
More countries are keeping a close eye on generative AI (rhyme not intended). The UK’s Information Commissioner’s Office (ICO) has stressed that organisations developing or using generative AI must approach data protection by design and default and outlined eight important questions that developers and users should consider. Switzerland’s Federal Data Protection and Information Commissioner had a similar message, advising users to examine the purposes for which text or images they upload are used and reminding companies that employ AI to observe data protection legislation.
Will more countries follow with their own warnings? Almost certainly.
// TIKTOK //
Bans, lawsuits, fines and investigations
The latest in the slew of bans on TikTok comes from Australia, which opted to ban the app on government devices, similar to its Five Eyes intelligence allies and multiple European countries. Albania is also contemplating such a ban.
TikTok has been blasted for allowing children under 13 to use its app, in contradiction of its own terms of service. In the last week, it has been fined for this reason in the UK and sued in Portugal. The UK fine also covers the collection of children’s data without parental consent.
Another lawsuit has been filed in Portugal against TikTok for ‘misleading commercial practices’ and ‘opaque privacy policies’.
Vietnam, TikTok’s sixth biggest market, is also set to open an investigation into the platform because of the harmful content and false information that its algorithm can suggest. If the platform is found to be in violation, strict fines will be imposed.
Japan’s move is far from surprising; rumours that it would join the USA (the ‘certain country’ China seems to be referencing) and the Netherlands in instituting chip export rules started swirling in January. The USA was the first country in this triad to implement such rules, in October.
China took this fight to the WTO in December when it initiated a trade dispute procedure against US chip export control measures, arguing that these measures ‘threatened the stability of the global industry supply chains’. After the Netherlands and Japan seemingly followed the US move in March, China has now urged the WTO to monitor chip export restrictions, and the trio’s deal, arguing that it violates the fairness and transparency principles of the WTO. China also asked the trio to confirm whether they have reached a private deal on chip exports.
The UK National Cyber Force (NCF) disclosed how it conducts ‘responsible cyber operations to counter state threats, support military operations, and disrupt terrorists and serious criminals’.
The document Responsible cyber power in practice highlights that the NCF’s cyber operations are accountable, precise, and calibrated: they are conducted in a legal and ethical manner, timed and targeted with precision, and their intended impact is carefully assessed.
The NCF’s approach to adversarial cyber operations is based on the ‘doctrine of cognitive effect’ – using techniques that have the potential to sow distrust, decrease morale, and weaken the targets’ ability to plan and conduct their activities effectively, with the goal of changing their behaviour.
The NCF highlighted that its operations are covert, and the intent is that adversaries do not realise that the effects they are experiencing are the result of a cyber operation, which is why it was not forthcoming with details. The NCF did state that it has protected military deployments overseas; disrupted terrorist groups; countered sophisticated, stealthy and continuous cyber threats; countered state disinformation campaigns; reduced the threat of external interference in democratic elections; and removed child sexual abuse material from public spaces online.
A joint international law enforcement operation has seized Genesis Market, a dark web marketplace which offered access to over 80 million sets of account credentials, such as usernames and passwords for email, bank, and social media accounts. The operation, led by the US FBI and the Dutch National Police, involved 17 countries and was codenamed Operation Cookie Monster.
// METAVERSE //
The EU seeks feedback on its vision for virtual worlds
The European Commission is presenting an initiative on virtual worlds – metaverses – entitled ‘An EU initiative on virtual worlds: A head start towards the next technological transition’. Its goal is to develop a vision for virtual worlds based on respect for digital rights and EU laws and values. The European Commission will seek feedback from stakeholders and the public through citizen panels and targeted workshops.
11–13 April: The Digital Rights and Inclusion Forum will be held in Nairobi, Kenya, under the theme ‘Building the sustainable internet for all’. The forum will highlight Africa’s challenges and provide solutions for a sustainable online future for everyone.
11–21 April: The fifth session of the Ad Hoc Committee on Cybercrime will consider the preamble, provisions on international cooperation, preventive measures, technical assistance, the mechanism of implementation, and the final provisions of the future convention on cybercrime.
12–13 April: The ECOM21 2023 conference in Riga, Latvia, will discuss business operations, technology, and regulatory frameworks.
13 April: The Global Digital Compact (GDC) co-facilitators are organising a series of thematic deep dives to prepare for intergovernmental negotiations on the GDC. The 13 April discussion will cover internet governance. As these in-depth discussions unfold, the GIP Digital Watch will examine how the GDC’s focus topics have been tackled in different key policy documents. Visit our dedicated GDC page on the Digital Watch observatory to read more about how issues related to internet governance have been covered in such documents.
Diplo and the Geneva Internet Platform (GIP) are organising an event on 13 April entitled Technology and Diplomacy: The Rise of Multilateralism in the Bay Area in San Francisco, California, where we will officially launch our ‘Tech Diplomacy Practice in the San Francisco Bay Area’ report. If you’re based in San Francisco, register and join us!
Can sharks eat the internet?
Well, no, not really. But the headline gets us all thinking about the extreme vulnerability of the undersea infrastructure on which the digital world relies, Diplo’s director Dr Jovan Kurbalija writes.
Can AI beat human intuition?
Check for yourself! What does your intuition tell you: did AI write text A or text B in this blog post?
Latest edition of Digital Watch newsletter
The freshly published April issue of our monthly newsletter on digital policy includes: a look at TikTok coming under fire from several countries due to data privacy and national security concerns, a look at how the GPT-4 model is pushing the boundaries of AI development, and a summary of the fourth substantive session of the OEWG 2021–2025, which continued its discussions on cybersecurity.
Andrijana Gavrilovic
Editor, Digital Watch, and Head of Diplomatic and Policy Reporting, DiploFoundation
Was this newsletter forwarded to you? Would you like to see more?
Digital policy developments that made global headlines
The digital policy landscape changes daily, so here are all the main developments from March. There’s more detail in each update on the Digital Watch Observatory.
Global digital architecture
The Global Digital Compact (GDC) co-facilitators organised a thematic deep dive on digital inclusion and connectivity to prepare for intergovernmental negotiations on the GDC.
Sustainable development
UNCTAD’s Technology and Innovation Report 2023 explores the potential benefits of green innovation for developing nations, including driving economic growth and enhancing technological capabilities.
The European Commission unveiled the Net-Zero Industry Act to boost clean energy technologies in the EU and support a transition to a more sustainable and secure energy system. It also adopted a new proposal aiming to make the repair of goods easier and cheaper for consumers, and introduced a new act to enhance the resilience and security of critical raw materials supply chains in the EU, reducing reliance on imports from third countries.
A trove of leaked documents, dubbed the Vulkan files, has revealed Russia’s cyberwarfare tactics against adversaries such as Ukraine, the USA, the UK, and New Zealand. Ukraine’s computer emergency response team (CERT-UA) has recorded a spike in cyberattacks on Ukraine since the start of the year.
The UK National Cyber Force (NCF) disclosed details about its approach to responsible cyber operations.
E-commerce and internet economy
A high-level group has been established to provide the European Commission with advice and expertise related to the implementation and enforcement of the Digital Markets Act (DMA).
Brazil will impose new tax measures to tackle unfair competition from Asian e-commerce giants and limit tax benefits for companies.
Infrastructure
State-owned Chinese telecom companies are investing $500 million to build their own undersea fibre-optic internet cable network to compete with a similar US-backed project amid the ongoing tech war between the two countries.
ICANN, the organisation responsible for managing the internet’s address book, is preparing to launch a new gTLD round.
A UK watchdog has fined TikTok $16 million for collecting children’s data without parental consent. A Portuguese NGO sued TikTok for allowing children under 13 to join without parental permission and adequate protection.
Content policy
Google will no longer block news content in Canada, which it did temporarily in response to draft rules that would require internet platforms to compensate Canadian media companies for making news content available. At the same time, Meta has announced that it will end access to news content for Canadian users if the rules are introduced in their current form.
The prime ministers of Moldova, the Czech Republic, Slovakia, Estonia, Latvia, Lithuania, Poland, and Ukraine have signed an open letter which calls on tech firms to help stop the spread of false information.
Jurisdiction and legal issues
A US judge has ruled that the Internet Archive’s digital book-lending programme violates copyrights, potentially setting a legal precedent for future online libraries.
China’s State Council Information Office (SCIO) has released a white paper recapping the country’s laws and regulations on the internet.
UK regulators have revised their stance on Microsoft’s acquisition of Activision Blizzard, having previously raised concerns that it would harm competition in console gaming.
TikTok has come under fire from several countries due to data privacy and national security concerns. The core of the issue seems to lie in TikTok’s ownership by the Chinese company ByteDance, as China’s 2017 National Intelligence law requires companies to assist with state intelligence work, raising fears about the transfer of user data to China. Additionally, there are concerns that the Chinese government could use the platform for espionage or other malicious purposes. Several countries have sued TikTok for exposing children to harmful content and other practices that put their privacy at risk, as well.
TikTok has tried to alleviate fears of two global leaders on tech regulations – the USA and the EU. The company has committed to moving US data to the USA under Project Texas. European data security would be achieved by Project Clover, which includes security gateways that will determine data access and data transfers outside of Europe, external auditing of data processes by a third-party European security company, and new privacy-enhancing technologies.
During the past month, Belgium, Norway, the Netherlands, the UK, France, New Zealand, and Australia issued guidelines against installing and using TikTok on government devices. A ban contemplated by Japan is more general: Lawmakers will propose banning social media platforms if used for disinformation campaigns.
The much-publicised testimony of TikTok CEO Shou Chew before the US Congress didn’t garner the company much legal favour in the USA: The lawmakers are still not convinced that TikTok is not beholden to China. It seems the USA will be proceeding with legislation (most likely the RESTRICT Act) to ban the app. That might be an uphill battle: Critics argue that banning TikTok may violate First Amendment rights and would set a dangerous precedent of curtailing the right to free expression online. Another option is divestiture, whereby ByteDance would sell the US operations of TikTok to a US-owned entity.
Chew testifies before the US Congress. Source: CNN
What does China have to say?
At the beginning of March, China fiercely criticised the USA: Chinese Foreign Ministry spokesperson Mao Ning stated ‘We demand the relevant US institutions and individuals discard their ideological bias and zero-sum Cold War mentality, view China and China-U.S. relations in an objective and rational light, stop framing China as a threat by quoting disinformation, stop denigrating the Communist Party of China and stop trying to score political points at the expense of China-USA relations.’ Mao added, ‘How unsure of itself can the US, the world’s top superpower, be to fear a young person’s favourite app to such a degree?’
Mao also criticised the EU over its TikTok restriction, noting that the bloc should ‘Respect the market economy and fair competition, stop overstretching and abusing the concept of national security and provide an open, fair, transparent and non-discriminatory business environment for all companies.’ Similar remarks were repeated in mid-March by Foreign Ministry spokesperson Wang Wenbin.
As reports of the USA demanding divestiture were confirmed by a TikTok representative, Wang also noted that ‘The USA has yet to prove with evidence that TikTok threatens its national security’ and that ‘it should stop spreading disinformation about data security.’
China’s Ministry of Commerce drew a line in the sand: the Chinese government would oppose the sale or divestiture of TikTok per China’s 2020 export rules. These remarks were made the same day Chew testified before Congress, casting further doubt on TikTok’s independence from the Chinese government.
Expect more reassurances from TikTok in the hope that the app avoids a general ban. The reality is that these might not be enough. In the USA, TikTok’s fate will likely ultimately be decided by the courts. There’s a very good chance that other countries mentioned in this article will follow suit.
The GPT-4 model: Pushing boundaries, raising concerns
The world of AI witnessed a flurry of exciting developments in March. While the arrival of GPT-4 promises to take natural language processing and image recognition to new heights, the concerns raised by the ‘Pause Giant AI Experiments’ open letter about the ethical implications of large-scale AI experiments cannot be ignored.
OpenAI has announced the development of GPT-4, a large multimodal model that can process both text and images as inputs. This announcement marks a significant milestone in the evolution of GPT models, as GPT-3 and GPT-3.5 were limited to processing text only. The ability of GPT-4 to process multiple modalities will expand the capabilities of natural language processing and image recognition, opening up new possibilities for AI applications. This development is sure to generate a lot of interest and anticipation as the AI community awaits further details about GPT-4’s capabilities and its potential impact on the field.
GPT-4 can process 32,000 tokens of text, compared to GPT-3’s limit of 4,000 tokens, offering expanded possibilities for long-form content creation, document analysis, and extended conversations. (Tokenisation is a way of separating a piece of text into smaller units called tokens; tokens can be words, characters, or subwords.) The model has achieved impressive results on a range of academic and professional certification tests, such as the LSAT, GRE, SATs, AP exams, and a simulated law school bar exam.
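To make the idea of tokenisation concrete, here is a toy tokeniser in Python. It is only an illustrative sketch: production models such as GPT-4 use learned byte-pair encodings with vocabularies derived from training data, whereas this example simply splits text into words and chops long words into fixed-size subword chunks. The `toy_tokenise` function and the `##` continuation marker are our own illustrative conventions, not OpenAI’s actual scheme.

```python
import re

def toy_tokenise(text: str, max_len: int = 4) -> list[str]:
    """Split text into words and punctuation, then chop long words
    into fixed-size subword chunks -- a crude stand-in for the
    learned subword merges used by real language models."""
    tokens = []
    for word in re.findall(r"\w+|[^\w\s]", text):
        if len(word) <= max_len:
            tokens.append(word)
        else:
            # Mark continuation pieces, loosely imitating how some
            # subword vocabularies flag word-internal fragments.
            pieces = [word[i:i + max_len] for i in range(0, len(word), max_len)]
            tokens.append(pieces[0])
            tokens.extend("##" + p for p in pieces[1:])
    return tokens

tokens = toy_tokenise("Tokenisation splits text into subwords.")
print(tokens)
print(f"{len(tokens)} tokens")
```

Run on a short sentence, the function shows how a few words can expand into more tokens than words, which is why context windows are bounded by token counts rather than word counts.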
This buzz was apparently the last straw for many. Not long after, a group of AI researchers and tech figures, including Elon Musk and Steve Wozniak, signed the ‘Pause Giant AI Experiments’ open letter urging AI labs to pump the brakes. The letter calls for a six-month pause in the training of AI systems more powerful than GPT-4. It expresses concern about the potential for AI to become a ‘threat to the existence of human civilisation’. It points out that AI could be used to create autonomous weapons and ‘out-think and out-manoeuvre human control.’ The letter goes on to suggest that AI could eventually become so powerful that it could create a superintelligence that would outsmart human beings.
The open letter has reignited debate among the scientific and tech community about the importance of responsible development of AI, including addressing concerns about bias, transparency, job displacement, privacy, and the potential for AI to be weaponised. Government officials and tech companies have a significant role to play in regulating AI, such as setting ethical guidelines, investing in safety research, and providing education for those working in the field.
This article has been brought to you by Diplo’s AI and Data Lab. The lab keeps an eye on developments in the AI diary, runs experiments like Can AI beat human intuition, and creates applications such as the one used for this reporting.
At Diplo, we’re also discussing AI’s impact on our future through a series of webinars. Join us on 2 May for the latest one in the series as we discuss AI ethics and governance from a non-Western perspective.
What’s new with cybersecurity negotiations?
The UN Open-ended Working Group (OEWG) on cybersecurity held its fourth substantive session. We share the highlights below.
Existing and potential threats. Supply chain risks, the use of AI-powered instruments, ransomware, and the spill-over effects of Russian cyberattacks on Ukraine, which have affected infrastructure in Europe, were mentioned among other threats during the session. Kenya proposed establishing a UN repository of common threats. The EU proposed formulating a common position on ransomware, and the Czech Republic proposed a more detailed discussion on responsible state behaviour in developing new technologies.
Rules, norms, and principles. Russia and Syria argued that existing non-binding rules don’t effectively regulate the use of ICTs to prevent inter-state conflicts and proposed drafting a legally binding treaty. Other countries (e.g. Sri Lanka and Canada) criticised this proposal. Egypt argued that the development of new norms doesn’t conflict with the existing normative framework.
International law (IL). Most states reaffirmed IL’s applicability to cyberspace, but some (Cuba, India, Jordan, Nicaragua, Pakistan, Russia, Syria) argued that automatic applicability is premature and supported a proposal for a legally binding treaty. Russia submitted an updated concept of the ‘Convention of the UN on Ensuring International Information Security’ with Belarus and Nicaragua as co-sponsors. Most states don’t support drafting a new legally binding instrument.
Speaking of international humanitarian law (IHL), the EU and Switzerland affirmed its applicability; however, Russia and Belarus refused the automatic application of IHL in cyberspace, citing a lack of consensus on what constitutes an armed attack.
The UN Charter principles and the enforcement of state obligations were also discussed, for the first time to our knowledge. Most states also supported the Canadian-Swiss proposal to include these topics, the peaceful settlement of disputes, IHL, and state responsibility in the OEWG’s programme of work in 2023.
Confidence building measures (CBMs). Some delegations called for more active participation of regional organisations to share their experiences in the OEWG. There was also broad agreement to establish a points of contact (PoC) directory, though states continued discussing who should be nominated as a PoC (agencies or particular persons), what functions they should have, and related questions.
Capacity building. Some countries highlighted that the Programme of Action (PoA) to advance responsible state behaviour will be the primary instrument to structure capacity-building initiatives. Iran stressed that ITU could be a permanent forum for coordination in this regard. Cuba supported this idea.
Regular institutional dialogue. Supporters of the PoA emphasised the complementarity of the OEWG and the PoA. Some states mentioned the possibility of discussing additional cyber norms under the PoA, if needed, and called for a dedicated OEWG session on the PoA. China noted that states who supported the PoA resolution are undermining the status of the OEWG. Russia, Belarus, and Nicaragua proposed a permanent body with review mechanisms as an alternative to the PoA. Some states, though, warned that parallel tracks of discussions would require more resources.
Next steps. The chair plans to host an informal virtual meeting in late April for regional PoC directories to share their experiences. The second revised non-paper on the PoC directory is expected afterwards. An inter-sessional meeting on IL and regular institutional dialogue will be held around the end of May. The zero draft of the Annual Progress Report (APR) is also expected in early June. States will discuss the APR at the fifth substantive session on 24–28 July 2023.
The 2023 edition of the World Summit on the Information Society (WSIS) Forum featured over 250 sessions exploring a wide range of issues related to ICT for development and the implementation of the WSIS Action Lines agreed upon back in 2003. The forum also included a high-level track that highlighted, among other issues, the urgency of advancing internet access, availability and affordability as driving forces of digitalisation, and the importance of fostering trust in digital technologies. The event was hosted by ITU and co-organised with UNESCO, UNCTAD, and the UN Development Programme (UNDP). More forum outcomes will be published by ITU on the dedicated page.
Diplo and the Geneva Internet Platform (GIP), together with the Permanent Missions of Djibouti, Kenya, and Namibia, hosted a session on Strengthening Africa’s voices in global digital processes on the last day of the forum. This session stressed the need for strengthened cooperation – within and beyond Africa – to implement the continent’s digital transformation strategies and ensure that African interests are adequately represented and reflected in international digital governance processes. Building and developing individual and institutional capacities, coordinating common positions on issues of mutual interest, leveraging the expertise of actors from various stakeholder groups, and ensuring effective and efficient communication between missions and capitals were some of the suggested steps towards ensuring that African voices are fully and meaningfully represented on the international stage. Read the session takeaways.
Director Dr Jovan Kurbalija moderates the Diplo WSIS session in Geneva. Source: Diplo
The 2023 CCW Group of Governmental Experts on emerging technologies in the area of Lethal Autonomous Weapons Systems (GGE on LAWS) held its first session in March. During the five-day meeting, the group focused on the following dimensions of emerging technologies in the area of LAWS: the characterisation of LAWS – definitions and scope; the application of IHL – possible prohibitions and regulations; human-machine interaction, meaningful human control, human judgement, and ethical considerations; responsibility and accountability; legal reviews; and risk mitigation and confidence-building measures.
The 26th session of the CSTD tackled (a) technology and innovation for cleaner and more productive and competitive production and (b) ensuring safe water and sanitation for all: a solution by science, technology and innovation.
At the opening ceremony, Rebeca Grynspan, Secretary-General of UNCTAD, delivered a statement emphasising the critical juncture at which humanity finds itself: a moment of both global challenges and technological possibilities. She highlighted the worrisome decline in overall human progress over the past two years, which jeopardises our goals for a sustainable future. Addressing these significant economic, social, and environmental issues requires coordinated global action.
The session also featured the presentation of the 2023 Technology and Innovation Report, which identifies crucial opportunities and practical solutions for developing countries to harness innovation for sustainable growth.
What to watch for: Global digital policy events in April
The fifth session of the Ad Hoc Committee on Cybercrime will touch upon the new negotiating consolidated document on the preamble, provisions on international cooperation, preventive measures, technical assistance, the mechanism of implementation, and the final provisions of the convention. The secretariat has also prepared a separate document on the mechanisms of implementation to facilitate the deliberations of member states on the implementation of mechanisms for the convention. Lastly, it is expected that states will further negotiate on the first negotiating consolidated document from the fourth session. Read more.
The Global Digital Compact (GDC) co-facilitators are organising a series of thematic deep dives to prepare for intergovernmental negotiations on the GDC. The 13 April discussion will cover internet governance. As these in-depth discussions unfold, the GIP will examine how their focus topics have been tackled in different key policy documents. Visit our dedicated page on the Digital Watch observatory to read more about how issues related to internet governance have been covered in such documents. Read more.
The annual UN World Data Forum advances data innovation, encourages cooperation, generates political and financial backing for data initiatives, and facilitates progress towards enhanced data for sustainable development. The forum focuses on the following thematic areas: Innovation and partnerships for better and more inclusive data; Maximising the use and value of data for better decision-making; Building trust and ethics in data; and Emerging trends and partnerships to develop the data ecosystem. Read more.
The RSA Conference 2023 will take place on 24–27 April in San Francisco, USA. The conference will be held under the theme ‘Stronger Together’ and will feature seminars, workshops, training, an exhibition, keynote addresses, and interactive activities. Read more.
The G7 Digital and Tech Ministers’ Meeting will address various digitalisation issues, including emerging concerns and changes in the global environment around digital affairs. The ministers will discuss a framework for operationalising the Data Free Flow with Trust (DFFT) in cooperation with the G7 and other countries while respecting national regulations, enhancing transparency, ensuring interoperability, and promoting public-private partnerships. The operationalisation of DFFT is expected to help SMEs and others to safely and securely use data from around the world, enabling them to develop cross-border businesses. Read more.
The Digital Watch observatory maintains a live calendar of upcoming and past events.
All eyes will be on China, as it prepares to receive France’s Emmanuel Macron, Spain’s Pedro Sánchez, and the EU’s Ursula von der Leyen. We’ll keep an eye out for anything that could impact the digital policy landscape.
Meanwhile, Italy has imposed a temporary limit on access to ChatGPT (our analysis for this week), as content policy shares the spotlight with cybersecurity updates – notably, the revelations from the leaked Vulcan Files.
Let’s get started.
Stephanie and the Digital Watch team
// HIGHLIGHT //
Italy’s rage against the machine
Italy has become the first western country to impose a (temporary) limitation on ChatGPT, the AI-based chatbot platform developed by OpenAI, which has caused a sensation around the world. The Italian data protection authority (the Garante) cited four grievances:
Users’ personal data breached: A data breach affecting ChatGPT users’ conversations and information on payments by subscribers to the service was reported on 20 March. OpenAI attributed this to a bug.
Unlawful data collection: ChatGPT uses massive amounts of personal data to train its algorithms without having a legal basis to collect and process it.
Inaccurate results: ChatGPT spews out inaccuracies and cannot be relied upon as a source of truth.
Inappropriate for children: ChatGPT lacks an age verification mechanism, which exposes children to receiving responses that are ‘absolutely inappropriate to their age and awareness’.
How is access being blocked? In compliance with the Italian data protection authority’s order, OpenAI geoblocked access to ChatGPT for anyone residing in Italy. It also issued refunds to Italian residents who had upgraded to ChatGPT Plus.
However, OpenAI’s API – the interface that allows other applications to interact with it – and Microsoft’s Bing – which also uses ChatGPT – are still accessible in Italy.
ChatGPT disabled for users in Italy
Dear ChatGPT customer,
We regret to inform you that we have disabled ChatGPT for users in Italy at the request of the Italian Garante.
We are issuing refunds to all users in Italy who purchased a ChatGPT Plus subscription in March. We are also temporarily pausing subscription renewals in Italy so that users won’t be charged while ChatGPT is suspended.
We are committed to protecting people’s privacy and we believe we offer ChatGPT in compliance with GDPR and other privacy laws. We will engage with the Garante with the goal of restoring your access as soon as possible.
Many of you have told us that you find ChatGPT helpful for everyday tasks, and we look forward to making it available again soon.
If you have any questions or concerns regarding ChatGPT or the refund process, we have prepared a list of Frequently Asked Questions to address them.
– The OpenAI Support Team
What’s the response from users? The reactions have been mixed. Some users think this is shortsighted, since there are other ways in which ChatGPT can still be accessed. (One of them is using a VPN, a secure connection that lets users connect to the internet while masking their actual location. If an Italian user chooses a different location through their VPN, OpenAI won’t realise that the user is in fact connecting from Italy. This won’t work for users wanting to upgrade: OpenAI has blocked upgrades involving credit cards issued to Italian users or accounts linked to an Italian phone number.)
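For readers curious about the mechanics, here is a minimal, purely illustrative Python sketch of why IP-based geoblocking of the kind described above can be sidestepped by a VPN. The blocklist, IP addresses, and lookup table are invented for the example; real services resolve a visitor's country from commercial GeoIP databases, not a hard-coded dictionary.

```python
# Illustrative sketch only (not OpenAI's actual implementation):
# IP-based geoblocking decides access based on the country a request
# appears to originate from.
GEOBLOCKED_COUNTRIES = {"IT"}  # hypothetical blocklist

def lookup_country(ip: str) -> str:
    """Hypothetical IP-to-country lookup (real services use GeoIP databases)."""
    # A VPN replaces the user's real IP with the VPN server's IP,
    # so this lookup returns the VPN server's country instead.
    demo_geoip = {"93.36.0.1": "IT", "185.220.0.1": "DE"}
    return demo_geoip.get(ip, "US")

def is_blocked(ip: str) -> bool:
    return lookup_country(ip) in GEOBLOCKED_COUNTRIES

# A user in Italy connecting directly appears blocked...
assert is_blocked("93.36.0.1")
# ...but the same user routing through a German VPN server does not.
assert not is_blocked("185.220.0.1")
```

The service never sees the user's real address, only the VPN server's, which is why blocking upgrades via Italian credit cards and phone numbers (signals a VPN can't mask) was OpenAI's complementary measure.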
Others think this is a precursor to what other countries will do. They think that if a company is processing data in breach of the rules (in Europe, that’s the GDPR), then it might be required to revise its practices before it continues offering its services.
How temporary is ‘temporary’? What happens next depends on two things: the outcomes of the investigation into the recent breach, and whether (and how) OpenAI will reform any of its practices. Let’s revisit the list of grievances:
Personal data breach: Nothing can reverse what happened, but OpenAI can implement tighter security controls to prevent future incidents. Once authorities are convinced that stricter precautions are in place, there’s no reason not to lift the ban on this ground alone.
Unlawful data collection: This is primarily a legal issue. So let’s say an Italian court confirms that the way the data was collected was illegal (it would take a great deal of effort to establish this, as OpenAI’s model is proprietary, i.e. not open to public inspection). OpenAI is not an Italian company, so the court will have limited jurisdiction over it. The most it can do is impose a hefty fine and turn the ban into a semi-permanent one. Will it have achieved its aim? No, as Italian users will still be able to interact with the application. Will it create momentum for other governments to consider guardrails or other forms of regulation? Definitely.
Inaccurate data: This issue is the most complex. If by inaccurate we mean incorrect information, the software is improving significantly with every new iteration. Compare GPT-4 with its predecessor, v.3.5 (or even the early GPT-4 model with the same version at launch date). But if we mean biased or partial data, the evolution of AI-based software shows us how inherent this issue is to its foundations.
Inappropriate for children: New standards in age verification are a work in progress, especially in the EU. These won’t come any time soon, but when they do, it will be an important step in really limiting what underaged users have access to. It will make it much harder for kids to access platforms which aren’t meant for them. As for the appropriateness of content, authorities are working on strategies to reel in bigger fish (TikTok, Facebook, Instagram) in the bigger internet pond.
// AI //
UNESCO urges governments to implement ethical AI framework
Director-General Audrey Azoulay said the ethical issues raised by AI technology – especially discrimination, gender inequality, fake news, and human rights breaches – are concerning.
‘Industry self-regulation is clearly not sufficient to avoid these ethical harms, which is why the recommendation provides the tools to ensure that AI developments abide by the rule of law, avoiding harm, and ensuring that when harm is done, accountability and redressal mechanisms are at hand for those affected.’
Stop right there! Three blows for ChatGPT
The first is that Elon Musk and a group of AI experts and industry leaders are calling for a six-month moratorium on the development of systems more powerful than OpenAI’s newly released GPT-4 due to potential risks to society. Over 50,000 people have signed the open letter.
The second is that the Center for Artificial Intelligence and Digital Policy has filed a complaint with the US Federal Trade Commission to stop OpenAI from issuing new commercial releases of GPT-4, due to concerns about the software’s ‘biased, [and] deceptive’ nature, which is ‘a risk to privacy and public safety’.
The third is a new report from Europol, which sounds the alarm about the potential misuse of large language models (the likes of ChatGPT, Bard, etc.). For instance, criminals can misuse the software to generate convincingly authentic content for their phishing attempts, giving them an edge. The agency recommends that law enforcement agencies get ready.
(It’s actually four blows if we count Italy’s temporary ban.)
Indian judge uses ChatGPT to decide bail in murder case
A murder case in India made headlines last week when the Punjab and Haryana High Court used ChatGPT to respond to an application for bail in an ongoing case of attempted murder. Justice Anoop Chitkara asked the AI tool: ‘What is the jurisprudence on bail when the assailants assaulted with cruelty?’ The chatbot considered the presumption of innocence and stated that if the accused has been charged with a violent crime that involves cruelty, they may be considered a risk to the community. The judge clarified that the chatbot was not used to determine the outcome but only ‘to present a broader picture on bail jurisprudence, where cruelty is a factor.’
A trove of leaked documents, dubbed the Vulkan files, has revealed Russia’s cyberwarfare tactics, according to journalists from 11 media outlets, who say that five intelligence agencies have confirmed the authenticity of the files.
The documents show that consultancy firm Vulkan worked for Russian military and intelligence agencies to support Russia’s hacking operations and the spread of disinformation. The documents also link a cyberattack tool developed by Vulkan with the hacking group Sandworm, to which the USA has attributed attacks such as NotPetya.
The documents include project plans, contracts, emails, and other internal documents dated between 2016 and 2021.
Pro-Russian hacktivists launch DDoS attacks on Australian organisations
Australian organisations, including universities, have been targeted by distributed denial-of-service (DDoS) attacks in recent months. Web infrastructure and security company Cloudflare reported that the attacks were carried out by Killnet and AnonymousSudan, hacktivist groups with pro-Russia sympathies.
Killnet has a record of targeting governments and organisations that openly support the Ukrainian government. Since the start of the Ukraine war, the group has been associated with attacks on the websites of the European Parliament, airports in the USA, and the healthcare sectors in Europe and the USA, among others.
Cyberattacks on Ukraine on the rise, CERT-UA says
Ukraine’s computer emergency response team (CERT-UA) has recorded a spike in cyberattacks on Ukraine since the start of the year. The 300+ cyber incidents processed by CERT-UA are almost twice as many as in the corresponding period last year, when Russia was preparing its full-scale invasion.
In a Telegram message, Ukraine’s State Special Communications Service said that Russia’s aim is to obtain as much information as possible that could give it an advantage in the conventional war against Ukraine.
// ANTITRUST //
Google to Microsoft: Your cloud practices are anti-competitive
It’s been a while since Big Tech engaged in a public squabble, so when Google accused Microsoft of anti-competitive cloud practices last week, we thought the growing rivalry spurred by ChatGPT had reached new levels.
In comments to Reuters, Google Cloud’s vice president Amit Zavery said Google Cloud had filed a complaint with regulatory bodies and asked the EU’s antitrust watchdog ‘to take a closer look’ at Microsoft. In response, Microsoft reminded Google that the latter leads the cloud services sector. We’re wondering: Could this be a hint that it’s actually Google that merits greater scrutiny?
// CONTENT POLICY //
New US bill aims to strengthen news media negotiations with Big Tech
US lawmakers have reintroduced a bill to help news media in their negotiations with Big Tech, after a failed attempt during the last congressional session. The bipartisan bill – the Journalism Competition and Preservation Act – would create a four-year safe harbour period for news organisations to negotiate terms such as revenue sharing with tech companies like Facebook and Google.
Lawmakers are riding the momentum of a similar development in Canada, where the Online News Act (Bill C-18) is currently being debated in parliament. Reacting to the Canadian draft rules, Google and Meta threatened to restrict access to news content (leaving Reporters Without Borders reeling; Google has already gone through with its threat). We’re wondering whether Google – or any other Big Tech company – will do the same in the USA.
Eastern European governments call on tech companies to fight disinformation
The prime ministers of Moldova, the Czech Republic, Slovakia, Estonia, Latvia, Lithuania, Poland and Ukraine have signed an open letter which calls on tech firms to help stop the spread of false information.
Some of the proposed actions include: refraining from accepting payments from those previously sanctioned, improving the accuracy and transparency of algorithms (rather than focusing on promoting content), and providing researchers free or affordable access to platforms’ data to understand the tactics of manipulative campaigns.
Internet Archive’s digital book lending violates copyright laws, US judge rules
A US judge has ruled that the Internet Archive’s digital book lending program violates copyright law, potentially setting a legal precedent for future online libraries. The initiators of the case, coordinated by the Association of American Publishers, argued that the program infringed on their authors’ exclusive rights to reproduce and distribute their works.
Although the Internet Archive based its defence on the principle of fair use, Judge John G. Koeltl disagreed, since the platform’s practice affects publishers’ income from licensing fees for print and e-book versions of the same texts. The judge said that the Open Library’s practice of providing full access to those books without the copyright holders’ permission violated copyright rules. The Internet Archive is preparing an appeal, but until then, it cannot lend out newly scanned library material.
4 April: The European Broadcasting Union’s Sustainability Summit 2023 will focus on green streaming and other environment-friendly practices in digital broadcasting.
4–5 April: The International Association of Privacy Professionals (IAPP) Global Privacy Summit will gather privacy practitioners to discuss current regulatory challenges. (Sadly, ticket prices are prohibitively high.)
Diplo and the Geneva Internet Platform are organising two events this week:
AI governance: Terminator movie director says we might already be too late
AI has become an integral part of modern life, but with its increasing prevalence, James Cameron, the director of the iconic Terminator movies, warns that humans face a titanic battle (pun intended) for control over the technology. Cameron urges governments to create ethical standards for AI before it’s too late. Read more here and here. (Note: Articles on this podcast were making the rounds last week, but the podcast itself is from December.)
What sets ChatGPT apart from previous language models?
OpenAI’s decision to make ChatGPT freely available to the public was a bold one, given the significant costs of maintaining the system while handling millions of queries from curious users – costs that OpenAI’s CEO, Sam Altman, himself described as ‘eye-watering’. Yet the decision proved not only bold but ingenious. By making ChatGPT accessible to everyone, OpenAI sparked the curiosity and interest that captured the world’s attention.
ChatGPT was not, in fact, the first model to be made freely available to the public. Meta’s Galactica attempted the same feat, failed spectacularly, and was withdrawn after only a few days. So how did ChatGPT succeed where Galactica did not? The answer lies mainly in how it was announced, and to whom. Galactica was pitched to the academic community as an AI capable of effortlessly writing scientific papers – a field with a demanding, critical audience that is easy to disappoint. ChatGPT – and its limitations – was instead presented publicly as an open tool, accessible to all, for everyone to experiment with and enjoy. This approach, combined with ChatGPT’s performance, set it apart from earlier models and changed the game in the AI world.
The decision to make ChatGPT available to the general public came with trade-offs. Unlike open-source models, ChatGPT is available only in a limited way: interested researchers cannot look ‘under the hood’ of the model or adapt it to their specific needs. This differs from the approach OpenAI took with its exceptional Whisper model, which was remarkably transparent. From a business perspective, OpenAI’s decision to offer ChatGPT to the public, despite the high costs involved, proved to be the right one, as it attracted both attention and funding.
How did other companies react?
ChatGPT has triggered unprecedented attention and competition in the AI world. The enormous public interest it has generated prompted the sector’s main players to react quickly. Competition of this kind is undoubtedly exciting, but it can also bring losses. Either way, the strategies employed are curious in their own right.
Microsoft did not hesitate to commit USD 10 billion to integrating ChatGPT into all of its flagship products, including Skype, Teams, and Word. It did so quickly and openly in response to ChatGPT’s enormous popularity and public interest, setting an example for others to follow.
Google hastily announced that it would integrate its Bard model into Google Search, but stumbled at the first hurdle when Bard made a factual error in its launch demo. Google’s strategy translated into significant initial losses: a 10% drop in its share price wiped USD 170 billion off Google’s market value.
Despite the setback caused by releasing and open-sourcing the Galactica model, Meta is pressing ahead with its plans to open-source its state-of-the-art AI models. With the recent open-sourcing of its largest language model, LLaMA, Meta appears to be trying to reduce users’ dependence on OpenAI’s GPT API by providing access to new models. LLaMA is not the first large-scale model to be opened up – BLOOM and OPT have been available for some time – but their adoption has been limited by steep hardware requirements. LLaMA is roughly ten times smaller than these models and can run on a single GPU, which could allow many more researchers to access and study large language models. At the same time, LLaMA achieves results comparable to GPT-3’s.
China’s tech giants wasted no time either: Baidu plans to integrate its chatbot Ernie (short for Enhanced Representation through kNowledge IntEgration) into its search engine in March and, eventually, across all of its operations.
ChatGPT is not officially available in China, but users have been able to access it through workarounds. However, regulators have instructed China’s main tech companies not to integrate ChatGPT into their services, on the grounds that the software ‘could help the US government spread disinformation and manipulate global narratives to serve its own geopolitical interests’. Chinese tech companies will also have to report to regulators before rolling out their own ChatGPT-style services.
The impact of generative AI on our future
February’s developments in generative AI have once again raised the question: will AI take over our jobs? Fears that a new technology will make human labour redundant resurface every time one emerges.
Our colleagues at Diplo’s AI Lab, who have themselves used language models to build high-performing AI tools (we’re biased here), believe that AI will not make most jobs redundant. Some jobs are inherently human-to-human in nature, and AI will struggle to replace them.
AI will, however, make certain jobs redundant, as every new technology has done. The good news is that AI tools will save workers time by taking routine tasks off their to-do lists. And while some jobs will disappear, new ones will emerge, as Microsoft CEO Satya Nadella and Microsoft founder Bill Gates have already noted. The question is: how do we ensure that current and future generations are prepared for the changes underway, and those still to come, in the labour market?
Want to share your thoughts on generative AI? Write to us at digitalwatch@diplomacy.edu!
Barometer
The digital policy developments that made headlines
The digital policy landscape changes daily, so here are the main developments from February, decoded into bite-sized, authoritative updates. You’ll find more details on each update on the Digital Watch observatory.
EU ministers are examining an amended version of the draft Cyber Resilience Act (CRA), a regulation on cybersecurity requirements for digital products.
The European Commission launched a public consultation on the future of connectivity, seen as a prelude to plans that could require Big Tech companies to pay their share of digital infrastructure costs. It also published a proposal for a Gigabit Infrastructure Act.
PayPal has suspended the launch of its stablecoin in the face of increased regulatory scrutiny.
The Australian Competition and Consumer Commission (ACCC) will examine the interconnected products and services offered by digital platforms to determine whether they harm competition and consumers.
Signatories of the 2022 Code of Practice on Disinformation, which include all major online platforms, have set up a transparency centre documenting their efforts against disinformation, and have published reports on implementing the code’s commitments.
China announced the creation of a national data bureau, which will establish a data system for the country and coordinate the use of data resources.
Germany’s Federal Constitutional Court ruled that police use of automated data analysis to prevent crime is unconstitutional.
Representatives of 59 countries issued a joint call to action on the responsible development, deployment, and use of AI in the military domain.
The Council of Europe’s Committee on AI continued its discussions on a convention on AI and human rights, and published a draft text.
The Netherlands will restrict exports of its most advanced semiconductor technologies, including deep ultraviolet (DUV) lithography systems.
How are algorithms putting Section 230 to the test?
It has long been argued that the US law shielding social media platforms from liability for content that users post on them – Section 230 of the Communications Decency Act – should be narrowed or scrapped altogether.
The much-debated Section 230 is a shrewd two-sentence rule stating that: (a) platforms are not publishers (and so, unlike publishers, are not liable for content posted by users); and (b) where platforms do moderate third-party content themselves, they cannot be penalised over other harmful content they fail to remove.
To its credit, this rule allowed the internet to flourish. Platforms could host vast amounts of user content without fear of liability. It also enabled content to be published instantly, without platforms having to review it before making it public. Free expression thrived.
But the age of algorithms is now putting Section 230 to the test. A few weeks ago, the US Supreme Court began hearing arguments in two cases, both initially decided in the Ninth Circuit, that could have repercussions for Section 230.
Gonzalez vs Google
In a lawsuit against Google, the family of an American woman killed in a Paris attack by Islamist militants argued that Google’s algorithm recommended content from the militant group to YouTube users. The family is appealing the initial ruling, arguing that Section 230 does not grant platforms immunity for content recommended by algorithms, since algorithmic suggestions are not third-party content but the company’s own. Google’s argument, by contrast, is that Section 230 is not only meant to protect companies from liability for third-party content, but goes as far as stating that platforms should not be treated as publishers.
Many companies and organisations have filed briefs in support of Google. Twitter, for instance, argues that algorithms make it possible to prioritise some content over other content (‘newer content over older content’) but do not convey any content of their own. Microsoft contends that algorithms are so essential to daily life that Section 230 should not be narrowed or reinterpreted, as doing so would ‘wreak havoc on the internet as we know it’.
Twitter vs Taamneh
The second case is an appeal brought by Twitter after the Ninth Circuit set Section 230 aside and allowed the case to proceed. The court went on to rule that Twitter and other platforms had failed to take adequate measures to keep terrorist content off their platforms. The family of a Jordanian man accused Twitter of failing to police the platform after a 2017 attack that killed him and 38 other victims.
This case centres on anti-terrorism law, but since the appeal could overturn the lower court’s judgment (which included a ruling on Section 230), it could also have repercussions for Section 230.
What’s at stake?
Although the two cases are related, it is the Gonzalez case that is expected to address the question of algorithms: can platforms be held liable for content promoted by their algorithms?
The two lawsuits, to be decided by the end of June, could play out in several ways. The most radical outcome would be for the court to strip platforms of their Section 230 protections, but given what’s at stake, that is unlikely.
More realistically, the Supreme Court will uphold the current interpretation of Section 230 or, at most, introduce a subtle restriction. It would then fall to legislators to address the discontent that policymakers have voiced in recent years.
Policy updates from International Geneva
Numerous policy discussions take place in Geneva every month. Here’s what happened in February.
The third session of the Open-Ended Working Group (OEWG) on reducing space threats was held at the UN Office in Geneva. The OEWG is tasked with, among other things, making recommendations on possible norms, rules, and principles of responsible behaviour relating to threats posed by states to space systems. It was established by UN General Assembly resolution 76/231 and has met twice before: first from 9 to 13 March 2022, and again in September 2022. The OEWG is expected to meet again from 7 to 11 August 2023 and to submit a final report to the 78th session of the UN General Assembly in September 2023.
The International Telecommunication Union (ITU) organised a series of webinars on combating counterfeit and stolen ICT devices. In the first episode, speakers from different stakeholder groups presented the problems and challenges linked to the circulation of counterfeit ICT devices. Particular attention was paid to possible solutions through standardisation.
Explore digital Geneva!
Need a guide to internet governance in Geneva? Our Geneva Digital Atlas, with profiles of the 46 most important digital policy actors, will accompany you on your journey. Keep an eye on our Instagram, Twitter, YouTube, Facebook, and LinkedIn channels for the weekly Geneva Digital Tours videos, in which high-level figures guide you through their institutions. In March, we’ll feature organisations involved in standardisation and infrastructure. Our first guest is Doreen Bogdan-Martin, Secretary-General of the ITU!
Les principaux événements du mois de mars en matière de politique numérique
11–16 Mars 2023,ICANN76 (Paris, France) L’ICANN76 offrira à sa communauté la possibilité d’aborder diverses questions relatives à son activité et à la gestion du système de noms de domaine (DNS). Le programme de ce forum communautaire de l’ICANN76 comprend le développement des capacités / la formation, l’interaction entre les communautés, l’élaboration de politiques, la sensibilisation / l’engagement, la sécurité / les questions techniques et les rapports / les mises à jour.
Le thème du Forum 2023 du Sommet mondial sur la société de l’information (SMSI) est « Lignes d’action du SMSI pour reconstruire en mieux et accélérer la réalisation des ODD ». Le Forum du SMSI est une plateforme mondiale multipartite destinée à faire progresser le développement durable par la mise en œuvre des grandes orientations du SMSI. Il facilite le partage d’informations et de connaissances, la création de connaissances, l’identification des tendances émergentes et la promotion de partenariats avec les organisations des Nations unies et les cofacilitateurs des lignes d’action du SMSI.
Diplo et la Geneva Internet Platform (GIP), avec le soutien des missions permanentes de Djibouti, du Kenya et de la Namibie, co-organisent une session pendant le Forum du SMSI pour débattre de la diplomatie numérique de l’Afrique. La session explore la manière dont l’Afrique peut renforcer sa participation à la gouvernance numérique mondiale, compte tenu de la croissance de ses économies numériques, de ses écosystèmes de start-up et de sa transformation numérique dynamique. Elle vise à identifier les priorités en matière de politique numérique, à améliorer la participation de l’Afrique aux processus de gouvernance numérique mondiale, et à offrir des idées pratiques pour renforcer la diplomatie numérique de l’Afrique dans les processus internationaux liés à la cybersécurité, à l’IA, à la gouvernance des données, à l’accès et à l’infrastructure. Enfin, la session proposera des mesures pratiques pour développer la diplomatie numérique africaine.
The Commission on Science and Technology for Development (CSTD) will hold its 26th session under the following main themes: technology and innovation for cleaner, more productive, and more competitive production; and ensuring safe water and sanitation for all through science and innovation. The commission will focus on how science, technology, and innovation can serve as catalysts for the 2030 Agenda, particularly in crucial areas such as economic, environmental, and social development. The CSTD will also review progress in implementing and following up on the outcomes of the WSIS Forum at the regional and international levels, and will hear presentations on ongoing reviews of science, technology, and innovation policies.
The seventh session of the World Intellectual Property Organization (WIPO) Conversation will address intellectual property and frontier technologies, focusing on the intersection of intellectual property and the metaverse, exploring the frontier technologies that make it possible, and examining the challenges they pose to the existing intellectual property system. The session’s main objective is to provide a strategy for addressing these challenges and to ensure that innovation and development continue to benefit everyone.
Last week was marked by geopolitics: from the testimony of TikTok’s CEO in front of Congress and a wave of TikTok bans across Europe, to new moves in the semiconductor industry. Developments in the chatbot race, an Ibero-American charter on digital rights and responsibilities charter, and more layoff news round off this issue.
Andrijana and the Digital Watch team
// HIGHLIGHT //
TikTok and the terrible, horrible, no good, very bad week
The news coverage of TikTok CEO Shou Chew’s testimony in front of the US Congress largely describes Chew as having been ‘grilled’. We wouldn’t call it inaccurate: Chew was dancing between two fires as lawmakers from both sides of the aisle levied question upon question regarding TikTok’s data privacy and content policies.
And here’s how we see it: Lawmakers had their minds made up long before they set foot in that hearing room, and nothing Chew could have said would have changed that. Let’s dive into the whats and whys as succinctly as possible. There’s also a TL;DR version in a text box below the article.
Why was Chew testifying? As the committee’s chair put it: ‘because the American people need the truth about the threat TikTok poses to our national and personal security’. In case you haven’t followed the argument critics are espousing, it goes like this: TikTok is a subsidiary of ByteDance, which is a private Chinese company, possibly subject to the Chinese 2017 National Intelligence law, which requires any Chinese entity to support, assist, and cooperate with state intelligence work – including, possibly, the transfer of US citizens’ TikTok data to China.
TikTok’s solution? Project Texas. To allay these fears, TikTok has committed to moving US data to the USA under what it calls Project Texas. All US user data would be automatically stored on the servers of Oracle, a privately owned US company headquartered in Texas – hence the project’s name. In Chew’s words to Congress: ‘Our commitment is to move their data into the United States, to be stored on American soil by an American company, overseen by American personnel. So the risk would be similar to any government going to an American company, asking for data.’
Chew also asserted: ‘I have seen no evidence that the Chinese government has access to that data. They have never asked us, we have not provided.’ A congresswoman found that ‘preposterous’.
Here’s the stumbling block: For the past ten days, ByteDance has been under federal investigation for improperly accessing the data of Forbes journalists and some of their contacts last year.
And what about harmful content? Chew was less forthcoming on the practical measures TikTok takes to handle harmful content, though it has to be said that no platform has found a magical solution yet. He did pledge that TikTok ‘will keep safety, particularly for teenagers, as a top priority’ and that ‘TikTok will remain a place for free expression and will not be manipulated by any government.’
Beholden to CCP or not? Ultimately, Chew’s testimony hasn’t convinced Congress that ByteDance and TikTok are not beholden to the Chinese Communist Party (CCP) and that Project Texas would make the data of US citizens unreachable to China.
A Wall Street Journal article published on the day of the hearing was very unhelpful to Chew. The WSJ reported that the Chinese government would oppose a sale or divestiture of TikTok under China’s export rules. The remark was made at a regular (yet, for TikTok, unfortunately timed) press briefing by China’s Ministry of Commerce. Many a lawmaker took it to mean that China does, in fact, own TikTok.
What next? To answer this question about the future, we must look into the past. During the Trump administration, two possibilities were put forward: divestiture and banning.
Divestiture means ByteDance would sell the US operations of TikTok to a US-owned entity. However, the recent remarks by China’s commerce ministry made it clear that the required export licence won’t be granted.
A ban won’t be possible without a new law. The White House currently favours the RESTRICT Act, which, if passed, would allow the Department of Commerce to sanction and ultimately ban software from adversarial countries. The list of adversarial countries includes China; the act would therefore make banning TikTok possible. Yet banning TikTok may violate First Amendment rights: Critics argue that the First Amendment protects the right of American citizens to use the social media platforms of their choice, and that a ban would set a dangerous precedent of curtailing expression online. The over 150 million Americans who use the app might have something to say about that, too.
TikTok’s week in Europe hasn’t been great either: Civil servants in Norway, the Netherlands, and France, as well as parliamentarians in the UK, won’t be allowed to use the app on their work devices anymore.
There are differences in these measures. In Norway, there is a recommendation against installing and using TikTok on government service devices; the same recommendation applies to Telegram. In the Netherlands, civil servants will only be able to use pre-approved apps on their work devices, meaning ‘espionage-sensitive apps’ will no longer be allowed – local media have confirmed that TikTok would fall into that category, but so would other apps. France has banned, with immediate effect, recreational apps on civil servants’ work phones due to cybersecurity concerns; again, this includes TikTok but also other apps. The UK previously banned TikTok on the work phones of civil servants and is now expanding the ban to all parliamentary devices and the wider parliamentary network.
These bans are much narrower than the ban being discussed across the pond.
A ban contemplated by Japan is more general: Lawmakers will create a proposal banning social media platforms if they are used for disinformation campaigns.
TL;DR: The testimony of TikTok CEO Shou Chew hasn’t convinced Congress that TikTok and its owner, ByteDance, are not subject to China’s 2017 National Intelligence Law, which requires organisations to assist with state intelligence work – potentially including the transfer of US citizens’ TikTok data to China. A new law may be necessary to ban the app in the USA, but it may violate First Amendment rights. Elsewhere, governments in Europe, including Norway, the Netherlands, and France, have issued guidelines against installing and using TikTok on government devices, and Japan is contemplating a general ban.
// CHIPS //
USA and Canada pledge more financial support to domestic semiconductor companies
The USA and Canada have pledged more money for domestic semiconductor companies: The USA pledged US$50 million in Defense Production Act funding for advancing packaging for semiconductors and printed circuit boards, and Canada up to CAD250 million for semiconductor projects from the Strategic Innovation Fund.
Czechia and Brazil move closer to Asian chipmakers
As North American countries make another move to lessen their reliance on Asian chipmakers, two other countries took steps in the opposite direction. A delegation of Czech politicians and company representatives hopes for Taiwanese investments in Czech chip technologies. Brazil will reportedly seek mainland Chinese technology and investment in developing its semiconductor industry.
Get-out-of-detention card?
The most eyebrow-raising news (in chips) from last week was that China has released top chip investor Chen Datong after an eight-month detention, reportedly so that he can help China navigate US export rules on semiconductors. Under these rules, advanced computing and semiconductor manufacturing items produced in the USA cannot be exported to China. Chen was detained during a corruption-related probe last August.
The chatbot race showed no signs of stopping last week. OpenAI is once again in the lead: The company connected ChatGPT to the internet via plugins. Unlike the previous versions of the OpenAI chatbot, this one can browse the web to answer your question, but it can also help you book your flights, order food, and perform other online tasks. ‘There’s a risk that plugins could increase safety challenges by taking harmful or unintended actions, increasing the capabilities of bad actors who would defraud, mislead, or abuse others,’ OpenAI acknowledges. And hey, if these developments scare you, you’re in good company; the OpenAI CEO is scared too.
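Mechanically, a plugin is a declared external service the model can choose to invoke: the model emits a structured ‘call this tool’ request, the host runs it, and the result is fed back into the conversation. A minimal sketch of that pattern – all names here, including the `lookup_flights` helper, are hypothetical illustrations, not OpenAI’s actual interface:

```python
# Toy sketch of the tool/plugin dispatch pattern. A real plugin exposes an
# HTTP API described by a manifest; here a plain function stands in for it.

def lookup_flights(origin: str, dest: str) -> list[str]:
    # Hypothetical tool: a real plugin would call a travel provider's API.
    return [f"{origin}->{dest} 09:15", f"{origin}->{dest} 17:40"]

TOOLS = {"lookup_flights": lookup_flights}

def handle_model_output(output: dict) -> str:
    """Dispatch a tool call requested by the model, or pass plain text through."""
    if output.get("tool"):
        result = TOOLS[output["tool"]](**output["args"])
        return f"Tool result: {result}"
    return output["text"]

# The model "decides" to call a tool instead of answering directly:
print(handle_model_output(
    {"tool": "lookup_flights", "args": {"origin": "GVA", "dest": "LIS"}}
))
```

The safety risk OpenAI flags lives in that dispatch step: once the model can trigger real-world actions (bookings, orders, payments), a wrong or manipulated tool call has consequences beyond bad text.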
And speaking of Microsoft’s rivals, Google has officially launched Bard – but only in the UK and the USA. Scarcely a day after it was launched, Bard was picking up ChatGPT-generated misinformation – about Bard being shut down, no less – also prompting concerns about chatbots spreading misinformation on the internet.
French parliament authorised AI-powered surveillance to secure 2024 Olympics
French lawmakers have passed a bill to use AI-powered surveillance technology to secure the 2024 Paris Olympics. It allows surveillance ‘on an experimental basis and until 30 June 2025, for the sole purpose of ensuring the security of sporting, recreational, or cultural events which, by their scale or their circumstances, are particularly exposed to the risk of acts of terrorism or serious threats to the safety of persons’. The bill specifies that biometric data won’t be processed and facial recognition technologies won’t be used.
The use of AI-powered surveillance has raised concerns about privacy and civil liberties, with critics warning that the technology could be used to monitor and control citizens beyond and after the Olympics. It also may conflict with the EU’s Artificial Intelligence Act, currently under discussion in Brussels.
The bill can still be contested at the country’s constitutional court, Politico writes.
// COMPETITION //
High-Level Group on the DMA established
A high-level group has been established to provide the European Commission with advice and expertise related to the implementation and enforcement of the Digital Markets Act (DMA). The group will offer advice and recommendations within members’ areas of expertise on implementation and enforcement matters, promote a consistent regulatory approach, provide expertise in market investigations, and assess interactions between the regulation and sector-specific rules applied by national authorities. The group will submit an annual report to the Commission, which will be communicated to the European Parliament and the Council.
Was this newsletter forwarded to you, and you’d like to see more?
The EU makes repair easier for consumers in new proposal
The European Commission adopted a new proposal containing common rules promoting the repair of goods. The proposal aims to make repair easier and cheaper for consumers. Sellers will be required to offer repair, except when it is more expensive than replacement. The proposal also includes the right for consumers to claim repair from producers, the producers’ obligation to inform consumers about which products they are obliged to repair, an online platform to connect consumers with repairers and sellers of refurbished goods, a European Repair Information Form for transparency on repair conditions and prices, and a European quality standard for repair services.
Mobile phones, cordless phones, and tablets will soon be included in the list of covered goods.
Such measures could have a significant impact on e-waste, as data shows that Europe has the highest annual generation of e-waste per capita – 16 kilograms per person.
The counter below shows the amount of e-waste generated worldwide so far in 2023.
// DIGITAL RIGHTS //
OEI Adopts Ibero-American Charter of Principles and Rights in Digital Environments to promote inclusion
The charter aims to guarantee inclusion in information societies via the exercise of fundamental rights in the framework of digital transformation. It does so by promoting ten common principles. States should consider these when developing and adopting policies related to protecting rights and fulfilling duties in the digital environment. Companies, civil society, and academia should consider the principles when developing and applying technologies.
These principles are:
Centrality of the person. Rights and duties in digital environments
Digital inclusion and connectivity
Privacy, trust, data security and cybersecurity
Full access to education, culture and health in inclusive and safe digital environments
Special attention to girls, boys and adolescents
Social, economic and political participation in fair and sustainable digital environments
Digital public administration
Fair, inclusive, and secure digital economy
An approach to emerging technologies that does not renounce the centrality of people
Ibero-American assistance and cooperation for digital transformation
Facebook’s content moderator layoffs in Kenya blocked by judge
There is, however, some tentatively good news. A Kenyan judge has issued a temporary order blocking Facebook from carrying out a mass layoff of content moderators in the country. The moderators allege unlawful termination by Meta’s former content moderation partner Sama, and that Meta instructed its new content moderation partner, Majorel, to blacklist former Sama employees. The judge’s order prohibits Facebook from terminating the moderators’ contracts until the case has been heard and determined. The case is set to be heard on 28 March.
27 March: The Global Digital Compact (GDC) co-facilitators are organising a series of thematic deep dives to prepare for intergovernmental negotiations on the GDC. The 27 March discussion will cover digital inclusion and connectivity, and it will be guided by the following questions: 1. How can governments, international organisations, private companies, and civil society work together to close the digital divide and improve access, skills, and meaningful connectivity for all? 2. What actions should be taken to enable digital inclusion for all? 3. What policies, frameworks, and programs have proven most successful and should be scaled up and adapted to other contexts to foster digital inclusion?
As these in-depth discussions unfold, the GIP will look at how their focus topics have been tackled in different key policy documents, from the outcomes of the World Summit on the Information Society (WSIS) (2003–2005), through the UN Secretary-General’s Roadmap for Digital Cooperation, to the latest UN General Assembly resolution on ICT for sustainable development. Visit our dedicated page on the Digital Watch observatory to see how issues related to digital inclusion and connectivity have been covered in such documents. Bookmark the page to return to it later, as we’ll continue to do the same for other topics tackled during the deep dives between now and June.
27–31 March: The 26th session of the Commission on Science and Technology for Development (CSTD) will meet in Geneva to discuss technology and innovation for cleaner, more productive, and more competitive production, and ways to ensure safe water and sanitation for all through solutions from science, technology, and innovation.
28 March: Universal Acceptance (UA) Day promotes the advantages of UA and will consist of raising awareness and improving technical skills related to UA. Find your local UA event here.
29 March: The European Commission and the European Group on Ethics in Science and New Technologies (EGE) will host the Open Round Table on democracy in the digital age to discuss how technological innovation can serve democracy and how to strengthen democratic systems for meaningful civic participation.
In a letter entitled ‘The Age of AI has Begun’, Gates predicts that AI ‘will help empower people at work, save lives, and improve education’. He suggests ways to mitigate the risks of AI, and also suggests principles that should guide public conversation about AI. Read the letter here.
What’s new with cybersecurity negotiations? OEWG 2021–2025 fourth substantive session
Here’s what stood out this time around: two proposals to counter ransomware, high hopes for a future Points of Contact (PoC) directory, and in-depth discussions on the applicability of the UN Charter to cyberspace. Proposals also abounded – you can find those in the orange boxes dispersed throughout the blog. Read the blog here.
Andrijana Gavrilovic
Editor, Digital Watch, and Head of Diplomatic and Policy Reporting, DiploFoundation
Amid all the excitement over the release of GPT-4 (our highlight in today’s digest), the major actors in digital policy seemed to be very focused on their geopolitical priorities and their quest for digital supremacy. In other news, TikTok has been banned in more regions, while the inventor of AI machine DABUS continues his legal fight in courts across the world.
Let’s get started Stephanie and the Digital Watch team
// HIGHLIGHT //
GPT-4 is (already) here
As soon as we got used to the idea that a chatbot would be revolutionising the way we write, learn, and work, along came a new update from ChatGPT inventor OpenAI: GPT-4, which leverages more data and computation than the previous models (you’ll have to take the company’s word for it).
During roughly the same period (that is, last week), Google announced a range of AI tools for its email, collaboration, and cloud software, while Microsoft announced Copilot, which will overhaul its entire line-up of Office applications. China’s Baidu joined the ranks with its Ernie bot.
The incessant announcements of new releases tell us we’re in for an eventful year: finding out the impressive things generative AI can do (my favourite’s below); discovering ways in which it can automate our work (but hopefully not our thinking); and witnessing how it’s changing the face of marketing and educational methodologies.
A new product or software release will push the boundaries at each turn. In a way, it will be like the mobile phone business: You think you own the latest model until you realise the company announced a new version yesterday.
What are a couple of meals I can make with this?
Based on the items visible in the fridge, here are a couple of meal ideas:
1) Yogurt parfait: Layer yogurt, strawberries, and blueberries in a bowl or jar. Top with honey or nuts for added texture and flavor.
2) Carrot and hummus wrap: Spread hummus on a tortilla or wrap. Top with shredded carrots and any other desired veggies. Roll up and enjoy!
The speed at which things are moving means that the typically slow-moving regulatory environment will have an even tougher time catching up. Two areas in which policymakers will need to move swiftly are enacting regulations for the safe, transparent, and ethical use of AI and updating intellectual property rules given the large amounts of copyrighted data these systems are using.
The biggest challenge for policymakers is how to future-proof these regulations, that is, how to introduce flexible safeguards that keep the door open for innovation and still apply to emerging technologies. Rather than a complete rewrite, such regulations would (hopefully) only need to be adapted. A good example is the EU’s proposed AI Act, which will create a process for self-certification and oversight, based on the level of risk of an AI system.
Future-proofing the regulations also helps create an environment where innovation is encouraged and supported, and consumers are protected. In turn, this fosters an environment of trust, which is so badly needed. There has never been a more important time for speedy tech regulation than now.
In addition, New Zealand will ban TikTok on devices with access to the country’s parliamentary network. As for the European Parliament, the ban announced earlier in March extends to all data networks managed by the institution, including the Wi-Fi network accessible to visitors.
The reasons. It’s the same chorus: data privacy and cybersecurity risks. Because TikTok’s parent company ByteDance is Chinese, governments worry that the Chinese government can access users’ data through ByteDance.
A ban or a split. As we wrote last week, TikTok’s plans to develop tighter privacy standards and open new data centres in Europe might not be enough to appease policymakers in the EU. Things are looking much worse for TikTok in the USA, where new regulations that would empower the government to ban the app are looming. Media reports are now saying that TikTok is considering splitting from ByteDance if everything else fails. This would surely prove TikTok’s resolve, if it comes to that.
// GEOPOLITICS //
Digitalisation a priority for EU’s long-term competitiveness
Digitalisation is one of the main priorities for the EU’s future, the European Commission said in its communication to mark the 30th anniversary of the Single Market. The commission identified nine priorities for securing long-term competitiveness.
The EU wants a broader take-up of digital tools across the economy, and stronger tech sectors, including AI, quantum computing, microelectronics, web 4.0, virtual reality and digital twins, and cybersecurity.
The speech marked a significant moment in China’s history as it was Xi Jinping’s first address since he began his third term as president on 10 March.
China publishes white paper on law-based cyberspace governance
China’s State Council Information Office (SCIO) has released a white paper: China’s Law-Based Cyberspace Governance in the New Era. The first of its kind, the document recaps China’s laws and regulations related to the internet and enforcement.
At a press conference, an SCIO deputy director expressed China’s willingness to work with the international community in global governance processes, and to engage in consultations and make joint contributions.
EU, Latin America, Caribbean partners launch EU-LAC Digital Alliance
There’s a new partnership in town: The European Union–Latin America and Caribbean Digital Alliance, which will focus on building digital infrastructures and promoting connectivity and innovation. The EU is making an initial contribution of €145 million to implement the alliance’s digital projects. A high-level EU-LAC summit is planned for July.
// E-ID //
EU’s draft eID directive goes to trilogue
The European Parliament has given its formal nod for negotiations on the new eID directive to start with the EU Council. The new rules for electronic identification and authentication – operated through a personal digital wallet on a mobile phone – will give citizens and businesses digital access to the main public services across the EU.
The inter-institutional discussions, also known as trilogues, will start immediately. The Swedish presidency of the EU council is hoping to reach a political agreement by June, before it passes the baton to Spain.
Patent battle for AI machine’s inventions continues
The quest of American computer scientist Stephen Thaler to have his AI machine DABUS declared a patent-holder for its inventions continues.
This time, Thaler will try to convince the US Supreme Court that the US Patent Act doesn’t restrict the term inventor to human beings. If the Supreme Court reverses the lower courts’ decisions, it could open the door to a new system in which AI machines could be named as inventors on patents for their creations.
ICANN comes one step closer to launching a new round of gTLDs
ICANN, the organisation responsible for managing the internet’s address book, is getting ready to launch a new gTLD round. This comes after the ICANN Board approved the majority of recommendations (see Section A of this scorecard) made by a working group focused on preparing the procedural rules.
gTLDs, short for generic top-level domains, are found in the last part of a website address, such as .com or .org. Experts say the entire process could take more than a year before applicants can actually submit a request. The last time a new round was issued was in 2012.
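As a toy illustration of ‘the last part of a website address’, the TLD is simply the final DNS label of a hostname. A minimal sketch (which deliberately ignores country-code second-level suffixes such as .co.uk):

```python
def top_level_domain(hostname: str) -> str:
    """Return the final DNS label of a hostname, e.g. 'com' for 'example.com'."""
    # Strip any trailing root dot, then take everything after the last dot.
    return hostname.rstrip(".").rsplit(".", 1)[-1].lower()

print(top_level_domain("www.example.com"))  # com
print(top_level_domain("diplomacy.edu."))   # edu
```

Production code would consult the Public Suffix List rather than naive string splitting, since many registries operate multi-label suffixes.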
// CONTENT POLICY //
Trump’s back. Former US President Donald Trump has made a comeback on social media with a solitary ‘I’M BACK’ message accompanied by an old video announcing his election in 2016, which fades to a ‘Trump 2024’ screen. We’re wondering what he’ll say next.
20–23 March: Mozilla’s festival for activists returns in virtual format. It will bring people together ‘to better our digital landscape, build transformative systems, and sustain momentum within our community towards positive human and digital rights progress’.
21 March: The DC Blockchain Summit 2023, hosted by the Chamber of Digital Commerce in Washington DC, will gather policymakers, entrepreneurs, business leaders, investors, and other experts to discuss regulatory trends, the latest innovations, and issues in the blockchain industry.
Policy report: The New American Foreign Policy of Technology
A new report by the German Marshall Fund of the United States argues that the tech challenges of the 21st century can’t be solved with the old 20th-century system. Rather, the USA needs a new digital foreign policy. The digital foreign policy must include three key parts: (a) a digital policy lab for national policymaking and partnerships; (b) a technology task force for deepening cooperation among like-minded countries in supply chains (such as semiconductors); (c) global promotion of the US Declaration for the Future of the Internet. Full text.
Opinion: Regulatory first movers
Former chairperson of the US Federal Communications Commission Tom Wheeler – remembered principally for the 2015 Open Internet Order that established strong net neutrality rules in the USA – has written about the regulatory approaches of the EU, UK, and USA. The title of his article sums up the difference between the three players: ‘The UK and EU establish positions as regulatory first movers while the US watches’. Read here.
Iowa has become the sixth state in the US to pass a comprehensive data privacy law, which applies to companies that control or process data of at least 100,000 Iowa…
The digital euro should be safe and resilient, easy and convenient to use, and widely accessible to the public. The ECB is collaborating with other central banks to understand the…
OpenAI has released GPT-4, a large multimodal language model. GPT-4 has the potential to significantly disrupt traditional industries, such as legal and medical services, by providing automated assistance.
At the ICANN76 Community Forum in Mexico, the ICANN Board adopted a series of recommendations made in the context of what within the ICANN community is known as the New…
We start the week with news that the Silicon Valley Bank, a major lender to Big Tech companies, has collapsed. This only adds to the financial woes of tech companies, especially those which have had to consider layoffs.
Meanwhile, much of the debate during last week’s fourth substantive session of the UN Open-Ended Working Group (OEWG) highlighted existing divergences between countries advocating for an international treaty, others in favour of implementing existing norms of state behaviours, and yet others somewhere in between. We’ll publish an in-depth analysis on our OEWG page next week.
In other news, India and China are planning changes in how they manage their data, while Big Tech wrestles with Canada over its new media bargaining draft rules. Let’s get started.
Stephanie and the Digital Watch team
// HIGHLIGHT //
Will TikTok’s Project Clover be enough to appease European policymakers?
TikTok’s relationship with US and European policymakers is shaky at best. In the USA, the company is still trying to recover from Trump’s attempt to ban the app from operating in the country unless it was sold to an American company. The attempt may have failed back then, but policymakers are still intent on passing rules that would give the government the power to ban the company.
The EU is following suit: In their most recent action against TikTok, the EU’s institutions announced a ban on TikTok from being used on the personal devices of their staff. (So did the US Government, Canada, and Belgium.)
Antagonism against TikTok. The reason for this antagonistic approach is that TikTok’s parent company, ByteDance, is Chinese. There are three main causes for concern.
Citing national security issues, policymakers fear that the Chinese government could be using the platform for espionage or other malicious activities. Although the USA and the EU share common concerns, the USA has been challenging TikTok (undoubtedly also influenced by the ongoing trade war between the USA and China), for longer than the EU.
Policymakers also worry that users’ data can be accessed by China (TikTok has admitted that non-EU staff do have access to European data). Here, there’s more at stake for the EU due to its tougher legislation on data protection; the EU has in fact been more vociferous on data handling than it has on issues of national security.
TikTok has been sued in several countries for exposing children to harmful content and for other practices that place children’s privacy at risk. The investigations and lawsuits follow their natural course: imposing fines on TikTok where appropriate and mandating a change in practices where necessary. Even though these practices can harm kids, no company has ever been banned from a market over this issue.
This raises two questions: Will the EU follow the USA’s stance on security? And will TikTok manage to reassure the EU that it’s severing all connections with China regarding European data? In the interests of cooperation between the USA and Europe, we’re bound to see a certain ripple effect in the actions the USA and Europe take when it comes to national security. But when protecting citizens’ data, the EU will want to make up its own mind.
Enter Project Clover. TikTok has just announced new data access and data security measures, including (a) security gateways that will determine employee access to European TikTok user data and data transfers outside of Europe; (b) external auditing of these processes by a third-party European security company; (c) new privacy-enhancing technologies to ensure that data cannot be de-anonymised ‘without additional information’.
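TikTok hasn’t detailed which privacy-enhancing technologies it will use, but keyed pseudonymisation is a common building block behind phrases like ‘cannot be de-anonymised without additional information’: identifiers are replaced with values that can only be linked back using a secret key held elsewhere. A generic sketch of the idea, not TikTok’s actual design:

```python
import hashlib
import hmac

# The key is the 'additional information': stored separately from the dataset.
SECRET_KEY = b"held-separately-from-the-dataset"

def pseudonymise(user_id: str) -> str:
    """Replace an identifier with a keyed hash. Without the key, the mapping
    cannot be recomputed, so small ID spaces can't be reversed by brute force."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

# The same input always maps to the same pseudonym, so analytics can still
# join records without ever seeing the original identifier.
print(pseudonymise("user-12345"))
print(pseudonymise("user-12345") == pseudonymise("user-12345"))  # True
```

Pseudonymisation is weaker than anonymisation (whoever holds the key can re-link the data), which is why controls on who can access the key matter as much as the technique itself.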
This comes in addition to recent announcements that TikTok will open three European data centres (two in Ireland and one in Norway), to the tune of €1.2 billion (US$1.3 billion) in annual running costs.
Nor is it the first project of its kind: A similar plan, Project Texas, is ongoing in the USA. But Project Clover is a tad more ambitious. According to TikTok, the new data access rules will not only comply with the EU’s GDPR, but will introduce higher data access standards.
Not quite there yet. All of this should help appease European policymakers. But as they say, the devil is in the details – details that put Project Clover on the right track, but don’t yet go quite far enough.
First, TikTok will probably be nudged into confirming that not only will the data be housed in Ireland and Norway, but that access to it will also be confined to Europe. The carefully-worded announcement falls short of affirming this (current wording includes: ‘Our existing data access controls already highly limit access to user data’; ‘security gateways… will determine employee access to European TikTok user data and data transfers outside of Europe’; ‘this will add another level of control over data access’).
Second, TikTok will need to provide clearer timelines on when it will implement what it promises to do. Clear schedules will also be important for timely compliance with the tougher obligations that the Digital Services Act imposes on very large online platforms and for implementing its sister legislation, the Digital Markets Act.
The current provision on cross-border data flows (Section 17 of the bill) states that the ‘Central Government may… notify such countries or territories outside India to which a data fiduciary may transfer personal data…’ Sources told the newspaper this is likely to be amended, with the bill allowing cross-border data flows to all locations by default, with the exception of blacklisted countries, where transfers would be restricted.
China plans to set up new bureau for data governance; reiterates calls for self-reliance
China is planning to set up a new government agency to centralise the management of its data. The proposed national data bureau, to be coordinated by the National Development and Reform Commission, will be tasked with the sharing and application of data resources. The plan was submitted last week for deliberation during the annual session of the National People’s Congress, one of the ‘two sessions’. The proposed body will also be responsible for advancing the development of China’s digital economy and other digital-related ambitions.
In separate news, Chinese President Xi Jinping has renewed calls to speed up efforts for achieving greater self-reliance in science and technology. China’s strength should ultimately rely on scientific and technological innovation, he told the National People’s Congress.
// CONSUMER PROTECTION //
WhatsApp agrees to be more transparent when updating terms of service
The European Commission has announced that WhatsApp has committed to being more transparent on changes to its terms of service, in settlement of an EU consumer probe. For future updates to its terms, WhatsApp promised it would explain clearly to users what changes it intends to make to its contracts and how the changes could affect users’ rights. WhatsApp has also committed to making it easier for users to reject updates if desired and to explain to users what happens when they reject an update.
The announcement came after discussions between the company and the Consumer Protection Cooperation (CPC) Network of authorities responsible for enforcing EU consumer protection laws. The CPC Network will monitor how the company implements these commitments.
Google will stop blocking news content in Canada…
Google will stop blocking news content in Canada, the company told a parliamentary panel last week. A few weeks ago, Google temporarily blocked access to news content in Canada in reaction to draft rules that would oblige internet platforms to compensate Canadian media companies for making news content available on the platforms.
…But Meta threatens to start blocking access
Facebook’s parent company Meta, however, announced it would end access to news content for Canadian users if the rules are introduced in their current form.
A Meta spokesperson said: ‘A legislative framework that compels us to pay for links or content that we do not post, and which are not the reason the vast majority of people use our platforms, is neither sustainable nor workable.’
// CYBERSECURITY //
US budget proposes big spend on cybersecurity
The new US budget for 2024, announced last week by President Biden, is requesting significant funding for cybersecurity operations, including US$63 million for strengthening the FBI’s capacity, and US$145 million to improve the capacities of the Cybersecurity and Infrastructure Security Agency (CISA).
In addition, the budget requests more than US$395 million to advance global cyber and digital development initiatives, including the Department of State’s Bureau of Cyberspace and Digital Policy, and regional initiatives such as Digital Transformation with Africa.
// JOBS //
Meta to lay off thousands of employees (again)
It was reported in the Financial Times, and now it’s Bloomberg’s turn to report that Meta is planning to cut thousands of jobs. The company, which has not issued any official communication on this second round of job cuts, laid off more than 11,000 employees in November.
The week ahead (13–19 March)
13–16 March: Organised by ITU and the UN Economic Commission for Europe (UNECE), the all-virtual Future Networked Car Symposium 2023 will examine the latest advances in vehicle connectivity, automated mobility, and the role of AI in the transport sector.
13–17 March: This year’s WSIS Forum 2023, in Geneva and online, will focus on how to accelerate the achievement of the SDGs. (Pass by our booth!)
16 March: The European Commission’s public consultation on the enforcement procedures of the new Digital Services Act closes.
16–17 March: The ninth edition of the Blockchain Africa Conference takes place in Johannesburg, South Africa, and online. It will focus on the opportunities of using blockchain technology and potential use cases in Africa and globally.
#ReadingCorner
Chomsky dubs marvels of ChatGPT a ‘false promise’
If there’s one person we would ask for his thoughts on ChatGPT, it’s Professor Noam Chomsky, widely known as the father of modern linguistics. In last week’s op-ed, Chomsky wrote that content generators’ ‘deepest flaw is the absence of the most critical capacity of any intelligence’. Read the full text.
Ready, set, go! Well, except not everyone was at the starting line.
Microsoft took off like a bullet with its surprise press event, announcing that it would integrate the large language model (LLM) GPT-4, created by OpenAI, into its Bing search engine and Edge browser. Google, which had previously dubbed the threat that ChatGPT poses ‘code red’, had to react fast. The next day, it announced its conversational AI named Bard. And this is only the first lap of the race.
What sets ChatGPT apart from previous language models?
OpenAI’s decision to make ChatGPT available to the public for free was a bold move, considering the significant costs associated with maintaining the model while handling millions of questions from curious users – costs that OpenAI CEO Sam Altman himself called ‘eye-watering’. However, this decision proved to be not only brave but also astute. By making ChatGPT accessible to everyone, OpenAI ignited a spark of curiosity and interest that captured worldwide attention.
In reality, ChatGPT was not the first model to be made available to the public for free. Meta’s Galactica attempted the same feat, suffered an infamous flop, and was taken offline after only a few days. So how did ChatGPT succeed when Galactica didn’t? The answer is mainly related to how it was advertised and to whom. Galactica was presented to the academic community as an AI capable of effortlessly writing scientific papers – a field with a demanding and critical audience that is easy to disappoint. On the other hand, ChatGPT – and its limitations – was promoted publicly as an open and accessible tool for everyone to experiment with and enjoy. This approach, combined with ChatGPT’s performance, set it apart from previous models and made it a game changer in the world of AI.
The decision to make ChatGPT available to the general public came with limitations of its own. Unlike open-source models, ChatGPT is only accessible in a limited way: interested researchers cannot look ‘under the hood’ of the model or adapt it to their specific needs – a different approach from the commendable transparency OpenAI showed with its Whisper model. From a business standpoint, OpenAI’s decision to offer ChatGPT to the public for free, despite the high costs involved, proved to be the right one, as it attracted attention and funds.
How other companies reacted
ChatGPT has stirred up unprecedented attention and competition in the world of AI. The public’s tremendous interest in ChatGPT has prompted significant players in the industry to react swiftly. This type of competition is undoubtedly exciting, but it can also result in losses. Nevertheless, the strategies employed are quite intriguing.
Microsoft did not hesitate to allocate US$10 billion toward integrating ChatGPT into its leading products, including Skype, Teams, and Word. It did so rapidly and openly in response to ChatGPT’s overwhelming popularity, setting a precedent for others to follow.
Google hastily announced it would integrate the Bard model into Google Search, but stumbled at the first step when Bard made a factual error in its launch demo. The misstep resulted in significant early losses for the company, as a 10% share price drop wiped US$170 billion off Google’s market value.
Despite the setback caused by the public release and open-sourcing of the Galactica model, Meta persists in its approach of open-sourcing its state-of-the-art AI models. With the recent open-sourcing of its latest large language model, LLaMa, Meta seems to be attempting to decrease user reliance on the OpenAI GPT API by providing access to new models. LLaMa is not the first large-scale model to be open-sourced – BLOOM and OPT have been available for some time – but their application has been limited by their high hardware requirements. LLaMa is about ten times smaller than these models and can run on a single GPU, potentially helping more researchers access and study large language models, while achieving results similar to GPT-3.
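The single-GPU claim comes down to simple arithmetic: memory for inference is dominated by the model weights, which scale linearly with parameter count. A back-of-the-envelope sketch (the fp16 assumption and the parameter counts – roughly 7 billion for LLaMa’s smallest variant, 175 billion for GPT-3 – are our additions, not from the newsletter):

```python
def inference_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Approximate memory (GB) needed just to hold model weights.

    bytes_per_param: 4 for fp32, 2 for fp16/bf16, 1 for int8.
    """
    return n_params * bytes_per_param / 1e9

# LLaMa's smallest variant (~7B params) vs GPT-3 (~175B params), in fp16:
llama_7b = inference_memory_gb(7e9)     # ~14 GB: fits one high-end GPU
gpt3_175b = inference_memory_gb(175e9)  # ~350 GB: needs a multi-GPU cluster

print(f"LLaMa-7B: ~{llama_7b:.0f} GB, GPT-3: ~{gpt3_175b:.0f} GB")
```

Activations and serving overhead add to these figures, but the order-of-magnitude gap is what puts smaller open models within reach of individual researchers.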
China’s tech giants haven’t been wasting time: Baidu is planning to integrate its chatbot Ernie (short for Enhanced Representation through kNowledge IntEgration) into its search engine in March and eventually into all Baidu operations.
Regulators’ response
Regulators in both the USA and China are taking notice: OpenAI CEO Sam Altman has been meeting with US lawmakers, who reportedly pressed him on bias, the speed of changes in AI, and AI’s potential uses.
In China, ChatGPT is not officially available, but users have been able to access it through workarounds. However, regulators have told key Chinese tech companies not to integrate ChatGPT into their services, over risks that the software ‘could provide a helping hand to the US government in its spread of disinformation and its manipulation of global narratives for its own geopolitical interests’. Chinese tech companies will also have to report to regulators before they roll out their own ChatGPT-like services.
Generative AI’s impact on our future
February’s developments in generative AI once again brought forth the question: Will AI take over our jobs? Beyond the economics, work brings meaning and fulfilment to humans, which is why fears that a new technology will make human labour redundant run wild with every new tech hype.
Our colleagues from Diplo’s AI Lab, who have also been using language models to develop some pretty smart AI tools (we’re biased here), think that AI won’t make most jobs redundant. Some jobs have an intrinsically interhuman nature, and AI will be hard-pressed to replace those.
Yet, AI will make some jobs redundant, as has also been the case with every new technology.
The good news is that AI tools will free up time for workers by taking mundane tasks off their lists. And while some jobs will disappear, new ones will appear, as Microsoft CEO Satya Nadella and Microsoft founder Bill Gates have already noted. The question is: How do we ensure that both current and future generations are prepared for these and similar changes in the job market?
Would you like to share your thoughts on generative AI with us? Write to us at digitalwatch@diplomacy.edu!
Digital policy developments that made global headlines
The digital policy landscape changes daily, so here are all the main developments from February. We’ve decoded them into bite-sized authoritative updates. There’s more detail in each update on the Digital Watch Observatory.
The White House has published a National Cybersecurity Strategy, noting that big companies should take more responsibility for insecure software products and services.
The European Commission launched a public consultation on the future of connectivity, which is considered a prelude to plans that could require Big Tech to pay their share of costs related to digital infrastructure. It also published a proposal for a Gigabit Infrastructure Act.
The Australian Competition and Consumer Commission (ACCC) will examine whether interconnected products and services offered by digital platforms harm competition and consumers.
The signatories to the 2022 Code of Practice on Disinformation, which include all major online platforms, have set up a Transparency Centre to guarantee the transparency of their efforts to combat disinformation, and released reports on their implementation of the code’s commitments.
The European Commission dropped its complaint against Apple’s in-app purchase mechanism, which obliges music streaming app developers to use the proprietary system if they want to distribute paid content on iOS devices, but will continue investigating Apple’s anti-steering practice.
Technologies
Representatives of 59 countries issued a joint call to action on the responsible development, deployment, and use of AI in the military domain.
The Netherlands will restrict the export of the most advanced semiconductor technology, including deep ultraviolet (DUV) lithography systems.
How are algorithms putting Section 230 to the test?
It’s long been argued that Section 230 of the Communications Decency Act – the US law that protects social media platforms from liability for content their users post – should be narrowed down or abolished outright.
Section 230, which has been the object of so many debates, is a nifty two-sentence rule which states that: (a) platforms aren’t publishers (hence, they aren’t liable for the content users post, unlike publishers), (b) in cases where platforms self-police third-party content, they can’t be punished for other harmful content they don’t remove.
Admittedly, this rule has allowed the internet to flourish. Platforms were able to host a huge amount of user content, unshackled from the fear of liability. It also allowed content to be posted instantaneously, without platforms being required to review it before making it public. Freedom of expression thrived.
But now, the age of algorithms is putting Section 230 to the test. A few weeks ago, the US Supreme Court began hearing arguments in two cases, both initially decided by the Ninth Circuit, that could have implications for Section 230.
Gonzalez vs Google
In a lawsuit against Google, the family of an American woman killed in an attack in Paris by Islamist militants has been arguing that Google’s algorithm recommended content from the militant group to YouTube users. The family is appealing the first judgement by arguing that Section 230 doesn’t provide platforms immunity when it comes to algorithm-recommended content, as the suggestions made by algorithms are not third-party content but the company’s own. Google’s argument, on the other hand, is that Section 230 is not just about protecting companies from third-party content but goes as far as to state that platforms shouldn’t be considered publishers.
Many companies and organisations have filed court briefs in support of Google. Twitter, for instance, argues that algorithms provide a way of prioritising some content over other content (‘newer content over older content’) but do not convey any content of their own. Microsoft argues that algorithms are so essential to daily life that Section 230 shouldn’t be narrowed or reinterpreted, as doing so would ‘wreak havoc on the internet as we know it’.
Twitter vs Taamneh
The second case is an appeal filed by Twitter after the Ninth Circuit set aside Section 230, allowed the case to proceed, and ruled that Twitter and other platforms had not taken adequate steps to prevent terrorist content from appearing on their platforms. The family of a Jordanian man, killed along with 38 other victims in a 2017 attack, accused Twitter of failing to police the platform.
The focus of this case is the antiterrorism law, but since the appeal could overturn the lower court’s judgement (which included a ruling on Section 230), the appeal could also have repercussions for Section 230.
What’s at stake?
Although the two cases are connected, it’s the Gonzalez case which is likely to tackle the question of algorithms: whether platforms can be held liable for content promoted by their algorithms.
There are several possible outcomes to both lawsuits, which will be decided by the end of June. The most drastic outcome is for the court to remove the protections which Section 230 gives to platforms. But judging by what’s at stake, this is quite unlikely.
A more realistic outcome is for the Supreme Court to retain the current interpretation of Section 230 or, at most, introduce a subtle limitation. Then, it will be up to the legislative branch to address the discontent aired by policymakers in recent years.
Policy updates from International Geneva
Numerous policy discussions take place in Geneva every month. Here’s what happened in February.
The third session of the Open-ended Working Group (OEWG) on reducing space threats took place at the UN Office in Geneva. The OEWG is mandated, inter alia, to make recommendations on possible norms, rules, and principles of responsible behaviours relating to threats by states to space systems. It was set up by UN General Assembly Resolution 76/231 and has already convened twice: (a) from 9 to 13 May 2022 and (b) from 12 to 16 September 2022. The OEWG is expected to meet again from 7 to 11 August 2023 and to submit a final report to the 78th UN General Assembly in September 2023.
The International Telecommunication Union (ITU) organised a series of webinars on combating counterfeit and stolen ICT devices. In the first episode, panellists from different stakeholder groups presented issues and challenges related to the circulation of counterfeit ICT devices. Particular attention was given to possible solutions via standardisation.
Explore digital Geneva!
Need a guide through internet governance in Geneva? Our Geneva Digital Atlas, where you can find details and contacts for the 46 most relevant digital policy actors, will be with you on your journey. Keep an eye on our Instagram, Twitter, YouTube, Facebook or LinkedIn for weekly Geneva Digital Tours videos where high-level names take you on a tour through their institutions. During March, we feature organisations involved in standardisation and infrastructure. Our first guest is Doreen Bogdan-Martin, Secretary-General of ITU!
ICANN76 will offer opportunities for the ICANN community to address various issues concerning ICANN’s activity and domain name system (DNS) management. The ICANN76 Community Forum programme schedule includes capacity development/training, cross-community interaction, policy development, outreach/engagement, security/technical matters, and reporting/updates. Read more.
The theme of the World Summit on the Information Society (WSIS) Forum 2023 is ‘WSIS Action Lines for Building Back Better and Accelerating the Achievement of the SDGs’. The WSIS Forum is a global multistakeholder platform for advancing sustainable development through the implementation of the WSIS Action Lines. The forum facilitates information and knowledge sharing, knowledge creation, identifying emerging trends and fostering partnerships with UN organisations and WSIS Action Line co-facilitators. Diplo and the Geneva Internet Platform (GIP), with the support of the Permanent Missions of Djibouti, Kenya, and Namibia, are co-organising a session during the WSIS Forum to discuss Africa’s digital diplomacy. The session explores how Africa can enhance its participation in global digital governance, considering its growing digital economies, start-up ecosystems, and dynamic digital transformation. It aims to identify digital policy priorities, improve Africa’s participation in global digital governance processes, and offer practical insights to strengthen Africa’s digital diplomacy in international processes related to cybersecurity, AI, data governance, and access and infrastructure. Ultimately, the session will propose practical steps for developing African digital diplomacy. Read more.
The Commission on Science and Technology for Development (CSTD) will hold its 26th session under the main themes: Technology and innovation for cleaner and more productive and competitive production; Ensuring safe water and sanitation for all: a solution by science, technology and innovation. The commission will focus on analysing how science, technology, and innovation can serve as enablers of the 2030 Agenda, especially in crucial areas like economic, environmental, and social development. The CSTD will also review the progress made in the implementation of and follow up on the outcomes of the WSIS Forum on regional and international levels; and hear presentations on ongoing science, technology, and innovation policy reviews. Read more.
The seventh session of the World Intellectual Property Organization’s (WIPO) Conversation on Intellectual Property and Frontier Technologies will concentrate on the intersection of IP and the metaverse, exploring the frontier technologies that make it possible and examining the challenges they pose to the existing IP system. The primary objective of the session is to provide a roadmap for addressing these challenges to ensure that innovation and development continue to benefit everyone. Read more.
The Digital Watch observatory maintains a live calendar of upcoming and past events.
This week, we dive into the White House’s new cybersecurity strategy, which marks a fundamental shift away from decades of reliance on industry self-regulation. In other news, the European Commission has narrowed its antitrust investigation into Apple’s marketplace practices, while China and India have announced new plans on digital development and non-personal data.
Happy March!
Stephanie and the Digital Watch team
// HIGHLIGHT //
USA’s new cybersecurity strategy: Big companies should take more responsibility for insecure software products and services
The White House’s new National Cybersecurity Strategy, released last week, makes a major announcement: The US government will shift the burden of defending cyberspace away from individuals, small businesses, and local governments, and onto large tech manufacturers and software companies.
In essence, this means new laws – down the line – that will hold large companies accountable for failing to take reasonable precautions to secure their products and services. Down the line, because it’s not something that will be developed overnight. And with the presidential election in 2024, there’s only so much that can be achieved. (Let’s also wait for the strategy’s implementation plan to be published in a few months’ time).
And yet, this sets the tone for a fundamental shift away from a decades-long environment in which end users (you and me) have borne the brunt of vulnerable digital technologies – products rushed to release with security flaws, and personal data breaches that companies failed to adequately prevent. The idea is that companies that fail to meet specific standards will be held liable for any data losses or harm caused by cybersecurity errors that could have been avoided with more rigorous security. They will also be prevented from strong-arming their way out of liability just because they hold market power.
An updated cyber-social contract. This major shift in who should bear responsibility is what Kemba Walden, acting national cyber director, described as a change in America’s cyber-social contract. In a press briefing, Walden explained: ‘Today, across the public and private sectors, we tend to devolve responsibility for cyber risk downwards. We ask individuals, small businesses, and local governments to shoulder a significant burden for defending us all. This isn’t just unfair, it’s ineffective.’
Under this reimagined cyber-social contract, the division of tasks between governments and the private sector is quite clear. The strategy explains that ‘in a free and interconnected society, protecting data and assuring the reliability of critical systems must be the responsibility of the owners and operators of the systems that hold our data and make our society function, as well as of the technology providers that build and service these systems.’
On the other hand, ‘government’s role is to protect its own systems; to ensure private entities, particularly critical infrastructure, are protecting their systems; and to carry out core governmental functions such as engaging in diplomacy, collecting intelligence, imposing economic costs, enforcing the law, and conducting disruptive actions to counter cyber threats.’
The days of self-regulation are numbered. The strategy’s heavy stance on regulation signals a break from two decades of efforts to get companies – including those in critical sectors – to voluntarily strengthen all aspects of their cybersecurity, both internally and in their products, databases, and services.
Voluntary approaches to cybersecurity are no longer adequate, Deputy National Security Advisor for Cyber and Emerging Technology Anne Neuberger explained during an event in Washington.
Coalitions for combating ransomware. You may all recall the Biden-Putin summit in Geneva in June 2021, which marked the start of cyber detente (we even ran a monthly newsletter on cyber detente). At the time, the two countries agreed to cooperate to deter ransomware criminal cells (of Russian origin or operating from Russia). Technical work was progressing, until it all went downhill just over a year ago.
In lieu of such cooperation, the USA is working with its allies (such as through the Counter Ransomware Initiative) to pressure Russia and other countries to disrupt malicious behaviour. Through the new plan, the USA also hopes to strengthen these partnerships and carry out what the USA-Russia cyber detente failed to do, especially in combating ransomware.
This was one of two complaints. The second – the so-called anti-steering practice, which restricts app developers from informing iPhone and iPad users of alternative music subscription services – is still a concern for the commission’s ongoing anti-competition investigation.
During an event last week, EU competition chief Margrethe Vestager said ‘We remain concerned about Apple’s anti-steering provisions and its impact on the music streaming market. But we refocused our competition concerns on the direct consumer impact.’
EDPB welcomes improvements under EU-US Data Privacy Framework, but concerns remain
The European Data Protection Board (EDPB), the EU’s data watchdog, wants to see the USA’s commitment to limiting US security agencies’ data collection activities not only on paper but also in practice.
The EDPB’s non-binding opinion on the Draft Adequacy Decision (published by the European Commission in December) welcomes the improvements introduced by a recent executive order, which limits data collection to what is necessary and proportional. However, ‘close monitoring is needed concerning the practical application of the newly introduced principles of necessity and proportionality. Further clarity is also necessary regarding temporary bulk collection and the further retention and dissemination of the data collected in bulk,’ the watchdog said.
An adequacy decision will ultimately confirm that the data of European citizens can be transferred to the USA without additional safeguards.
Digital India Bill to introduce rules for non-personal data sharing
A public consultation on the basic guiding principles and architecture of the upcoming law will take place on 9 March. Once the consultation process is concluded, the government will release a final draft for consultation. The law will replace the decades-old Information Technology Act.
Under this new plan, China will apply digital technology more intensively across the economy, including in the agriculture, manufacturing, finance, education, medical services, transportation, and energy sectors.
On the global front, China also plans to continue participating in multilateral forums, and to cooperate on developing new international rules such as those related to cross-border data flows.
Chinese experts have said that more effort is needed to strengthen the private sector’s role in the semiconductor industry and to cultivate globally competitive high-tech enterprises.
// METAVERSE //
‘It is already time’, says EU competition chief
Speaking during a public event, EU Commissioner for Competition Margrethe Vestager hinted that European policymakers are already looking into metaverse policy.
She said: ‘digital markets have not fulfilled their promise for small businesses to achieve scale and greater reach with fewer physical barriers to get in their way. We have certainly not been too quick to act – and this can be an important lesson for us in the future. We need to anticipate and plan for change, given the obvious fact that our enforcement and legislative process will always be slower than the markets themselves. For example, it is already time for us to start asking what healthy competition should look like in the metaverse, or how something like ChatGPT may change the equation.’
// AI //
Can an AI machine be granted a patent for an invention?
This is the question which UK Supreme Court judges are deliberating after hearing arguments brought forward on appeal by American inventor Stephen Thaler.
The case involves two patent applications for two inventions which Thaler says were created by an AI machine he owns called DABUS (an acronym for Device for the Autonomous Bootstrapping of Unified Sentience). The case has already been dismissed by the High Court and the Court of Appeal, which ruled that patents cannot be awarded in cases where the inventor is not a natural person.
The UK’s Supreme Court is expected to hand down a final judgement in the coming months.
The week ahead (6–12 March)
6 March: The European Commission’s next technical workshop with stakeholders on how to comply with the new Digital Markets Act will address app store-related aspects, including alternative in-app payment systems, steering (a practice which allows developers to inform users about other purchasing options), and sideloading (the process of installing an app that did not come from one of the two main app stores).
6 March: The 19th Annual State of the Net conference, taking place in Washington DC, will bring together internet stakeholders in government and in the private sector to talk about connectivity, cybersecurity, AI developments, and children’s privacy.
6–7 March: The Council of Europe and the Moroccan Ministry of Justice are jointly organising an international conference on strengthening cooperation on cybercrime and e-evidence in Africa.
6–10 March: The UN Open-Ended Working Group (OEWG), tasked with studying existing and potential threats to information security and possible confidence-building measures and capacity development, will hold its 4th substantive session in New York. Deeper discussions on the points of contact (PoC) directory are expected. There will be quite a few side events too.
6–17 March: The priority theme of the 67th session of the Commission on the Status of Women is ‘Innovation and technological change, and education in the digital age for achieving gender equality and the empowerment of all women and girls’.
8–9 March: European trade association DIGITALEUROPE will host chief EU policymakers and leaders from the private sector for the two-day annual Masters of Digital.
10–12 March: The 2nd session of the European Commission citizens’ panel on the metaverse and other virtual worlds will ask people to identify, discuss and prioritise values and principles that should guide their development.
10–16 March: The ICANN76 Community Forum, to be held in Cancún, Mexico and online, will bring together ICANN supporting organisations, advisory committees, and the broader ICANN community to discuss ongoing issues in domain name system (DNS) management. Preparatory meetings took place last week.
#ReadingCorner
EU Cyber Resilience Act: Enforcing cyber norms far beyond Europe
A new article by our colleague Anastasiya Kazakova looks at the extra-territorial effect that the EU’s upcoming cybersecurity law, the Cyber Resilience Act, will have on products and services developed by the private sector for citizens (points that could also apply to new US laws imposing liability for cybersecurity flaws, once those laws materialise). Assuming that companies decide not to lower the bar for non-EU users, the new rules will mean that users worldwide benefit from the stricter requirements. Moreover, EU member states adopting these rules will also contribute to implementing at least three of the norms on responsible state behaviour.