Dear all,
All eyes will be on China, as it prepares to receive France’s Emmanuel Macron, Spain’s Pedro Sánchez, and the EU’s Ursula von der Leyen. We’ll keep an eye out for anything that could impact the digital policy landscape.
Meanwhile, Italy has imposed a temporary limit on access to ChatGPT (our analysis for this week), as content policy shares the spotlight with cybersecurity updates – notably, the revelations from the leaked Vulkan files.
Let’s get started.
Stephanie and the Digital Watch team
// HIGHLIGHT //
Italy’s rage against the machine
Italy has become the first Western country to impose a (temporary) limitation on ChatGPT, the AI-based chatbot developed by OpenAI, which has caused a sensation around the world.
The Italian Data Protection Authority (the Garante) listed four reasons:
- Users’ personal data breached: A data breach affecting ChatGPT users’ conversations and information on payments by subscribers to the service was reported on 20 March. OpenAI attributed this to a bug.
- Unlawful data collection: ChatGPT uses massive amounts of personal data to train its algorithms without having a legal basis to collect and process it.
- Inaccurate results: ChatGPT spews out inaccuracies and cannot be relied upon as a source of truth.
- Inappropriate for children: ChatGPT lacks an age verification mechanism, which exposes children to receiving responses that are ‘absolutely inappropriate to their age and awareness’.
How is access being blocked? In compliance with the Italian data protection authority’s order, OpenAI geoblocked access to ChatGPT for anyone residing in Italy. It also issued refunds to Italian residents who had upgraded to ChatGPT Plus.
However, OpenAI’s API – the interface that allows other applications to interact with it – and Microsoft’s Bing – which also uses ChatGPT – are still accessible in Italy.
ChatGPT disabled for users in Italy
Dear ChatGPT customer,
We regret to inform you that we have disabled ChatGPT for users in Italy at the request of the Italian Garante.
We are issuing refunds to all users in Italy who purchased a ChatGPT Plus subscription in March. We are also temporarily pausing subscription renewals in Italy so that users won’t be charged while ChatGPT is suspended.
We are committed to protecting people’s privacy and we believe we offer ChatGPT in compliance with GDPR and other privacy laws. We will engage with the Garante with the goal of restoring your access as soon as possible.
Many of you have told us that you find ChatGPT helpful for everyday tasks, and we look forward to making it available again soon.
If you have any questions or concerns regarding ChatGPT or the refund process, we have prepared a list of Frequently Asked Questions to address them.
– The OpenAI Support Team
What’s the response from users? The reactions have been mixed. Some users think this is shortsighted, since there are other ways in which ChatGPT can still be accessed. (One of them is using a VPN, a secure connection that masks a user’s actual location. So if an Italian user chooses a different location through their VPN, OpenAI won’t realise that the user is actually connecting from Italy. This won’t work for users wanting to upgrade: OpenAI has blocked upgrades involving credit cards issued to Italian users or accounts linked to an Italian phone number.)
Others think this is a precursor to what other countries will do. They think that if a company is processing data in breach of the rules (in Europe, that’s the GDPR), then it might be required to revise its practices before it continues offering its services.
How temporary is ‘temporary’? What happens next depends on two things: the outcomes of the investigation into the recent breach, and whether (and how) OpenAI will reform any of its practices. Let’s revisit the list of grievances:
Personal data breach: Nothing can reverse what happened, but OpenAI can put tighter security controls in place to prevent further incidents. Once authorities are convinced that stricter precautions have been taken, there’s no reason for the ban to remain in place on this ground alone.
Unlawful data collection: This is primarily a legal issue. So let’s say an Italian court confirms that the way the data was collected was illegal (it would take a great deal of effort to establish this, as OpenAI’s model is proprietary, i.e. not open to the public to inspect). OpenAI is not an Italian company, so the court will have limited jurisdiction over it. The most it can do is impose a hefty fine and turn the ban into a semi-permanent one. Will it have achieved its aim? No, as Italian users will still be able to interact with the application. Will it create momentum for other governments to consider guardrails or other forms of regulation? Definitely.
Inaccurate data: This issue is the most complex. If by inaccurate we mean incorrect information, the software is improving significantly with every new iteration: compare GPT-4 with its predecessor, GPT-3.5 (or even the current GPT-4 model with the version available at launch). But if we mean biased or partial data, the evolution of AI-based software shows us how inherent this issue is to its foundations.
Inappropriate for children: New standards in age verification are a work in progress, especially in the EU. These won’t come any time soon, but when they do, they will be an important step in really limiting what underage users have access to, making it much harder for kids to access platforms that aren’t meant for them. As for the appropriateness of content, authorities are working on strategies to reel in bigger fish (TikTok, Facebook, Instagram) in the bigger internet pond.
// AI //
UNESCO urges governments to implement ethical AI framework
UNESCO has called upon governments to implement its Recommendation on the Ethics of Artificial Intelligence immediately.
Director-General Audrey Azoulay said the ethical issues raised by AI technology – especially discrimination, gender inequality, fake news, and human rights breaches – are concerning.
‘Industry self-regulation is clearly not sufficient to avoid these ethical harms, which is why the recommendation provides the tools to ensure that AI developments abide by the rule of law, avoiding harm, and ensuring that when harm is done, accountability and redressal mechanisms are at hand for those affected.’
Stop right there! Three blows for ChatGPT
The first is that Elon Musk and a group of AI experts and industry leaders are calling for a six-month moratorium on the development of systems more powerful than OpenAI’s newly released GPT-4 due to potential risks to society. Over 50,000 people have signed the open letter.
The second is that the Center for Artificial Intelligence and Digital Policy has filed a complaint with the US Federal Trade Commission to stop OpenAI from issuing new commercial releases of GPT-4, due to concerns about the software’s ‘biased, [and] deceptive’ nature, which is ‘a risk to privacy and public safety’.
The third is a new report from Europol, which sounds the alarm about the potential misuse of large language models (the likes of ChatGPT, Bard, etc.). For instance, criminals can misuse the software to generate convincingly authentic content for their phishing attempts, giving them an edge. The agency recommends that law enforcement agencies get ready.
(It’s actually four blows if we count Italy’s temporary ban.)
Indian judge consults ChatGPT on bail in murder case
A murder case in India made headlines last week when the Punjab and Haryana High Court used ChatGPT while responding to an application for bail in the ongoing case. Justice Anoop Chitkara asked the AI tool: ‘What is the jurisprudence on bail when the assailants assaulted with cruelty?’ The chatbot considered the presumption of innocence and stated that if the accused has been charged with a violent crime that involves cruelty, they may be considered a risk to the community. The judge clarified that the chatbot was not used to determine the outcome but only ‘to present a broader picture on bail jurisprudence, where cruelty is a factor.’
// CYBERSECURITY //
Vulkan files: Leaked documents reveal Russia’s cyberwarfare plans
A trove of leaked documents, dubbed the Vulkan files, has revealed Russia’s cyberwarfare tactics, according to journalists from 11 media outlets, who say the authenticity of the files has been confirmed by five intelligence agencies.
The documents show that consultancy firm Vulkan worked for Russian military and intelligence agencies to support Russia’s hacking operations and the spread of disinformation. They also link a cyberattack tool developed by Vulkan with the hacking group Sandworm, to which the USA has attributed various attacks, such as NotPetya.
The documents include project plans, contracts, emails, and other internal documents dated between 2016 and 2021.
Pro-Russian hacktivists launch DDoS attacks on Australian organisations
Australian universities have been targeted by distributed denial-of-service (DDoS) attacks in recent months. Web infrastructure security company Cloudflare reported that Killnet and Anonymous Sudan – hacktivist groups with pro-Russia sympathies – carried out the attacks on several organisations in Australia.
Killnet has a record of targeting governments and organisations that openly support the Ukrainian government. Since the start of the Ukraine war, the group has been associated with attacks on the websites of the European Parliament, airports in the USA, and the healthcare sectors in Europe and the USA, among others.
Cyberattacks on Ukraine on the rise, CERT-UA says
Ukraine’s computer emergency response team (CERT-UA) has recorded a spike in cyberattacks on Ukraine since the start of the year. The 300+ cyber incidents processed by CERT-UA are almost twice as many as during the corresponding period last year, when Russia was preparing for a full-scale invasion.
In a Telegram message, Ukraine’s State Special Communications Service said that Russia’s aim is to obtain as much information as possible that can give it an advantage in a conventional war against Ukraine.
// ANTITRUST //
Google to Microsoft: Your cloud practices are anti-competitive
It’s been a while since Big Tech engaged in a public squabble, so when Google accused Microsoft’s cloud practices of being anti-competitive last week, we thought the growing rivalry spurred by ChatGPT had reached new levels.
In comments to Reuters, Google Cloud vice president Amit Zavery said Google Cloud had filed a complaint with regulatory bodies and has asked the EU’s antitrust watchdog ‘to take a closer look’ at Microsoft. In response, Microsoft retorted that Google itself was in the lead in the cloud services sector. We’re wondering: Could this be a hint that it’s actually Google that merits greater scrutiny?
// CONTENT POLICY //
New US bill aims to strengthen news media negotiations with Big Tech
US lawmakers have reintroduced a bill to help news media in their negotiations with Big Tech, after a failed attempt during the last congressional session. The bipartisan bill – the Journalism Competition and Preservation Act – would create a four-year safe harbour period for news organisations to negotiate terms such as revenue sharing with tech companies like Facebook and Google.
Lawmakers are taking advantage of momentum gathered from a similar development in Canada, where the Online News Act, or Bill C-18, is currently being debated in Parliament. Reacting to the Canadian draft rules, Google and Meta threatened to block news content (leaving Reporters Without Borders reeling; Google went through with its threat). We’re wondering whether Google – or any other Big Tech entity – will do the same in the USA.
Eastern European governments call on tech companies to fight disinformation
The prime ministers of Moldova, the Czech Republic, Slovakia, Estonia, Latvia, Lithuania, Poland and Ukraine have signed an open letter which calls on tech firms to help stop the spread of false information.
Some of the proposed actions include: refraining from accepting payments from those previously sanctioned, improving the accuracy and transparency of algorithms (rather than focusing on promoting content), and providing researchers free or affordable access to platforms’ data to understand the tactics of manipulative campaigns.
Internet Archive’s digital book lending violates copyright laws, US judge rules
A US judge has ruled that the Internet Archive’s digital book lending program violates copyright, potentially setting a legal precedent for future online libraries. The initiators of the case, the Association of American Publishers, argued that the program infringed on their authors’ exclusive rights to reproduce and distribute their works.
Although the Internet Archive based its defence on the principle of fair use, Judge John G. Koeltl disagreed, as the platform’s practice impacts publishers’ income from licensing fees for paper and e-book versions of the same texts. The judge said that the Open Library’s practice of providing full access to those books without obtaining permission from the copyright holders violated copyright rules. The Internet Archive is preparing an appeal, but until then, it can’t lend new scanned library material.
4 April: The European Broadcasting Union’s Sustainability Summit 2023 will focus on green streaming and other environment-friendly practices in digital broadcasting.
4–5 April: The International Association of Privacy Professionals (IAPP) Global Privacy Summit will gather privacy practitioners to discuss current regulatory challenges. (Sadly, ticket prices are prohibitively high.)
Diplo and the Geneva Internet Platform are organising two events this week:
- 4 April: Vulnerabilities in digital products: How can humans decrease risks for humans?
- 4 April: How to Train Diplomats to Deal With AI and Data?
Join us online!
AI governance: Terminator movie director says we might already be too late
AI has become an integral part of modern life, but with its increasing prevalence, James Cameron, the director of the iconic Terminator movies, warns that humans are facing a titanic battle (pun intended) for control over the technology. Cameron urges governments to create ethical standards for AI before it’s too late. (Note: Articles on this podcast were making the rounds last week, but the podcast itself is from December.)