Dear readers,
Welcome to another issue of the Digital Watch weekly!
This week saw exciting developments in AI governance in Europe as the Council of Europe’s Committee of Ministers adopted the Convention on AI and human rights, while the European Council gave final approval to the EU AI Act. Sounds a bit confusing at first glance, doesn’t it?
Firstly, these are different bodies: the Council of Europe is a European human rights organisation, while the European Council is one of the executive bodies of the EU. Secondly, and logically, the two documents in question are also different.
The Council of Europe adopted a Framework Convention on AI and human rights, democracy, and the rule of law. The convention encompasses activities throughout the AI system lifecycle that have the potential to impact – you guessed it – human rights, democracy, and the rule of law. The convention adopts a risk-based approach, similar to the EU AI Act, meaning the higher the potential harm to society, the stricter the regulations. This approach applies to the entire lifecycle of AI systems, from design and development to use and decommissioning. The convention is international in nature, and it is open to non-European countries as well. It will be legally binding for signatory states.

Its scope is layered: while signatories are to apply the convention to ‘activities undertaken within the lifecycle of AI systems undertaken by public authorities or private actors acting on their behalf’, they are left to decide whether to apply it to activities of private actors as well or to take other measures to meet the convention’s standards in the case of such actors. Key provisions include transparency and oversight requirements tailored to specific contexts and risks. The convention’s provisions do not extend to national security activities. Concerns have been raised regarding the convention’s effectiveness, with questions about whether it primarily reaffirms existing practices rather than introducing substantive regulatory measures.
The European Council gave final approval to the EU AI Act. We’ve written about the EU AI Act at length. It aims to promote the development and adoption of safe and trustworthy AI systems within the EU’s single market. It adopts a risk-based approach, meaning the higher the potential harm to society, the stricter the regulations. The act applies exclusively to areas within EU jurisdiction, with exemptions for military, defence, and research purposes. The EU AI Act offers no such flexibility: AI providers who place AI systems or general-purpose AI models on the EU market must comply with its provisions, and so must providers and deployers of AI systems whose outputs are used within the EU.
The verdict? The convention takes a broader perspective and is less detailed than the EU AI Act, both in its provisions and in its enforcement mechanisms. While the two overlap in places, they are distinct and complementary documents.
More US-China tensions, more national investments in the chip industry, and more AI-related launches round off the issue.
Next week, all eyes will be on the World Summit on the Information Society (WSIS)+20 Forum High-Level Event and the AI for Good Global Summit. We’ll bring you reports from selected sessions, so be sure to bookmark the hyperlinked pages.
Andrijana and the Digital Watch team
Highlights from the week of 10-17 May 2024
Tech companies have agreed on a set of safety commitments on AI at a second global safety summit led by South Korea and the UK.
The company emphasises user data privacy, assuring that the data remains private and is stored solely on the user’s device.
The company aims to prioritise factual information while striking a balance between creativity and accuracy in language models.
Funding comes from EU programs, the Flanders government, and industry players.
The country’s share of the global fabless sector stands at 1%, highlighting a gap with leading players like Taiwan’s TSMC.
The warning has raised concerns about the security of commercial and military data.
The US Justice Department and TikTok argue that the public has a significant interest in resolving this matter quickly due to the large number of TikTok users.
The United States Senate has passed a joint resolution calling for the Securities and Exchange Commission (SEC) to reverse a rule affecting financial institutions dealing with cryptocurrency firms.
Speaking at the ID4Africa event in Cape Town, a World Bank advisor noted that Zambia digitised 81% of its paper ID cards in three months and aims to complete the process by July.
Efforts include expanding WiFi hotspots, reducing data costs, promoting digital skills training, and leveraging new technologies.
ICYMI
Reading corner
The declaration is the outcome of the AI Seoul Summit 2024, where world leaders gathered in response to the rapid advancements in AI since the first summit in November 2023.
The TikTok legal saga highlights the complex interplay between technology, law, and geopolitics. As digital sovereignty and data privacy become increasingly important, the outcome of TikTok’s legal battles will have significant implications for the global tech industry.
Diplo has launched a new online literacy course, Building the future we need (with Anita Lamprecht), aligning with the UN’s 2024 Summit of the Future. In the blog post Speaking of futures, Diplo faculty member and linguist Biljana Scott explores how language shapes our perception of the future, highlighting the importance of recognising unconscious biases.
Upcoming
The WSIS+20 Forum High-Level Event, part of the World Summit on the Information Society, will be held on 27-31 May 2024. It aims to review progress related to information and knowledge societies, share best practices, and build partnerships.
The AI for Good Global Summit will be held on 30-31 May 2024 in Geneva, Switzerland. This event, part of the AI for Good platform, will focus on identifying practical AI applications to advance the SDGs globally.