UN AI resolution: a significant global effort to harness AI for sustainable development
The UN adopted its first, ground-breaking resolution on AI on 21 March, calling on member states to ensure that ‘safe, secure, and trustworthy AI systems’ are developed responsibly and respect human rights and international law.
On 21 March, the United Nations General Assembly (UNGA) overwhelmingly passed the first global resolution on AI. Member states are urged to protect human rights and personal data and to monitor AI for potential harms, so the technology can benefit all.
The unanimous adoption of the US-led resolution on the promotion of ‘safe, secure, and trustworthy artificial intelligence systems that will also benefit sustainable development for all’ is a historic global effort to ensure the ethical and sustainable use of AI. While nonbinding, the draft resolution was supported by more than 120 states, including China, and endorsed without a vote by all 193 UN member states.
Vice President Kamala Harris praised the agreement, stating that this ‘resolution, initiated by the USA and co-sponsored by more than 100 nations, is a historic step towards establishing clear international norms for AI and fostering safe, secure, and trustworthy AI systems’.
To unpack the significance of this resolution and its potential impact on AI policies, we will look at five dimensions: policy and regulation in the global context, ethical design, data privacy and protection, transparency and trust, and AI for sustainable development.
Global policy and regulation
EU policymakers have paved the way with the recently approved AI Act, the first comprehensive legislation covering the new technology. The Council of Europe (CoE), a 46-member human rights body, has also agreed on a draft AI Treaty to protect human rights, democracy, and the rule of law.
The United States wants to play a leadership role in shaping global AI regulations. Last October, President Biden unveiled a landmark Executive Order on ‘Safe, Secure, and Trustworthy AI’ and in March, VP Harris announced a new policy of the White House Office of Management and Budget for federal agencies’ use of AI.
Other countries and regions are also developing their own frameworks, guidelines, strategies, and policies. For instance, at the African Union (AU) level, its Development Agency (AUDA) released a White Paper in March outlining a pan-African AI policy and a continental roadmap.
The UN resolution acknowledges that multiple initiatives may lead the way in the right direction and further encourages member states, international organisations, and others to assist developing countries in their national processes.
Ethical design
Ethical design is a critical aspect of promoting safe, secure, and trustworthy AI systems, and the text highlights the need for it in all AI-based decision-making systems (6.b, p5/8). AI systems should be designed, developed, and operated within the frameworks of national, regional, and international laws to minimise risks and liabilities and to preserve human rights and fundamental freedoms (5., p5/8). To that end, the resolution urges member states and other stakeholders to integrate ethical considerations into the design, development, deployment, and use of AI to safeguard human rights and fundamental freedoms, including the rights to life, privacy, and freedom of expression. A collaborative approach combining AI, ethics, law, philosophy, and the social sciences can help craft comprehensive ethical frameworks and standards to govern the design, deployment, and use of AI-powered decision-making tools.
Introducing the draft, Linda Thomas-Greenfield, US Ambassador and Permanent Representative to the UN, added that ‘AI should be created and deployed through the lens of humanity and dignity, safety and security, human rights, and fundamental freedoms’.
Data privacy and protection
The UN resolution addresses data privacy safeguards to guarantee safe AI development, especially when the data used is sensitive personal information such as health, biometric, or financial data. Member states and relevant stakeholders are encouraged to monitor AI systems for risks and to assess their impact on data security and personal data protection throughout the systems’ life cycle (6.e, p5/8). Privacy impact assessments and detailed product testing during development are suggested as mechanisms to protect data and preserve fundamental privacy rights. Additionally, transparency and reporting obligations, in accordance with all applicable laws, contribute to safeguarding privacy and protecting personal data (6.j, p6/8).
Transparency and trust
The document highlights the value of transparency and consent in AI systems. Transparency, inclusivity, and fairness help AI systems respond to our diverse needs, preferences, and emotions.
To preserve fundamental human rights, algorithms that affect our lives must be developed in ways that cause no harm to people or the environment. This includes providing notice and explanation, promoting human oversight, and ensuring that automated decisions are reviewed. When necessary, human decision-making alternatives should be accessible, as should effective redress.
Transparent, interpretable, predictable, and explainable AI systems facilitate reliability and accountability, allowing end-users to better understand, accept, and trust outcomes and decisions that impact them.
AI for sustainable development
The resolution affirms that safe, secure, and trustworthy AI systems can accelerate progress toward achieving all 17 sustainable development goals (SDGs) across all three dimensions – economic, social, and environmental – in a balanced way.
AI technologies can be a driving force in achieving the SDGs by augmenting human intelligence and capabilities, improving efficiency, and reducing environmental impact. For instance, AI models can predict and detect faults, plan more effectively, and boost the efficiency of renewable energy. AI can also streamline transportation and traffic management and anticipate energy needs and production. Conversely, any AI system designed, developed, deployed, and used without proper safeguards poses potential threats that could hamper progress toward the 2030 Agenda and its SDGs.
The aim is to reduce the digital divide between wealthy industrialised nations and developing countries, and within countries, so that all nations have proper representation in discussions on AI governance for sustainable development. The intention is also to ensure that less developed nations have access to the technology, infrastructure, and capabilities needed to reap the promised gains of AI, such as disease detection, flood forecasting, effective capacity building, and a workforce upskilled for the future.
The UN resolution is a remarkable step in global AI policy because it addresses many of the key drivers for AI to play a safe and effective role in sustainable development that will benefit all. It also recognises that innovation and regulation, far from being mutually exclusive, complement and reinforce one another.
By following up on the current consensus, implementing these recommendations, and aligning them with other regional and global initiatives, governments, public and private sectors, and other involved stakeholders can harness AI’s potential while minimising its risks.
The road ahead for global AI governance
South Korea will co-host the second AI Safety Summit with the UK as a virtual conference in May, and six months later France will hold the next in-person global gathering, after Prime Minister Rishi Sunak led the inaugural AI Safety Summit at Bletchley Park last November.
By September 2024 and the Summit of the Future in New York, more important developments in global AI policy and governance can be expected.
One is the work in progress from the UN ‘High-Level Advisory Body on AI’, which will lead to a final report. This will progress in parallel with and feed into the long-awaited Global Digital Compact process.
Another one will be the formal adoption of the CoE ‘Convention on AI, Human Rights, Democracy, and the Rule of Law’ and its subsequent ratification process open to member and non-member states.
On the EU side, the European Commission has started staffing and structuring the newly established AI Office. The EU AI Act was adopted by the European Parliament and awaits the EU Council’s formal approval. The AI Act will enter into force 20 days after its publication in the Official Journal, with phased implementation and enforcement: after 6 months, prohibitions on unacceptable-risk AI practices take effect; after 12 months, obligations for providers of general-purpose AI systems apply and member states should have designated their relevant national authorities; and after 24 months, the legislation becomes fully applicable.
In Africa, the African Union Commission has begun holding a series of online consultations with diverse stakeholders across the continent to gather input and inform the development of an Africa-wide AI policy, with a focus on ‘building the capabilities of AU member states in AI skills, research and development, data availability, infrastructure, governance and private sector-led innovation’.
The rapid advance of AI technologies poses new challenges for legislators around the world since existing rules struggle to keep up with the acceleration of technical progress. This demonstrates the critical need for regulatory frameworks that can adapt to AI’s evolving landscape.
The governance of AI systems requires ongoing discussions on appropriate approaches that are agile, adaptable, interoperable, inclusive, and responsive to the needs of both developed and developing countries. The UNGA resolution opens the door to global cooperation on safe, secure, and trustworthy AI for sustainable development that benefits all.