Developers and governments reach agreement on testing new AI models at UK’s AI Safety Summit
The participants announced a shared objective of supporting public trust in AI safety, initially through an increased emphasis on AI safety testing and research. They noted that comprehensive and effective evaluation processes present a complex technical challenge, and that collaboration will be important to advance the frontier of knowledge and best practice in this area.
On the second day of the UK’s AI Safety Summit, leading AI developers and governments reached an agreement on testing the next generation of AI models to help manage the risks associated with this rapidly developing technology. The agreement covers the testing of models both pre- and post-deployment to manage risks related to security, safety, and societal harms. It builds on the principles outlined in the Bletchley Declaration on AI Safety, signed by 28 nations during the summit’s first day. Notably, the Chinese delegation, although a signatory of the Bletchley Declaration, did not endorse the testing agreement.
Under the agreement, developers are responsible for designing and carrying out safety testing, employing evaluations, transparency, and other suitable measures alongside technical methods to reduce risks and address vulnerabilities. Other actors, such as deployers and users, also carry responsibility for ensuring the secure use of cutting-edge AI systems.
The summit brought together approximately 100 politicians, academics, and tech executives, who discussed strategies and plans for this rapidly evolving and transformative technology, with some hoping to establish an independent body to provide global oversight of AI development and ensure responsible practices. Prime Minister Sunak himself emphasised the importance of not relying solely on self-assessments by the companies developing AI models, and inaugurated the AI Safety Institute to build public sector capability for safety testing and AI safety research.