UK officially labels AI as chronic risk
This move underscores the need to address AI-related threats, including cyberattacks and broader cybersecurity challenges.
In the 2023 National Risk Register (NRR), the UK has officially categorised AI as a ‘chronic risk’ to its national security. The comprehensive NRR document acknowledges AI as a persistent long-term threat that could affect the safety, security, and critical systems of the nation. This classification positions AI alongside enduring risks, setting it apart from immediate dangers such as terrorist attacks.
The NRR recognises the potential security implications of advanced AI technology if it were used to launch cyberattacks against the UK. The document also notes the cybersecurity complexities linked to the evolving capabilities of AI, including generative AI. While AI presents opportunities across diverse sectors, the UK government stresses that these same capabilities leave the country exposed to significant hazards.
In response to these challenges, the UK government has committed to organising the first worldwide summit on AI safety. The summit will bring together key countries, leading technology firms, and researchers to define safety protocols for evaluating and supervising AI-related risks. The National AI Strategy, released in 2021, outlines the nation's shift towards an AI-driven economy and underscores the importance of research and development, alongside the necessary governance structures, in facilitating this transition. Moreover, a central risk mechanism will be established to monitor AI-related risks, aiming to harness the advantages of AI while mitigating its potential adverse consequences. Although the NRR does not provide an exhaustive analysis of AI's hazards, it does point to concerns about disinformation and economic vulnerabilities.
Why does this matter? AI and its governance have featured prominently on the agenda of bilateral and multilateral processes within the EU and beyond, and many countries have begun attempts to regulate AI and mitigate its risks. With the NRR classification, the UK recognises AI as a strategic risk for the first time, with potential implications spanning misinformation and economic competitiveness. However, the report has drawn criticism for its lack of detail on AI risks, prompting calls for better monitoring of AI's impact.