OECD releases AI Incidents Monitor to address AI challenges with evidence-based policies
The AI Incidents Monitor, launched by the OECD, seeks to tackle AI challenges through evidence-based policies. Its goal is to offer comprehensive policy analysis and data across various disciplines, shedding light on AI’s impacts and fostering discussions to shape informed AI policies.
The OECD.AI Observatory has released a beta version of the AI Incidents Monitor (AIM), a new tool for assessing the scope and depth of AI-related issues. AIM uses a media monitoring platform that scans 150,000 news sources worldwide in real time, collecting over one million news articles daily to extract data on AI incidents.
The initiative is part of the OECD’s broader efforts to deepen insights into AI’s transformative power and its implications for our economies and societies.
AIM can be particularly useful for evidence-based policymaking, which uses empirical evidence to inform and improve policy decisions. By integrating AI with research methodologies and prioritising evidence-based approaches, policymakers can harness data-driven insights to develop effective policies.
Source: OECD.AI Policy Observatory
Why does it matter?
The AI Incidents Monitor is a valuable tool, combining the resources of the OECD and its partners to provide data, inform policymakers, and guide progress towards trustworthy AI. AIM focuses on AI incidents: risks posed by AI that have materialised as actual events. Given the widespread use of AI across diverse sectors, an increase in recorded incidents is expected.
To monitor and mitigate these risks, stakeholders require a clear but adaptive definition of AI incidents. The new tool includes research and insights on incident definitions and practices in AI-specific and cross-disciplinary contexts. Additionally, the use of AI will give policymakers the option to simulate various policy strategies, helping them anticipate future impacts and revise plans if necessary.
By leveraging the power of AI, policymakers can better make sense of the complexity of an AI-driven world. The tool will help them address social issues with relevant policies in a transparent, inclusive, and responsible manner.
The news comes as the EU and the G7 countries recently agreed on a voluntary code of conduct for companies developing the most advanced AI systems, built on the ‘Hiroshima process’ and the ‘OECD AI Principles’. The document is a response to recent developments in frontier AI models and is intended to help nations seize the benefits of these technologies while addressing the risks and concerns they pose.