ISO and IEC issue new standard for AI risk management
ISO and IEC have released a new standard, ISO/IEC 23894:2023, focusing on AI risk management. It provides guidance for organisations involved in AI development, deployment, and usage on how to manage AI-related risks effectively. The standard aims to help integrate risk management into AI activities, describing processes for its implementation and integration and emphasising systematic policies, procedures, and practices for risk assessment and monitoring.
The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have published a new standard providing guidance on risk management in artificial intelligence (AI). Titled ISO/IEC 23894:2023 Information technology – Artificial intelligence – Guidance on risk management, the standard explains how organisations that develop, produce, deploy, or use products, systems, and services involving AI can manage AI-related risks.
With the goal of assisting organisations in integrating risk management into their AI-related activities, the standard also describes processes for the effective implementation and integration of AI risk management. In this context, risk management is described as the systematic application of policies, procedures, and practices to the activities of communicating and consulting; establishing the context; and assessing, treating, monitoring, reviewing, recording, and reporting risk.
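To make the shape of that process concrete, the sketch below shows one hypothetical way an organisation might track those activities against individual AI-related risks. It is an illustrative assumption only: the class, field, and activity names are invented for this example and are not terminology or tooling defined by ISO/IEC 23894:2023.

```python
# Illustrative sketch only: a hypothetical risk register that tracks the
# activities named in the guidance (communicating and consulting, establishing
# the context, assessing, treating, monitoring, reviewing, recording, and
# reporting risk). All names here are assumptions, not defined by the standard.
from dataclasses import dataclass, field
from enum import Enum, auto


class Activity(Enum):
    """Risk management activities referred to in the guidance."""
    COMMUNICATION_AND_CONSULTATION = auto()
    ESTABLISHING_CONTEXT = auto()
    RISK_ASSESSMENT = auto()      # identification, analysis, evaluation
    RISK_TREATMENT = auto()
    MONITORING_AND_REVIEW = auto()
    RECORDING_AND_REPORTING = auto()


@dataclass
class RiskRegisterEntry:
    """One AI-related risk tracked through the process."""
    risk_id: str
    description: str
    ai_system: str                              # the AI product or service concerned
    completed: set[Activity] = field(default_factory=set)

    def record(self, activity: Activity) -> None:
        """Mark an activity as having been carried out for this risk."""
        self.completed.add(activity)

    def outstanding(self) -> set[Activity]:
        """Activities not yet applied to this risk."""
        return set(Activity) - self.completed


if __name__ == "__main__":
    entry = RiskRegisterEntry(
        risk_id="AI-001",
        description="Training data bias leading to unfair outcomes",
        ai_system="credit-scoring model",
    )
    entry.record(Activity.ESTABLISHING_CONTEXT)
    entry.record(Activity.RISK_ASSESSMENT)
    print(sorted(a.name for a in entry.outstanding()))
```

The point of the sketch is simply that the standard frames risk management as a repeatable set of activities applied to each risk, rather than a one-off assessment.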
The standard is the result of work carried out within Joint Technical Committee ISO/IEC JTC 1 on information technology – Subcommittee SC 42 on AI.