EU’s AI Act: MEPs forge ahead with high-risk AI classification despite legal concerns

Leading MEPs have unveiled a revised version of the legislation’s high-risk AI system classification, maintaining the filter-based approach despite legal concerns.


Prominent Members of the European Parliament (MEPs) responsible for shaping the EU’s AI Act have introduced a revised version of the legislation’s framework for classifying high-risk AI systems. The original proposal automatically classified AI systems used in predefined critical areas as high-risk; the recent push to carve exemptions out of that automatic classification has drawn criticism from the European Parliament’s legal service.

Nevertheless, the updated version largely retains the horizontal exemption conditions. These criteria now cover AI systems that have only minimal influence on decision outcomes, systems that merely enhance human activities, systems that detect decision-making patterns, and systems that perform purely preparatory tasks within the critical use cases.

AI systems used to profile individuals remain classified as high-risk. The new text empowers market surveillance authorities to assess AI systems and impose fines for misclassification. It also grants the European Commission the power to adjust the criteria under certain circumstances, with the goal of keeping the law’s protections comprehensive.

Why does this matter?

The AI Act is a landmark piece of EU legislation designed to regulate the use of AI according to the risks it poses. The dispute over classification criteria underscores how difficult it is to draft clear, unambiguous rules for AI governance: MEPs have chosen to retain the filter-based approach even in the face of a legal opinion that expressed reservations. The AI Act is still a draft and is expected to undergo further revision and negotiation.