Stricter rules and prohibited practices: Unveiling the EU AI Act’s regulatory framework
The EU's eagerly anticipated AI Act, now approved by two European Parliament committees, introduces new rules on facial recognition, biometric surveillance, and a range of other AI applications.
The AI Act, legislation aimed at regulating the use of AI and preventing its harmful effects, has received approval from the European Parliament's Civil Liberties and Internal Market committees. With this significant step, the proposal progresses to a plenary vote in June, after which the final stage of the legislative process begins: negotiations with the EU Council and Commission.
The primary objective of the AI Act is to safeguard people's health, safety, and fundamental rights while mitigating the risks that AI can pose. To achieve this, the act provides a clear definition of AI and prohibits practices deemed to carry unacceptable risk, including manipulative techniques, social scoring, and the use of emotion recognition software in law enforcement. The legislation also introduces a tiered approach to regulating general-purpose AI, foundation models, and generative AI models.
One of the key aspects of the regulation is the establishment of a more stringent regime for high-risk AI applications. The legislation also outright bans certain applications, such as manipulative techniques and social scoring, because their risks are deemed unacceptable. The list of prohibited practices has been expanded to include biometric categorisation, predictive policing, and the compilation of facial image databases.
For general-purpose AI, the AI Act adopts a tiered approach. Although general-purpose AI is not in itself classified as high-risk, economic operators who integrate it into high-risk applications must fulfil additional obligations. The legislation emphasises that general-purpose AI providers must share information with downstream operators and support their compliance efforts.
High-risk AI applications face a more rigorous classification process under the AI Act. To be considered high-risk, an application must not only fall within the listed critical areas and use cases but also pose a significant risk to people's health, safety, or fundamental rights. The legislation sets out obligations and penalties for misclassification and specifically identifies certain sectors and platforms as falling within the high-risk category.
To ensure responsible AI usage, the AI Act imposes specific obligations on providers of high-risk AI, covering risk management, data governance, technical documentation, and record keeping. Users of high-risk AI solutions must also conduct a fundamental rights impact assessment that takes into account potential negative effects on marginalised groups and on the environment.