Regulating emerging technologies: Artificial intelligence and data governance
22 Jun 2022 10:30h - 11:15h
Event report
Regulating AI is a complex endeavour. Initiatives need not only to cater to cultural differences, but also to adopt a holistic approach, from design to deployment and compliance. The global debate on principles for trustworthy AI could be complemented by discussions at the regional and national levels, seeking to translate principles into concrete guidelines for policymakers.
Mr Gianclaudio Malgieri (Associate Professor of Law & Technology at the EDHEC Business School in Lille, France) started the session by making a distinction between having trust in AI and having trustworthy AI. ‘Trust’ is a market-related notion. However, it is not enough that consumers have trust in AI: the technology itself needs to be trustworthy. Trustworthiness can be assessed only through alignment with the values and normative principles designed by political bodies and regulators. Malgieri explained why the European Union AI Act is needed despite the existence of comprehensive norms such as the General Data Protection Regulation (GDPR). AI may have consequences for individuals beyond personal data processing and without personal identification. He added that the Act sets a political threshold for what is acceptable in the AI market, beyond the principles of fairness and lawfulness enshrined in the GDPR.
Malgieri explained that the proposed AI Act follows a risk-assessment approach: risks are predetermined by the legislator and classified as acceptable, variable, or prohibited. Prohibited risks include cases in which AI causes physical or psychological harm to vulnerable groups (e.g., the exploitation of individuals’ vulnerability based on age or disability). Malgieri criticised the regulation’s failure to include ‘economic harm’ and argued for broadening the definition of ‘vulnerable’ groups, which currently covers only age and disability. In the case of AI applications considered high risk, such as credit scoring, the AI Act requires human oversight in the design and development of AI.
Ms Golestan (Sally) Radwan (International AI expert and PhD candidate at Royal Holloway, University of London) focussed on the implementation aspect of AI governance strategies. She noted that multilateral and global initiatives to draft AI principles are very positive due to their inclusiveness. At the same time, because these initiatives are based on consensus and compromise, the result is usually a watered-down text. Radwan therefore believes that global initiatives need to be complemented by regional documents of a more concrete nature, such as toolboxes targeted at policymakers. Countries are very diverse, and each needs to establish how the global guidelines apply to its specific context. Radwan suggested that regional dialogue happen in parallel with cross-regional dialogue that identifies common points, because AI is a cross-border technology and contradictory regulations in different countries would hinder innovation.
Mr Thomas Schneider (Head of International Affairs in the Federal Office of Communications (Switzerland), Chair of the Committee on Artificial Intelligence of the Council of Europe, and moderator of the session) affirmed that the treaty proposed by the Council of Europe is a value-based document: countries adhere to shared values, while cultural differences can be accommodated at the implementation stage.
Mr Ryan Carrier (DataEthics4All Top 100 DIET Champion 2021 and Executive Director at ForHumanity) focussed on the issues of auditing and compliance. He explained the mission of ForHumanity, a public charity that examines the effects of AI and automation on jobs, society, rights, and freedoms. Carrier stated that experience gathered in the field of financial auditing could be useful in the area of AI: in finance, independent auditing, transparency, oversight, and governance mechanisms have significantly enhanced trust. He clarified that the risks related to AI are multifaceted, including bias, privacy, and security, and recommended a holistic approach that examines risks from design to deployment. Auditors need to be trained to identify anomalies and have procedures in place to address them. They also need to follow a harmonised approach to facilitate compliance by design; therefore, certification for auditors is important.
By Marilia Maciel