Artificial Intelligence, Justice and Human Rights
Event report
This side event of the Human Rights Council's 36th session, organised by the Permanent Observer Mission of the Holy See to the UN and other international organisations in Geneva, together with the Permanent Mission of the Principality of Liechtenstein in Geneva, discussed the potential impact of artificial intelligence (AI) on justice systems and human rights.
The panel was opened by Mr Eric Salobir, President of OPTIC, who emphasised that the link between justice and AI is no longer confined to science fiction: AI has already been tested and employed in judicial systems.
In his opening remarks, H.E. Archbishop Ivan Jurkovic, Permanent Observer of the Holy See Mission, spoke about the importance of considering human dignity in discussions on AI, as well as the risk of machines substituting humans in certain key areas, such as education. H.E. Ambassador Peter Matt, Permanent Representative of Liechtenstein, explained that AI encompasses both opportunities and threats, especially in relation to the human rights to privacy and non-discrimination. He added that addressing these challenges effectively requires multistakeholder engagement.
Next, Prof. Pierre Vandergheynst, Professor at the École Polytechnique Fédérale de Lausanne, provided an introduction to AI and the ways it could be applied to the judicial system. Although AI is not a new concept, today it is mostly understood as machine learning: algorithms that learn patterns from data. Ultimately, 'whoever controls data, controls AI'. AI's predictive power comes from its ability to model the path from raw data to a final outcome.
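The kind of pipeline Vandergheynst describes can be made concrete with a minimal sketch. The Python example below is purely illustrative: the case 'features', the labels, and the data itself are synthetic assumptions, not drawn from any real judicial dataset or from the panel. It simply shows an algorithm learning a mapping from raw data to an outcome, so that whoever supplies the data determines what the model learns.

```python
# A minimal sketch (entirely hypothetical data) of a machine-learning
# pipeline that learns a mapping from raw case data to a final outcome.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical historical cases: each row is a case encoded as numeric
# features (e.g. offence severity, prior record, age); the label is the
# recorded verdict (1 = conviction, 0 = acquittal).
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model's 'knowledge' is entirely determined by the data it is given:
# change the data, and the predicted outcomes change with it.
model = LogisticRegression().fit(X_train, y_train)
print(f"Accuracy on held-out cases: {model.score(X_test, y_test):.2f}")
```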
There are several examples of AI being reasonably accurate in predicting verdicts and risk assessments. Yet decisions based on AI cannot easily be disputed, as the patterns AI discovers cannot readily be interpreted or explained. If AI decisions are based on biased data rooted in human judgement (such as previous verdicts), they risk disproportionately and negatively affecting certain population groups.
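The bias mechanism can be shown in one hedged sketch. The snippet below fabricates a 'history' in which one population group received harsher verdicts for identical cases, then trains a model on that history; every variable (the group attribute, the severity feature, the bias term) is invented for the illustration, and no real data is involved.

```python
# A synthetic illustration of biased training data: past verdicts were
# harsher for group 1, so a model trained on them reproduces the disparity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

group = rng.integers(0, 2, size=n)     # 0/1 membership in a population group
severity = rng.normal(size=n)          # legitimate case feature

# Hypothetical biased history: the same severity led to conviction more
# often for group 1 (the +0.8 term encodes past human bias).
past_verdict = (severity + 0.8 * group
                + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(np.column_stack([severity, group]),
                                 past_verdict)

# Identical case, different group: the model predicts different risks.
same_case = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(same_case)[:, 1])  # higher risk for group 1
```

The model is never told to discriminate; it simply reproduces the pattern present in its training data, which is why biased historical verdicts are dangerous inputs.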
Prof. Louis Assier-Andrieu, Professor at the Law School of Sciences Po in Paris and Research Professor at the French National Centre for Scientific Research (CNRS), provided a more in-depth analysis of the interplay between AI and legal traditions. According to him, both common and civil law rest on fictions that would be internalised by AI. Common law's fiction lies in its assumption that legal decisions can be grounded in previous cases; yet, 'one never enters the same river twice'. Civil law assumes that laws and codes encompass every imaginable case, and that abstract rules can be applied to a variety of cases. To address these fictions, it could be useful to look at more communal, non-Western forms of justice.
Assier-Andrieu highlighted the fact that France is already experimenting with predictive justice based on big data, in order to make institutions more rational and less dependent on human bias. However, judgement ultimately requires trust. With 93% of private practitioners in the USA fearing replacement by robots, 'where is the trust in the making of algorithms and the predefinitions used?' Can we trust AI to decide something as important as a legal judgement? Salobir added that we need to consider whether AI makes judgements based on consequence or correlation, and whether it judges the individual or the group to which the individual belongs.
Prof. Lorna McGregor, Professor and Director of the Human Rights Centre at the University of Essex, concluded the panel discussion by relating AI to human rights. She explained that it is 'crucial' to understand our current and future environment in order to make sense of its human rights implications. AI could create opportunities for progress towards the sustainable development goals through efficiency, cost-effectiveness, and improvements driven by disaggregated data. It can help allocate resources and predict crime.
AI can also generate risks for human rights, not only by creating privacy threats and facilitating surveillance, but also by creating inequalities and discrimination. While the big data on which AI is based is extensive, it is neither complete nor perfect. This imperfect data feeds algorithms and AI, and can 'bake discrimination into algorithms'. As a result, human bias is 'accentuated, rather than resolved'. Echoing Vandergheynst, she repeated that AI decisions cannot easily be challenged, and that judges and lawyers might not be sufficiently equipped to assess the accuracy of these decisions.
McGregor concluded that international human rights law could provide a framework for addressing the risks posed by AI. We also need to consider the responsibility of states and business actors, and to identify red lines where the risks are too great to proceed.