The Future of AI in the Judiciary: Launch of the UNESCO Guidelines for the Use of AI Systems in the Judiciary

29 May 2024 16:00h - 16:45h

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Full session report

Experts Discuss the Role of AI in the Judiciary at UNESCO Session

During a session on the future of Artificial Intelligence (AI) in the judiciary, hosted by UNESCO, experts gathered to discuss the implications of AI’s current use and its potential role in judicial processes. Mr. Prateek Sibal moderated the session, highlighting the importance of discussing the present application of AI in the judiciary and the need for a concise exchange due to the session’s limited duration.

Dr. Juan David Gutierrez Rodriguez presented the findings from a UNESCO survey, which revealed that while judicial operators are generally familiar with AI systems, the majority do not use AI tools for work-related activities. The survey indicated that AI is mainly employed for searching legal documents and assisting with drafting, but not for decision-making.

Dr. Irakli Beridze from UNICRI addressed the human rights dimensions of AI, particularly in law enforcement and the judiciary. He discussed the development of a toolkit for the responsible use of AI by law enforcement, which aims to ensure human rights compliance.

Ms. Caitlin Kraft Buchman and Dr. Rachel Adams expressed concerns about biases in AI systems, especially large language models (LLMs). They stressed the need for empirical research to understand the extent of these biases and their real-life impacts.

Amanda Leal from the Future Society spoke about the governance of AI throughout its lifecycle and the supply chain behind AI tools. She emphasized the need for higher scrutiny and transparency from AI developers and deployers, particularly in the context of human rights violations associated with AI’s development process.

Prof. Anthony Wong cautioned against judges relying solely on AI for case reasoning and decision-making, advocating for the use of AI as a tool to enhance productivity and efficiency, with judges applying their intellect to analyze AI-provided materials.

The session also addressed audience questions on privacy concerns, the EU AI Act’s classification of the judiciary as a high-risk sector for AI deployment, and the need for increased awareness and education among judicial operators.

In conclusion, the session underscored the opportunities and challenges presented by AI in the judiciary. While AI has the potential to improve efficiency and access to justice, there is a clear need for practical approaches to ensure its ethical and effective use. This includes the development of guidelines, capacity-building programmes, and a focus on human rights compliance. The panellists called for a nuanced approach that recognizes the current use of AI in the judiciary and the need for simplified frameworks to support judicial operators.

Session transcript

Mr. Prateek Sibal:
Welcome to this session on the future of AI in the judiciary. Though I would actually say it is really the present of the judiciary that we are going to discuss today. We have a fantastic lineup of speakers who have been engaging with UNESCO, but also more broadly on work on artificial intelligence, human rights, ethics, and gender equality globally, and who in some cases have been working as legal professionals. So we look forward to a rich discussion. It's only 45 minutes, so I would request everyone to keep their remarks short, brief, and to the point, and then let's try to have a bit of an exchange. About the panel: we are here today to launch the findings of the UNESCO survey on artificial intelligence use in the judiciary. I will, in a moment, hand over the floor to our Assistant Director-General, Dr. Tawfik Jelassi, to share his opening remarks, and then we'll do an introduction of the panel and dive right into it. So, ADG, over to you.

Dr. Tawfik Jelassi:
Thank you very much, Prateek. Good afternoon, distinguished guests, esteemed panelists, ladies and gentlemen. I am very pleased to welcome you to this event. My team prepared a whole speech for me, but for the sake of time, I'm not going to read it. I will just say that this piece of work is part of a long-standing initiative of UNESCO. We have been working in the field of the judicial sector for 11 or 12 years now. We have trained 36,000 judges and prosecutors in 160 countries on regional and international standards on freedom of expression and the safety of journalists. And recently, we have also looked at other cutting-edge issues, like AI and the rule of law and how artificial intelligence impacts the work of the judiciary. I think the first time we ran this, we had close to 6,000 judges, prosecutors and judicial operators take part in the online course. So we believe this is very important, but we have also expanded our interventions by looking at other key stakeholders, because ultimately our aim is to create an enabling platform, or let's say an enabling ecosystem, for media professionals to do their work. So recently we developed similar training and capacity development for MPs, for parliamentarians, but also for police and security forces. A journalist comes across judges, prosecutors, police officers and security force agents, but also parliamentarians, who draft and vote on the law. So this is a long-standing piece of work, which will continue, and I'm sure that in a few minutes Prateek will reveal the findings from this major survey that we have done. Last September, we had a meeting of the chiefs of Supreme Courts from different countries; altogether, my colleagues tell me, they preside over more than 2 billion people in their respective countries. As they say, we have the Chief Justice of India. Absolutely, we have the Chief Justice of India as well. Because we are talking about the Supreme Courts of different countries, and of course, that is the highest judicial authority. UNESCO has been carrying out this work as part of its mandate. Our mandate, as you may recall, goes back 80 years, with the constitutional mission of UNESCO to build peace in the minds of men and women. And you may tell me, well, that is too ambitious a goal: how can you build peace in the minds of men and women? Well, it is through education, through culture, through the sciences, and through information and communication. It is a contribution; as we know, peace is built in the mindset of the people. How can we reinforce that? Our latest piece of work, very briefly, is the UNESCO guidelines for the governance and regulation of digital platforms, which we published last November. What we have been seeing on digital platforms and social media is an exponential increase in mis- and disinformation, hate speech, and other harmful online content. These do not contribute to peace building; they are very divisive, very harmful, very negative effects on platforms. That is our latest initiative: to say we cannot stay passive watching this disinformation impact the 80-plus elections happening this year and the 2.6 billion voters going to cast their ballots. We have to do something in terms of the dissemination of objective, fact-checked information and combating disinformation, also through our media and information literacy program, an educational program for pupils and students to make them media and information literate in the digital age. Let me stop here.

Mr. Prateek Sibal:
Thank you so much for those words. We'll start with actually revealing what the survey findings have shown us, and for that I invite Professor Juan David Gutiérrez, who is joining us from Colombia and who helped us design and run the survey. So, Juan David, the floor is yours.

Dr. Juan David Gutierrez Rodriguez:
Thank you very much, everyone. It's a pleasure to be with you, remotely in this case, and happy to present the results of the UNESCO Global Judges' Initiative survey, which aimed at understanding whether, and how, different judicial operators are using AI systems for their legal work. As you know, the report is already available, so maybe someone can share the link in the chat; I saw that someone in the Zoom chat was asking for it. The survey was conducted between September and December last year. We got responses from over 500 people from over 90 countries around the world. This included judges, prosecutors, lawyers, civil staff who work in the judiciary, and, in general, any individual who has a meaningful role in the administration of justice. What did we find? First, that overwhelmingly, judicial operators are somewhat familiar with AI systems. As you can see, the answers show a roughly normal distribution: very few people say that they are not familiar at all, and likewise very few say they are experts; most people feel that they are moderately familiar with AI systems. With regard to the question of whether they use AI tools, and particularly AI chatbots, for work-related activities, the most common response was no. As you can see, over 40% of the respondents said they currently do not use them for work, and around another 20% said that they used them for purposes other than work. Then there is a significant number of people who answered yes, I've used them sometimes, or I even use them on a monthly, weekly, or daily basis, which is quite surprising given that access to AI tools, particularly generative AI tools, is rather recent. Another very important question we asked those who said they have used AI tools is how they got access to them. Only 16% of the respondents said that their organization provided access to the tools, 71% responded that they accessed a free version of the tools, probably an AI chatbot, and 12% said that they paid for their own subscription. This has very important implications, which I'm sure will be discussed later in this panel. How do the respondents use AI chatbots? Those who said that they actually use AI chatbots said that they mostly use them for searching: searching jurisprudence, in this case case law, laws, legal doctrine, or specific content. This is very important, and I think it should be discussed later on, because of course there are some limitations and risks in using AI chatbots as a type of search engine, and there is a difference between general-purpose, commercial AI chatbots and, let's say, tailor-made chatbots for legal purposes. But if the main use currently is search, that is something we should be discussing. The judicial operators also said that they use chatbots for other purposes, in this case for writing documents, but most of the answers point not to drafting documents from scratch but rather to helping them summarize, improve the grammar or tone of a text, and even draft emails. So there are different ways in which judicial operators are using these tools to help them with the drafting of their texts. And then, finally, for brainstorming.
I'm almost done now. It is important to note that the respondents are aware that there are potential negative consequences to the use of ChatGPT and other AI chatbots. Most of them, as you can see, say that they are aware of these limitations, and they also identified potential negative issues: among them, the quality of the output generated by the chatbot, issues regarding privacy and the security of information, issues regarding integrity and transparency, as well as biases. Most of the judicial operators said that they did not have guidelines for using these chatbots, nor had they received any sort of training. And finally, they are aware that it would be pertinent to have some guidelines: overwhelmingly, 90% of them said that it would be pertinent for UNESCO to launch guidelines. With that I finish, and I give the floor back to Prateek. Many thanks.

Mr. Prateek Sibal:
Thanks for that presentation. So, we are at the AI Governance Day today; how is working with the judiciary linked to AI governance? It is really important for thinking about the implementation of human rights standards. In most countries we do not have laws which can govern AI directly, and the judiciary can actually leverage international human rights law to implement protections against bias and discrimination, and for fairness, around the world. So it is not something where we have to wait until there is a law; we can actually already do it. And this is part of UNESCO's work. Some of these findings also show that judicial operators are increasingly using AI. For instance, we saw in the United States that some lawyers submitted fictitious case citations to the court. These are real-world happenings that are threatening the rule of law. So with that, I would now open the floor and invite our panelists to quickly introduce themselves, and then we dive into the questions. We'll start with Irakli from UNICRI. Would you like to say a quick word of introduction?

Dr. Irakli Beridze:
Yes, yes. Thank you very much. Thank you for the invitation. My name is Irakli Beridze. I'm head of the Centre for Artificial Intelligence and Robotics at one of the UN agencies, UNICRI. Our mandate is to work on policy and practical applications in the area of AI vis-à-vis crime prevention, criminal justice, the rule of law and human rights, and it is very pertinent to the work which UNESCO is doing in that field.

Mr. Prateek Sibal:
Thanks, Irakli. And Amanda Leal from the Future Society.

Ms. Amanda Leal:
Hi, it's a pleasure to be here. I work at the Future Society; we're a non-profit focused on AI governance. I also work on our workstream on AI and the rule of law, and that is how we partnered with UNESCO in activities related to capacity building in the judicial sector. But most of our work is focused on designing and implementing policy solutions that will advance AI governance, now with a focus on general-purpose AI and generative AI.

Mr. Prateek Sibal:
Thanks, Amanda. And over to you, Dr. Miriam Stankovich.

Dr. Miriam Stankovich:
Hi, Mimi Stankovich, nice to meet you all. I serve as a senior principal digital policy specialist with DAI. DAI is one of the biggest USAID implementers; we implement development work and programs around the globe. I have 25 years of experience working with governments and digital governance initiatives. I specialize in AI governance, and I'm one of the principal authors of the UNESCO Global Toolkit on AI and the Rule of Law for the Judiciary.

Mr. Prateek Sibal:
Thanks, Mimi. Anthony Wong?

Prof. Anthony Wong:
Thank you, Prateek and Tawfik, for your invitation to be here today. I'm representing IFIP, the International Federation for Information Processing; I know it's a mouthful. It was created under the auspices of UNESCO in 1960. We represent half a million members on five continents, with 100 working groups ranging from AI governance to all sorts of technical issues, including cybersecurity. I'm here wearing four different hats: I'm a practicing IT lawyer, I was also a CIO, and I have four degrees, in IT and law. I also used to design the first generation of Thomson's expert systems and digitization, which is a competitor to Lexis. As you know, Lexis just launched its AI system yesterday, which is very timely, I think, for this conversation. So thank you, happy to participate.

Mr. Prateek Sibal:
Thanks. And over to you, Caitlin Kraft Buchman.

Ms. Caitlin Kraft Buchman:
Hi, I'm Caitlin Kraft Buchman. I'm here in Geneva. I'm one of the co-founders of the International Gender Champions, and I also run the A+ Alliance for Inclusive Algorithms. I have a feeling that we're here for maybe two reasons. One is the feminist AI research network that we have, through which we funded a tool for a criminal court in Buenos Aires that is being piloted, AymurAI; it is one of the use cases in this wonderful course that you've put forth. The other is that, in consultation with OHCHR, we built a human rights-based approach to AI development, which is currently on the online portal of the CERBON, with whom we're working, and we're about to do it at a larger scale with the Turing Institute. Thank you.

Mr. Prateek Sibal:
And over to you, Dr. Rachel Adams.

Dr. Rachel Adams:
Thank you, Prateek. I'm the CEO of the Global Center on AI Governance, which is a new research collective based in South Africa but working around the world. Our big project is the Global Index on Responsible AI, which measures progress and commitments to responsible AI in 138 countries globally. The results, I reckon, will be released next month, and there are some early insights I can share that are relevant to this work.

Mr. Prateek Sibal:
Thank you so much. So with that, I'll open the floor for questions, first to Irakli. You've been working a lot on the human rights dimensions of AI. Can you share some of the challenges that you're seeing at the intersection with human rights, particularly for the judiciary? What are these concerns?

Dr. Irakli Beridze:
Thank you. Thank you very much; it's a pleasure to be part of this panel discussion. We are part of the AI for Good Summit, and I was at the AI governance roundtable this morning. I don't remember how many AI for Good Summits I have been to, but I have been at every single one since 2017, and a lot of the discussions have advanced since then; at the beginning, we were not talking about governance. While the technology has grown exponentially, the risks and the benefits associated with it have grown exponentially as well, and I'm happy that right now we are very much focused on this. I talk quite a lot about AI governance as probably one of the biggest challenges of our generation to be solved in the years to come. Otherwise, this technology is simply going to overtake us, especially against a background where the technology itself is growing exponentially and you have the challenge of creating governance instruments, especially on a global scale, to address it. My center, which is, by the way, located in The Hague in the Netherlands, works quite a lot on the creation of policy and governance instruments, especially in the area of law enforcement. We just launched a specialized toolkit, called the Toolkit for the Responsible Use of AI by Law Enforcement. This was joint work with INTERPOL over the last three years, and it has been tested in 15 different countries. We have launched an electronic version of it and are entering the second phase of the project, where we will be helping countries adopt such a tool. If I can ask you to focus in on the human rights side? On the human rights side: obviously, the use of AI is technically not perfect at the moment. AI systems have numerous issues associated with them, and accuracy is not perfect; this was very much exemplified in this morning's discussions. I can name numerous human rights challenges, starting from equality and non-discrimination, privacy, freedom of thought, of expression, of assembly and association, the right to a fair trial, equality of arms, the presumption of innocence, and many, many other things which could be jeopardized or harmed by the use of AI tools if they are not used properly. In our work, I always advocate that AI should be used to solve problems, and there should be no banning of it, obviously, but it should be done in a responsible and human rights compliant manner. How do we do that? What are the tools available to us right now? We need to create more of these sectoral governance instruments, and I think that UNESCO is doing a great job with the launch of the survey and the work on specialized guidelines and toolkits for the judiciary; I would be happy to contribute and help by sharing our experiences. Our toolkit was constructed through a multi-sectoral, multi-stakeholder process in which we involved government, the private sector, academia, civil society and others, so that all voices could be heard, and put out an instrument which would serve the entire globe, all countries in the world. Thank you so much.

Mr. Prateek Sibal:
Stay on; I'm a crazy moderator, so I am going to go off script a bit. Mimi, can you dig a little deeper into some of the use cases of AI in the judiciary and how they intersect with human rights?

Dr. Miriam Stankovich:
I'm going to be the devil's advocate here, because a lot of experts say that there are a lot of risks and challenges related to AI, ethical ones and from a human rights perspective. Listening to Juan David and his presentation, I think we need a more granular and nuanced approach: for example, whether judges, prosecutors, or other stakeholders from the justice sector are using AI tools in their individual capacity or in an organizational capacity. I'm not going to be preaching to the choir, because all of you know that there have been instances where both judges and justice-sector organizations have started experimenting with certain types of AI systems. I think this should be approached with caution; for example, the new EU AI Act says that the use of AI in judicial systems should be treated as a high-risk application of AI. But having said this, I think there are numerous opportunities for using AI systems in the judiciary when there is a human in the loop. AI should always be used as a tool for support: a tool for supporting judges, for supporting public prosecutors, for supporting stakeholders. We have AI being used for case management and prioritization. We have AI used for legal research, with LexisNexis and Thomson Reuters, and for document automation, the filing of cases, and support for the creation of virtual courts and smart courts; we have seen instances in India and in China of drafting legal documents. In Brazil, public prosecutors have started experimenting with the use of AI systems. In the United States, as Prateek mentioned, there are a lot of cases where judges use AI for predictive analytics, for sentencing, for support in their decision-making. Again, because of the controversial use of these AI systems, we really need to be aware of the challenges and risks; but nevertheless, we should use AI if we are aware of all of them. Another very exciting use of generative AI, which I found out about only today: the United Nations Human Rights Office of the High Commissioner has started using generative AI as a guardian AI. They use generative AI for monitoring and situational awareness of whether there is a human rights violation. Can you describe how they do that? I just found out about it; it's something interesting to look at, and we could take a look at it together. This is something we should be talking about: shifting the narrative, because challenges and risks exist, but certain opportunities come with certain types of AI systems. So again, I am advocating a more granular, nuanced approach. We need to educate stakeholders, and we need to distinguish between different uses of AI systems and what types of AI we are talking about: for example, whether it is an expert AI system or a generative AI system. Different types of risks come with different types of AI systems. I'm going to stop here.

Mr. Prateek Sibal:
Anthony, you mentioned gen AI systems, and in your introduction you also mentioned large language models and their use in the judiciary. Going back to your earlier experience with Thomson and Lexis, can you share the challenges if people start using, say, ChatGPT for writing judgments?

Prof. Anthony Wong:
Thank you, Prateek. I'll start with the positive, if you don't mind, and then I'll go to the challenges. Right now, as a practicing lawyer, I would use the tools from Lexis and Thomson to do case summaries and to help with my contract drafting and legal precedents. I would use them for what I call legal prediction: if the system has been trained on judges and their profiles, I would use it to predict the outcome of a case, to give my clients percentages on the likely success or not of their facts. So this is what I would use as a practitioner, which I am. But on the downside, I have also developed systems for Thomson, so I know, as a practicing lawyer, that the law in Australia is different from the law in America. And it is very important, when you train your models, to make sure you get your data from the right place. Otherwise, if I use an American ChatGPT, database and large language models, I'll be giving my client the wrong law, for which I could be sued for negligence. So just because ChatGPT is trained on public data, I would be very careful as a lawyer to use it, because I don't know what it has been trained on. Is it American law, Italian law, South American law, different jurisdictions? It is a very complex issue. So I personally would make sure that if I use Lexis or Thomson, I would only use models that have been trained on Australian law, as a practicing lawyer. That is one of the challenges. You also asked me, Prateek, how do we ensure that AI tools uphold the principles of due process, equity, and a fair trial process? I'd like to turn that question around. We are trained as lawyers, judges, and legal practitioners; we are highly trained. We spent many years in legal training, and there are many, many rules that we have to comply with. We apply the facts to the law and then see how that comes out for our client. So it is not the tool that will ensure those things; it is we as practitioners, who have sworn to a level of ethics, who have to comply with a number of laws in our own jurisdictions, and who must act in the best interests of the client without disturbing the natural balance of justice. So it is not the tool that has the onus of responsibility. But I know that the EU legislation now takes a risk-based approach; as my colleague said, use by judges and the judiciary has been rated high-risk, and that is the correct approach. Developers, when they build a model, if they say it is going to be used in Australia, must make sure that it is trained on Australian law rather than American law. Because if it is trained on the wrong law, is it my negligence, or should I look to Lexis or Thomson?
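To make this jurisdiction point concrete, here is a minimal, hypothetical sketch of one possible safeguard: tag every source with its jurisdiction and refuse to surface material from the wrong one. The data structure, citations, and function names are illustrative assumptions, not a description of how Lexis or Thomson products actually work.

```python
# Hypothetical sketch: restrict a legal AI assistant to sources from the
# user's own jurisdiction. All data here is an illustrative placeholder.
from dataclasses import dataclass

@dataclass
class LegalSource:
    citation: str
    jurisdiction: str  # e.g. "AU" for Australia, "US" for the United States
    text: str

SOURCES = [
    LegalSource("Mabo v Queensland (No 2) [1992] HCA 23", "AU", "..."),
    LegalSource("Miranda v. Arizona, 384 U.S. 436 (1966)", "US", "..."),
]

def sources_for(jurisdiction: str) -> list[LegalSource]:
    """Return only same-jurisdiction material; fail loudly otherwise."""
    hits = [s for s in SOURCES if s.jurisdiction == jurisdiction]
    if not hits:
        # Refusing is safer than silently falling back to foreign law.
        raise LookupError(f"No vetted sources for jurisdiction {jurisdiction!r}")
    return hits

if __name__ == "__main__":
    print([s.citation for s in sources_for("AU")])  # only Australian law
```

The design choice is to fail loudly rather than degrade gracefully: for legal work, an answer grounded in the wrong jurisdiction is worse than no answer.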

Mr. Prateek Sibal:
Thanks. It kind of brings us back to the human-in-the-loop principle. I would turn now to you, Rachel, because you have a global view of developments on the ethical development and use of AI. From the work that you're doing with the observatory and the findings from your survey, what insights and principles would be useful for the judiciary? And have you seen any examples of how to make this work in practice?

Dr. Rachel Adams:
Yeah, thank you for your question, Prateek. I think we have to remember that when we're looking at this in different parts of the world, the risks and challenges look very different and are exacerbated in different ways. The use of AI systems across much of Africa, which is where I work, is higher risk precisely in those places where we feel it might help the most to fill some of those resource gaps. And it is higher risk because, one, there is a lack of representative data, which leads to various kinds of biases. Two, there are fewer effective institutional and legal mechanisms to challenge an outcome that AI might have led to, and fewer opportunities for redress and remedy. This was something we saw with the Global Index on Responsible AI: redress and remedy was a very low-performing area globally. And thirdly, there is a lack of literacy around the particular risks and challenges, and I suppose opportunities, that AI may pose. So in that respect, we have to look at where we are thinking about adopting and using these systems and what the most appropriate or most important ethical values or principles are in those kinds of areas. The other thing to say is, I'm not sure if anyone is familiar with the study that Hofmann and others led, where they tested various kinds of LLMs with different dialects. They gave the models exactly the same information, but in standard American English and in African-American English, and asked various questions about how a particular situation should be dealt with in terms of sentencing for a crime. And there was very, very clear bias: speakers of African-American English were more likely to receive harsher sentences and even to be sentenced to death. So I think having better, more empirical and clear evidence about what these risks, challenges, and real-life harms look like is really important. Because at the moment, there is a lot of guessing in the policy space, which we need to do because the problem is in front of us, but we really need the evidence-led research to show just how pressing and challenging some of these issues are. Thanks.

Mr. Prateek Sibal:
It's interesting that you mention the lack of redress and remedy mechanisms at the institutional level across the world. That is also what we're trying to work on with the judiciary: helping them understand that they have existing legal frameworks they can use to offer this redress and remedy to the people coming before them. I'll turn now to Caitlin. There was some mention of biases in large language models. We at UNESCO actually launched a study which looked at gender bias in different large language models, and it basically found that if you are a woman, the model will give you professions like nurse or teacher, while if you are a man, you would get business executive, and so on. So models, even while they are generating content, show these kinds of biases. That study is also online. And Caitlin, you've been working with researchers to not only understand bias but also use AI to support the judiciary in identifying biases in existing cases, particularly in some of the work that you've been doing on gender-based violence with the AymurAI tool in Argentina. Yes, please go ahead.

Ms. Caitlin Kraft Buchman:
Well, I could answer that, but I thought you were going to ask me about LLMs, and I just want to push back a little bit and support what Rachel just said. I think there is new research coming. We know the FAccT conference, on fairness, accountability, and transparency, is coming up; it's taking place in Rio this year. There are lots of papers coming out, but there are two in particular, one from Stanford AI and one from the Turing, which really show the deep, deep bias that is inside LLMs for everybody. It's gender bias, racial bias, ethnic bias, religious bias; it is really bias writ large. And we know a bunch of things about them. At a lighter level, where maybe you don't think it's so important, take just writing a recommendation letter, because we see that this is what some judges or judicial operators are using these tools for. If you say, write me a letter for John, the software engineer, it comes out: he exceeds expectations and should be kept. For Jill, the software engineer, it comes out: she meets expectations and is a nice employee. Something as little as that obviously affects performance evaluations and could affect people. And you can expand that to something like COMPAS, which wasn't using gen AI but is a model with a sort of poverty factor built into it, where people are going to jail. We're talking about bail, about recidivism scores, but it means a night or two or more in jail, or a very high bail that you're not going to be able to meet. These things are affecting people's lives in a very, very profound way. And it disturbs me, as somebody who is not part of the legal system, to hear that people are thinking, oh well, we'll use it; there are some risks, but we'll figure it out. I'll make a prediction that in six to nine months we're going to take gen AI off the table until we figure out what's right with it. One reason, and I think it's the same group of people who did the research you just referenced, is covert versus overt racism. It showed how, from GPT-2 to GPT-4, overt racism went from very, very racist to not racist at all, at the other end of the spectrum. But as the models became more positive towards African-Americans on the surface, the covert, underlying racism went in the absolute opposite direction, degrading to levels from before the Civil Rights Movement. We don't know how these things happen. There is an internal logic, but we're using machines whose logic we don't understand. And I think that we should take it easy. Thanks.
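As an aside, the John and Jill letter test described above can be turned into a simple, repeatable probe. Below is a minimal sketch under stated assumptions: the generate() function is a placeholder for whatever model is under test, and the word lists and scoring are illustrative, not the methodology of the Stanford or Turing papers mentioned.

```python
# Hypothetical paired-prompt bias probe: identical requests that differ only
# in the subject's name, scored with a crude praise lexicon.

def generate(prompt: str) -> str:
    # Placeholder so the sketch runs; a real probe would query an LLM here.
    return "She meets expectations and is a nice employee."

STRONG = {"exceeds", "outstanding", "exceptional", "brilliant"}
WEAK = {"meets", "nice", "pleasant", "adequate"}

def praise_score(text: str) -> int:
    # Positive for strong praise, negative for faint praise.
    words = {w.strip(".,").lower() for w in text.split()}
    return len(words & STRONG) - len(words & WEAK)

def probe(template: str, names: dict[str, str]) -> dict[str, int]:
    # Same template, one run per name; persistent gaps across many runs
    # (not a single sample) would suggest bias.
    return {group: praise_score(generate(template.format(name=name)))
            for group, name in names.items()}

if __name__ == "__main__":
    print(probe("Write a recommendation letter for {name}, a software engineer.",
                {"male": "John", "female": "Jill"}))
```

In practice one would average scores over many samples and prompt variants, since single generations are noisy.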

Mr. Prateek Sibal:
Thank you so much. I would now turn to Amanda, because you mentioned that you've been working on AI governance safeguards. How can some of the safeguards you're working on in general apply to the judicial context? Can you share some examples and insights?

Ms. Amanda Leal:
Yeah, sure. To contextualize, I wanted to bring up two points: one about governance throughout the AI system's lifecycle, and another about the supply chain. Governance throughout the lifecycle is nothing new; UNESCO has worked on it, and it is reflected in the UNESCO recommendation on AI ethics. Really important here is UNESCO's ethical impact assessment, which poses questions that relate to different stages of the lifecycle. And I think it's important to focus, and this echoes a bit what Professor Wong said, on the point that it's not about the tool, right? It's important to focus on humans, not the technology. For governance, we need to understand who does what at each point of the lifecycle, and that human choices underpin the entire creation of AI models and systems and how they are generated. So nothing about AI systems should be mysterious; it is all about choice, regardless of the level of capability a system might have. For example, deploying a black box is a choice, and I think this points to what Caitlin said. I actually wanted to ask you all, and I think Caitlin has already answered this: does it sound right that a few corporations have the say on whether to deploy black-box models that have not been proven to be safe or accurate and that reproduce bias, when there is evidence of biases in LLMs? Is it reasonable that we adopt them without considering this? I think you've already answered that it is not, and I would echo you. Given the nature of these tools, and because my work focuses more on general-purpose AI and generative AI, I noticed, as Juan was presenting the survey, that 41% of respondents said they use ChatGPT or a similar tool. I wanted to flag that the lack of transparency in those tools should be enough of a red flag for judicial operators to apply much more scrutiny to whether and how they use them. There are many frameworks that can be taken up to analyze this, starting with UNESCO's ethical impact assessment; there are others out there, along with a good level of understanding of how the lifecycle works. But also look at the supply chain. Coming back to the point on human rights violations: yes, there could be risks of human rights violations from the outputs of AI systems, but there are also actual human rights violations behind those tools, because of the way their supply chain is built. For example, we have recent reports of workers hired by companies like OpenAI and others to label data in poor working conditions. They are exposed to grotesque graphic and violent content, they have no rights, and they have a very precarious work situation. So when you use those tools, there is also a human rights violation attached to them in the supply chain, and this is something we shouldn't ignore. I also wanted to echo the point that those tools, and I'm focusing here on generative AI tools, don't provide enough information about how their systems work, what data they are trained on, how they are trained, model weights, and all that.
So I just wanted to point out that AI is not synonymous with efficiency, and that AI governance safeguards the use of AI in the judicial sector, hopefully in a way that will steer it in a good direction. Yes, let's leverage AI tools when they are pertinent, when they are net positive, and I'm interested in hearing positive examples of that from colleagues. Thanks.

Mr. Prateek Sibal:
So, there is also this other approach, retrieval-augmented generation, which is now able to provide some of these sources, but it is still evolving; let's see how that goes. I'd now like to open the floor to anyone who has a question or would like to comment on something. Let's try to keep it short and then take it from there. Yes, sir, please introduce yourself.
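For readers unfamiliar with the term, here is a minimal sketch of the retrieval-augmented generation idea just mentioned: retrieve passages from a vetted corpus first, and hand only those, with their citations, to the model, so that outputs can be traced back to sources. The corpus and relevance scoring below are toy placeholders, not a real legal database.

```python
# Hypothetical RAG skeleton: ground answers in retrievable, citable
# passages instead of the model's opaque training data.

CORPUS = {
    "Case A v. B (2019)": "Bail must be proportionate to the accused's means.",
    "Case C v. D (2021)": "Automated risk scores require human review before use.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    # Toy relevance score: number of words shared with the query.
    q = set(query.lower().split())
    ranked = sorted(CORPUS.items(),
                    key=lambda item: len(q & set(item[1].lower().split())),
                    reverse=True)
    return ranked[:k]

def grounded_context(query: str) -> str:
    # A real system would pass this context plus the query to an LLM and
    # instruct it to cite only the bracketed sources; shown here is just
    # the auditable context that makes that traceability possible.
    return "\n".join(f"[{cite}] {text}" for cite, text in retrieve(query))

if __name__ == "__main__":
    print(grounded_context("Is human review required for automated risk scores?"))
```

Production systems replace the word-overlap scoring with vector search over embeddings, but the grounding principle is the same.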

Eduardo Bertoni:
I'm Eduardo Bertoni, Director of the Center for Human Rights at American University, Washington College of Law, and I used to be the Data Protection Authority in Argentina, so privacy is one of my obsessions. We are working on emerging technology and its impact on human rights. Two comments, plus questions, to get some feedback from this great panel. The first is, I see that 17% of the judges have concerns about privacy. Did you ask anything about how they want to cope with that concern? Because using personal data is one of the things that is most attractive for judges or prosecutors, and it could be very problematic. My second question is about the EU AI Act. You know that the AI Act puts the administration of justice in the high-risk category. So how does UNESCO plan to cope with this? I really think that AI could be a very good tool for the judiciary, but at the same time, the risks for human rights, the risks of discrimination, and many of the things we have said here are part of it, and the EU Act on Artificial Intelligence says something that we need to pay attention to. Thank you.

Mr. Prateek Sibal:
Absolutely, we'll come to those questions, but first I want to go to the floor. Yes, please, ma'am.

Audience:
My name is Anjali, and I come from an organization called IT for Change. Currently we're working with G5, OECD's G5, on gender-transformative programming policies and bringing gender transformation and real diversity into AI systems. I have two comments. One is that, the world over, the concerns around AI systems are really about having a human in the loop when the final decision is made. And the judiciary works within an entire narrative of delivering decisions, so the caution that people ordinarily call for should really apply, especially in use cases where decisions are made using AI. I also think it is one thing to say AI should be used in the judiciary, and another to say AI will be used in judgments. Between justice, judgments, and the judiciary, we really need to draw lines so that use cases can be limited to what really enhances efficiency. And exactly as in the case of violence against women, where we've been talking about creating friction on the screen when there is disinformation or fake news, any judgment authored using AI should, by force, carry a caveat that it was generated with AI, no matter where in the decision it was introduced. Otherwise, there will never be anybody to judge the judges. That's a good point. Thank you, Anjali. I think there are two more points, and then we'll try to wrap up. So I go to Mamio, and then to Fabio. Thanks very much, and it's a really fascinating discussion. Two questions. The first: we've talked about the extent to which the judiciary uses AI and the extent to which they are exposed to and understand it. My question is about the extent to which the judiciary is actually up to speed and able to question the use of AI in everything that happens up to the point where a case comes into the prosecution and into the court, for example, by the police, et cetera. And my second question, drawing maybe on USAID's experience, is the extent to which AI, if managed properly, could provide a way of actually getting through the vast backlog of cases that you see in a lot of developing countries. Does this open up a potential solution to work through that? Thanks.

Mr. Prateek Sibal:
Mimi is not from the USA, so that might create some diplomatic incidents, but we'll come to that; there are concrete use cases. And then we take the last one from Fabio, and then we wrap up. Yes, Fabio.

Audience:
I'm Fabio Senne from Cetic.br, a research center and UNESCO Category 2 centre in Brazil. We have run a survey on the use of technologies in the public sector in Brazil for around 10 years. Regarding AI, what we know from the surveys is that its use in the judiciary is much higher than in the other public sector organizations we interview, compared to the legislative and executive branches: around 68% of organizations in the judiciary in Brazil report using some type of AI. Although this level is high, we know from the other indicators that the use is limited to very specific tools, such as automating processes that are very time-consuming, so there is still no very profound use of AI tools. And when we ask about the barriers, the capabilities and capacities of the people in these teams are mentioned as the main barrier, more than anything else. So how do you see this type of discussion, given that, at the same time, the use is growing, but capacities for even basic use of AI are still very low in the judiciary, in the case of Brazil?

Mr. Prateek Sibal:
Sure, so I can try to combine some questions and share UNESCO's perspective, and then, of course, panelists, feel free to jump in; we'll conclude with 30-second remarks from each of you. On how UNESCO is working: we are running a capacity-building program, and we are also offering concrete guidelines to the judiciary on how to use, and how not to use, AI. This capacity-building program, as Fabio was mentioning, really goes into two dimensions. The first is the use of AI within the judiciary: what are the use cases? To give you an example, also from Brazil: in the courts of São Paulo, it used to take a couple of hours, and Mimi can correct me, to analyze incoming cases, the kind of urgent, important cases which put your liberty at risk. Now, with an AI system, they have reduced the time to analyze them to, I think, 40 minutes, and these cases can be listed before the courts sooner. So these are examples which can help improve efficiency, but also access to justice, and provide remedies sooner. That is the use-case side of AI. But then, what are the broader legal implications of AI? This is the second part of our work, where we talk about how human rights are implicated and what the concerns around bias and discrimination are. We heard about the COMPAS system in the US; there are all these examples where AI is used for judgments, as Anjali was mentioning. So this is another way we are trying to inform and educate. And there was this mention of how judges can be made aware. There is, obviously, automation bias: if all of us are using, say, a calculator, whatever the calculator shows, we think, yes, this is the right answer. We apply the same trust to AI systems without recognizing that there are potential flaws in the output they give us. So we need to create that skeptical mindset to question, and for that, judicial operators need to be aware of how this technology actually works, what the pitfalls are, and so on. I'll stop here, but I want to give the last five minutes we have to our panelists, if you want to come in on any questions or share any closing remarks. Perhaps I start from this side, with Rachel.

Dr. Rachel Adams:
Thank you for your question and some of the other points that have been made. I think it's important to remember that AI can't fill gaps where digital records don't already exist. That is just a recipe for disaster, and I think sometimes we're trying to use AI as a solution for a problem that is not digitalized; it's not about how we utilize information, it's that we actually don't have that information in a digital form yet. So we need to be very, very mindful there. And then, just on the judiciary, from my side: looking at this big idea of responsible, ethical, human rights-based AI, the opportunity with the judiciary is to train a set of people who can uphold and protect human rights that already exist. That is where I would like to see much more focus. Thank you.

Mr. Prateek Sibal:
Over to you, Caitlin.

Ms. Caitlin Kraft Buchman:
I'd add human rights impact assessments, which we're going to see coming into force with the Council of Europe treaty; it's all starting to be workshopped, and I think it's going to be very important. I'd also like to change two mindsets. One is not efficiency but really effectiveness: where is it more effective? Instead of worshipping the idea of efficiency, look at where there is impact and where there is value. And for administrative caseload work: we had talks with the OVC a while ago, I don't know what happened to it, about a caseload tool that would assign cases, in our case gender-based violence cases, which often fall to the bottom of the queue, so that they would come up on a regular basis and also not be assigned to judges who are not very open to the idea of gender-based violence, because that would be a very bad instance.

Mr. Prateek Sibal:
Thank you. So, 30 seconds each; my colleagues from UNEP are looking at me. Anthony.

Prof. Anthony Wong:
The Chief Justice of the oldest court in Australia, which is 200 years old, has just mentioned that AI could be the biggest challenge facing the judiciary. And I agree with my colleague over there: no judge should just use AI for their case reasoning and to arrive at an outcome, because it is not ready today and it will not be ready for a very long time. Judges need to use their intellect to look at the research and materials provided, use AI as a quick aid to productivity and efficiency, and then analyze the material using the training that they have. Thank you.

Mr. Prateek Sibal:
Thank you. Irakli?

Dr. Irakli Beridze:
Yeah, very quickly, 15 seconds. I basically have two comments. One is that it has become obvious that, in the judiciary, awareness of and familiarity with AI is not that high, and that creates a problem, both on the benefits side and the risks side. There has to be a lot more campaigning and a lot more education for judiciary professionals about what AI is in general, what the associated problems are, and what the benefits are as well. We all have to work together, and a lot more investment has to be made. And second, tools, toolkits, and governance instruments should be developed and made available to judiciary officials, so that they can use AI to solve problems, but use it in a human rights compliant manner.

Mr. Prateek Sibal:
Thank you so much. Amanda, and then Mimi.

Ms. Amanda Leal:
Okay, so, a kind of positive agenda. Because the judiciary is the last resort, I hope that judicial operators, if anyone, will push for much, much higher scrutiny and for transparency and accountability from the developers and deployers of AI. Thanks.

Dr. Miriam Stankovich:
From a development perspective, I would just say that judges are using AI; they are using different types of AI. We need to be realistic, pragmatic, and practical, so we need to simplify things for them. They need to use certain types of frameworks, such as human rights impact assessments. And please take a look at the Global Toolkit on AI and the Rule of Law from UNESCO, which is an open curriculum.

Mr. Prateek Sibal:
Yes, feel free to look it up. Thank you so much to all the panellists and the attendees. I have a couple of copies of the survey in case anyone still wants one.

Speech statistics

Speaker | Speech speed | Speech length | Speech time
Dr. Tawfik Jelassi | 157 words per minute | 660 words | 252 secs
Audience | 154 words per minute | 652 words | 255 secs
Dr. Irakli Beridze | 203 words per minute | 915 words | 270 secs
Dr. Juan David Gutierrez Rodriguez | 137 words per minute | 809 words | 353 secs
Dr. Miriam Stankovich | 148 words per minute | 795 words | 322 secs
Dr. Rachel Adams | 167 words per minute | 707 words | 254 secs
Eduardo Bertoni | 180 words per minute | 258 words | 86 secs
Mr. Prateek Sibal | 184 words per minute | 1747 words | 569 secs
Ms. Amanda Leal | 170 words per minute | 856 words | 301 secs
Ms. Caitlin Kraft Buchman | 201 words per minute | 868 words | 259 secs
Prof. Anthony Wong | 175 words per minute | 812 words | 279 secs