Living with the genie: Responsible use of genAI in content creation

31 May 2024 11:00h - 11:45h


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Full session report

Experts Debate Responsible Governance of Generative AI at ITU Session

At an ITU session moderated by Monika Gehner, experts convened to discuss the governance of rapidly evolving generative AI technologies. The panel acknowledged the potential benefits of generative AI, such as content creation, while also addressing the risks associated with biases, insecurity, and societal harm if not managed correctly.

Linh Tong from Vietnam highlighted initiatives to promote inclusivity and diversity in AI systems, detailing efforts to develop a Vietnamese version of GPT and other projects aimed at enhancing Vietnamese-based natural language processing models. These initiatives are backed by political support, specific regulations, funding, and engagement with vulnerable groups to ensure the success of inclusive governance.

Roser Almenar, an AI and law researcher, explored the legal complexities of generative AI, focusing on the challenges of establishing liability for damages caused by AI systems. She pointed out the inadequacy of traditional liability models to address AI-related issues and mentioned the European Union’s legislative proposals to adapt civil liability rules and product liability directives to encompass AI software.

Natalie Domeisen discussed the intellectual property (IP) implications of generative AI, stressing the need for careful documentation of content creation processes and understanding the diverse IP laws across countries. She also emphasised the necessity of legal agreements and partnerships when developing AI systems to navigate the complex IP landscape.

Blaise Robert from the International Committee of the Red Cross shared insights on using generative AI in conflict zones, emphasising the need to protect the privacy and rights of individuals. He advocated for a principled approach to innovation that prioritises safety and trust, ensuring that risks are borne by the organisation and not vulnerable populations.

Halima Ismail, an AI data scientist and lecturer, outlined her approach to teaching ethical AI programming. She described measures to mitigate bias in AI systems, including diverse data collection, data quality and pre-processing, and expert annotation and validation.

Audience participation enriched the discussion with concerns about AI’s impact on children’s rights, the absence of AI guidance in education, and the role of legislation in AI governance. One participant highlighted the educational sector’s challenges, where a lack of AI usage guidance leaves learners to use AI independently without proper oversight.

The session concluded with a Mentimeter survey, revealing that principles such as “do not harm,” “safety and security,” and “right to privacy and protection of personal data” are crucial for the responsible use of AI. Additional principles such as regulation, sustainability, transparency, and trust were also deemed significant by participants.

The session underscored the necessity for responsible management, ethical considerations, and inclusive governance to maximise the benefits of generative AI while minimising its risks. The panelists and audience contributions highlighted the multifaceted challenges and opportunities presented by generative AI, emphasising a principled and human-centric approach to its development and use.

Session transcript

Monika Gehner:
Good day everyone. I’m Monika Gehner. I’m head of corporate communications here at the ITU, the UN agency for digital technologies, and I’m your moderator today in this session. How do we govern technologies that are evolving very fast? As my boss Doreen Bogdan-Martin said earlier this week, we’re building a plane as we fly it. It’s evolving so fast that there is no one answer to it, and today we have several answers, and probably many questions left. Generative AI is a technology in which algorithms are trained on data sets to create new content such as images, video, text and sound. These tools bring benefits, and we will hear some of them, and they bring risks and challenges, and if we do not properly manage the risks and challenges, we may perpetuate risks and bias and insecurity. And we may put people out there at risk. We will feel that later. Or societies, or ourselves as communicators, or the organization on whose behalf we are communicating. So we have in mind communicators that are professionals, but also the communicators that we all increasingly are, especially when we have AI tools at our disposal. We can all be communicators, especially on social media. So we have all these people in mind, organizations and individuals. And here in this session, we will touch on the life course of generative AI, from the project design and investment to the design of the tool, to the programming of the tool, the input, the output, and the user. So we have these different perspectives. And the objective is to sharpen a bit your understanding and your judgment as to how we can limit and minimize the risks and have maximum benefits. And that’s pretty much context-specific. So I have the pleasure now of introducing the panelists. I have online, let’s go online, three bright young ladies. They are all members of the ITU Secretary-General’s Youth Advisory Board. We have Linh Tong, who is currently Deputy Director of the Center for Education Promotion and Empowerment of Women. She’s working on a report discussing the importance of protecting cultural and linguistic diversity and gender equality in the design and development of AI systems and tools, specifically in Vietnam. She also served as a country researcher for the Global Index on Responsible AI. We also have online Roser Almenar from Spain, who is currently a researcher on artificial intelligence and law at the University of Valencia in Spain. Her research focuses on the discipline of space law and policy, dealing with remote sensing technologies, for instance, as well as AI and data protection, among others, and the impact of technological advances on the protection of human rights. We also have online Halima Ismail, who is an AI data scientist and part-time lecturer at the University of the Rhine. She’s teaching 19 to 21 year old students how to program ethical code. She recently joined the Telecommunication Directorate in the Ministry of Transportation and Telecommunications in Bahrain. With a focus on responsible coding, she teaches ways to avoid bias in AI input and output. And then we have here Natalie Domeisen, who is the co-author of Living with the Genie: Artificial Intelligence and Content Creation for Small Businesses in Trade. And we have a few copies here that Natalie kindly brought along. It’s hot off the press; it has been updated with a section on copyright, with comments from the World Intellectual Property Organization next door.
And it’s also uploaded on the panel session description, so you can download it from there electronically. Natalie has recently worked for the International Trade Centre, but also for many other UN organizations, in strategic communications and publishing, multilingual publishing, printing, video and communication services. So she’s a producer and a user. And she has led teams in other United Nations organizations as well. And then we have Blaise Robert here from the International Committee of the Red Cross, who is the lead on AI in the innovation team in Geneva, Switzerland. He manages a portfolio of research and development and pilot initiatives that explore the applications of AI to further the Red Cross humanitarian mandate. So you’re at the intersection of tech and policy, if I understand well. And then I have Ulvia here. She is a member of the ITU Changemaker group that looks at AI in corporate communications, and she will curate questions from the online audience and also questions from here. So she will monitor that, and at the end we would like to do a little Mentimeter survey to get some insights from you as well. So time is progressing. Let’s start. And I have the first question for Natalie. Generative AI is changing the way we produce, publish and analyze data, and how we create content. So you have recently co-authored that publication, and you can take us through the tools, please. What are the tools in content creation, and what are the benefits?

Natalie Domeisen:
Thank you, Monika. There are so many changes, and yet so much stays the same. So in content creation, how you create content doesn’t really change. You need to have a great idea, it needs to follow a strategic vision, you need to do your research, you need to write, you need to produce, you need to promote. And the tools were already there, making you faster in the digital age, and right now it’s as if we have everything we’ve done before, but on steroids. So there are tools, and this report we have takes you through what kinds of tools you can use, from coming up with an idea about what you’re going to do, to how you research, whether it’s text or illustrations. So for example, we started with ChatGPT when we illustrated the cover of this book, and we came out with lots of biases, which I’m sure my fellow panelists and researchers can talk about. So we came up with, first of all, men with muscles, or women who were sexualized as genies, and so forth. And it was not easy to find something, and that is just dealing with the inherent biases that are in these systems. But it will get you going and give you the ideas. And it’s the same thing with research tools, which are getting out there and getting better and better. More of the agencies themselves at the international level are using gen AI to create their own specific systems, depending on the field. And then it can go on all the way through. You can do market research, basic copy editing. You can reorganize your bibliographies; don’t waste time on this, of course. You can speed up the writing and the analysis by using speech-to-text tools. But the challenge, of course, is to stay AI-literate and aware of the context around you, because everyone is starting to use these tools. So if everyone is using the tools, that means the marketplace is getting flooded with information. And you need to be very creative, very business-focused, and use interactive tools to reach very specific audiences. So in a way, it’s like going back to basics, but using these tools to make you more efficient. And that is what the major news industries are doing. They are not using it to replace their big thinking. They’re using it to make themselves more efficient, so they can reach a different language market or generate great headlines, as long as they have people behind them to watch what’s going on.

Monika Gehner:
Thank you, Natalie. And you mentioned language and targeting a market. And that takes me to the next speaker, Linh Tong. We saw there are great benefits, and in the questions and answers we could dive into these more. But this session is more on the responsible use and the risks and challenges that we face and that we want to help you manage responsibly. So digital technologies are hardly neutral, at least for now. There are values and norms embedded in their design, and research by the Paul G. Allen School of Computer Science and Engineering found the output of large language models such as GPT to be heavily biased towards the views of white, English-speaking, individualistic Western men, and you’ll find that reference in that publication. Just last week, a headline said that a big digital platform’s new AI council is composed entirely of white men. So Linh, how can we ensure inclusive governance of AI systems, such as protecting cultural, linguistic and gender diversity and equality in the design and development of AI systems and tools, and how do you do that in Vietnam? So we have Linh online. Please, Linh.

Linh Tong:
Thank you very much for inviting me to join this discussion on this very important topic. In answering your question, I would like to share that in Vietnam we have different initiatives to overcome the bias challenge in large language models. First, from the private sector, we have VinAI, a Vietnamese company that successfully launched PhoGPT, a ChatGPT version for Vietnamese people, in 2023. PhoGPT is trained from scratch on a Vietnamese data set and is capable of understanding Vietnamese culture and style. Second, from the public sector, the Vietnam Ministry of Information and Communications is collaborating with Viettel, a state-owned company, to develop big data models for the Vietnamese language and virtual assistants for government officials. And third, in the academic sector, the AI Institute of the Vietnam National University has presented machine translation for ethnic minority languages in Vietnam. In Vietnam, by the way, we have 53 ethnic minorities with different languages. So all of these projects aim to promote Vietnamese-based natural language processing models, and they do not happen by coincidence, but by properly intended policy and regulation. There are four important elements that I would like to highlight here to ensure that success. First of all, inclusion must be included in the high-level political discourse by the state leaders and in the AI national strategy. In Vietnam, we have two mottoes. The first one is digital transformation leaves no one behind. That means that digital technology should serve every citizen in Vietnam, including marginalized groups. And the second one is make in Vietnam, which aims to promote innovation by Vietnamese people and for Vietnamese people. Second, inclusion must be translated into concrete regulations. In Vietnam, we are working on a national standard on AI to ensure inclusion across the AI lifecycle, by engaging vulnerable groups like ethnic minorities, people with disabilities, youth and women leaders in the development, deployment and evaluation of AI. Third, in order to make all of this happen, we must have the financial resources to ensure implementation. For example, we have research funding for inclusive AI projects, and inclusiveness was made a criterion for evaluating the funding eligibility of projects. And fourth, we have monitoring and evaluation. At this stage, we also ensure multi-stakeholder engagement. And I would like to say that everyone must be aware and responsible at the gateway to generative AI, and the role of civil society actors and local think tanks is extremely important in this process. Engaging vulnerable groups like ethnic minorities, people with disabilities, youth and women as end-users in evaluating their experiences with AI products is extremely important. To conclude, I would like to say that ensuring inclusion in AI development can bring compounded benefits. First, it can help prevent risks to marginalized populations. Second, it helps promote local startups and innovation; in the case of Vietnam, I gave you three examples of AI models driven by Vietnamese people for Vietnamese people. Third, it means using AI for good by constraining its risks. For example, with proper regulations, we can universalize access to information regardless of language, because we can use different language models for different ethnic minority languages.
The same principle can apply to ensuring gender equality in AI. That’s all from my side for now. If you have any further questions, let me know in the discussion. Thank you.

Monika Gehner:
Thank you very much. I’d like to say there is interpretation into French; I see my colleagues here, and thank you very much for that. And please do post questions in the chat, also the people online. We heard that we need meaningful connectivity to include everyone in the digital revolution. 2.6 billion people in the world have never used the internet, and we need to bring them online as well. Meaningful connectivity means you have content in relevant local languages and relevant local content. Going back to the positive side, innovation is driving development, and digital technologies spur 70 percent of the SDG targets towards prosperity, so they are extremely important. I’d like to ask Roser, online, how we balance this innovation that we need for prosperity with responsibility, so that we have generative AI for good.

Roser Almenar:
Thank you, Monika, and thank you all for the kind invitation to join this thought-provoking session. Actually, I have been following this AI for Good summit this week, and I recall that on AI Governance Day the Secretary-General rightly said: it’s not the benefits, it’s the risks of artificial intelligence that keep us all awake at night. And this is precisely the point where I wanted to discuss the concept of liability. So we know that gen AI models may be used to produce creative content, but they can also increasingly be harnessed for malicious purposes. A clear example: deepfakes. In case you’re not familiar with this concept, deepfakes are images or recordings that have been convincingly altered and manipulated to misrepresent someone as doing or saying something that was actually not done or said. And this may indeed lead to, for instance, a violation of that person’s fundamental right to honor. So in these cases, it is paramount to ensure a liability regime for effective victim compensation. However, traditional liability models have become obsolete with the introduction of these generative AI systems, and the solution to this issue only leads to more questions. Who should be held liable for this damage? How do we determine the imputable act or omission, not to mention the difficulties of establishing the causal relationship between them? Could it be necessary, for instance, to recognize legal personality for AI, so that we can make these systems liable for damages instead of persons, as has been done so far? Or will we have to find the origin of the anomaly to impute it to a natural person? For the vast majority of researchers and policymakers, the real unknown actually lies in determining who will be liable for damage caused by gen AI systems. Is it the producer, the user, the seller, or the machine itself? And consequently, who will have to pay the compensation for the damage caused? At the European Union level, for instance, the European Commission presented in 2022 two legislative proposals that try to address a number of amendments to the current European regulatory framework regarding the use of artificial intelligence and the compensation for damages. These are a proposal for a directive on adapting non-contractual civil liability rules to artificial intelligence, and a proposal for a directive of the European Parliament and the Council on liability for defective products. The latter was formally endorsed by the European Parliament during its March 2024 plenary, and it will now have to be formally approved by the Council as well. But these new rules bring the positive aspect of giving natural persons, not only consumers but individuals per se, the right to claim compensation on a strict liability basis from manufacturers of products, and in some cases also for components of products, therefore defining the concept of product much more broadly than the current legislation in place and, consequently, including AI software. So overall, we can say that there is still room to elucidate these new liability regimes, which will necessarily have to address these questions in the short term, due to the incessant developments in AI technology, including generative AI, in the industry.

Monika Gehner:
Thank you very much, Roser. And Natalie, I see you would like to jump in.

Natalie Domeisen:
Yes, I’m finding the panelists’ contributions super interesting. And I would just like to add, on the IP angle, that it really matters and changes from country to country. And there’s no commonality on these things yet; it’s the Wild West in terms of how this is all being developed. And the research that we’ve seen shows that the best thing you can do is document your content creation as you go along, because that’s very important in all of the different places that you’re going to be, and to understand that it’s different from place to place, and to check very carefully, particularly if you are developing gen AI systems, the copyright of any of the tools that you’re using in the process. And if you’re developing, you may even need legal agreements with the people you’re in partnership with.

Monika Gehner:
Thank you very much, Natalie, for that precision. You will find more in the publication that I mentioned before. Now, digital technologies also have real-life consequences; let’s not forget that. It’s not just about creating nice images and being funky. We need to look at what we use generative AI systems for. The mandate of the International Committee of the Red Cross is about people: helping protect people and save their lives in contexts of violence, of war, in fragile contexts. They deal with matters of life and death. So, Blaise, how does the Red Cross protect the privacy of the people that you are responsible for in some way, and their rights as data subjects, when using generative AI systems?

Blaise Robert:
Yeah, there is a lot packed into that sentence: what is happening in the digital space can sometimes have disastrous consequences on the lives of people. If we take, for example, mis- and disinformation, it has existed for dozens, if not hundreds, of years, been accelerated with the rise of social media, and now even more so with generative AI applied to the creation of content. The ICRC is working in conflict areas. And there, the impact of mis- and disinformation is not just that the feed in our social media apps is polluted or not as interesting to look at. It can have very direct consequences for the physical violence that is happening in the field where we work. So when we are thinking of using generative AI systems for content creation, and taking particularly the lens of data protection, we must keep these elements, the very reality of the risks that we are dealing with, in mind. Data protection questions are not abstract or nice-to-haves. The way that we process beneficiary data, or the data of affected populations, can have an impact on their lives. And a lot of the generative AI systems that we all use, if we take ChatGPT, Gemini, Mistral, or others, are typically commercial services, cloud-based commercial services, that come with certain licensing models. And there are terms and conditions that apply when you start inputting something into these systems. So when we start using them, we need to keep in mind what type of data we use with these systems. For example, if we are thinking of transforming a photo into a cartoon, because we didn’t get the consent of the person or for another reason, then we must remember that once we input that photo, we can, depending on the terms and conditions, lose control of what is done with this data. So generative AI is no exception among digital tools. And there is no shortcut that we can take, especially when it comes to data protection for people who find themselves in highly vulnerable situations.

Monika Gehner:
Thank you. Could we have a microphone for the speaker, please? Microphone for Monika Gehner. Would you please turn your microphone on? Halima?

Halima Ismail:
Yeah, hi. Hi, Monika.

Monika Gehner:
Hi. Sorry, did you hear my question?

Halima Ismail:
No, actually.

Monika Gehner:
Yeah, OK. Sorry. I forgot to turn on my speaker. So with your students, you are producing code for an application that detects heart attacks in people. So it’s also a matter of life and death. And you have to make sure that the code you’re creating, and the output it creates, is not putting potential patients’ lives at risk. So how do you produce that code ethically and responsibly? How do you teach that to your students, and how do you make sure, with methodologies and data, that the output generated is not putting people’s lives at risk?

Halima Ismail:
Yeah, thanks, Monika, for your question. In the data mining course, it’s crucial to prioritize the mitigation of bias in both the input and output phases. In the input phase, the first thing we do is diverse and representative data collection. To mitigate bias, we must ensure that the training data used to develop the medical AI system is diverse and representative. This involves collecting data from a wide range of sources, hospitals and demographics. By including data from different populations, we can reduce the risk of bias patterns in the input. The second thing we do is data quality and pre-processing. Ensuring data quality is essential. We carefully pre-process and clean data to remove any errors or biases. This step includes addressing missing data and normalizing data across different sources. The third thing we do is expert annotation and validation. Expert annotation and validation of the data play a vital role in reducing bias. Medical professionals with diverse backgrounds review and validate the data to ensure accuracy and minimize any potential biases that might arise from misinterpretation or subjective labeling. Now moving on to the output phase. In the output phase, we need regular evaluation and validation. Continuous evaluation and validation of the AI system’s output are crucial to identify and address any biases. This includes comparing the system’s predictions with expert diagnoses and clinical guidelines. Another thing we do is follow ethical guidelines and governance. In January 2024, the WHO, the World Health Organization, issued guidance to assist member states in mapping the benefits and challenges associated with the use of large multimodal models (LMMs), a generative AI technology, for health, with the goal of ensuring their responsible use to safeguard and enhance public health. Lastly, it’s important to remember that bias mitigation is an ongoing process. As new data becomes available and medical practices evolve, we must continuously update and refine the AI system to ensure that biases are minimized and patient safety is prioritized. So thank you.

Monika Gehner:
Thank you, Halima. So we have already looked a little at fairness, non-discrimination, inclusion and participation in the development of systems, do not harm, protection of personal data, safety and security. Are there any questions from the audience that you would like to ask, or even a comment? Because we’re also here to learn from you. You’re all users, maybe producers. And as I said, we’re steering the plane as it flies, quoting my Secretary-General. So are there any questions from the floor? Please, gentleman.

Audience:
Yes, sorry. Thank you.

Monika Gehner:
Tell us who you are and your perspective, your function, your organization, please.

Audience:
Okay, sure. My name is Alfredo Ronca. I’m from the CMEDIC framework. So I heard what you and your colleagues said about IPRs in this field. As I get into much more detail in this field, what I find quite interesting is that a major part of events and publishers, all such kinds of structures, are asking for some declaration in order to certify whether or not you used any AI in producing your content. And I think this is, on one side, to have an idea about the content itself, and it might even be in order to protect themselves from any future request for fees or something, but the fact is very vague. So, you referenced the publication dealing with these kinds of aspects. I think it would be interesting to know, much more in general, your idea about IPRs and generative AI.

Monika Gehner:
Natalie, can you speak to that?

Natalie Domeisen:
It’s hard to give an answer in just a minute, because it depends on the case and it depends on the country that you’re in. So if you look at the painting Girl with a Pearl Earring, which is on display in The Hague, there’s been a version done with gen AI, with lots and lots of inputs to it. Well, in some places they would say, oh, the copyright belongs to the more recent creator, and some would say it belongs to the original creator. And so it depends whether you’re in Singapore, China, the US, the European Union, and so forth. The World Intellectual Property Organization does have resources that you can click on, for small businesses or for general practitioners. And we do have them at the end of this very short article, if you want the specifics. And they have teams of people there who are rolling out information for people who want more.

Audience:
Sorry, I was asking this because we are dealing with a similar problem at the international level, having created some software applications to enhance creativity within the European programme for the CCIs. And we face this problem related to the different regulations and the different perceptions of what is called open source, in terms of software that could be merged with some open source, let’s say, performances (open source, I mean public domain performances) and so on. So it’s quite a tricky question.

Natalie Domeisen:
It is. And you need to get legal people behind it to check into the terms of the systems that you’re using, and to make partnerships with them; that is what WIPO is recommending.

Audience:
Thank you, madam.

Monika Gehner:
If I may read a comment that was made online. It’s actually the only comment we got so far, so I’m not biased in my selection. It’s from a colleague in the ITU regional office, I think Asia-Pacific, and he’s a former employee of Google and Microsoft. He says it’s very important, when we want to reduce bias, to provide, as users and developers, our feedback to the big platforms like OpenAI, Google and Claude. They are very keen to keep improving their large language models, and we can ask them to expand the data sets that go into their systems, especially related to vulnerable groups, as Linh mentioned before, and they should be able to do that. And I think you mentioned that to me as well when we prepared the session: it’s very important that we provide feedback, even when we see guidelines, we provide feedback. Because, as you said, everything is context-related, and we can’t respect all the ethical principles at once; we have discussed some of them today. We have to look at the benefits and the risks when we use these tools, and they are evolving very rapidly, so nothing is fixed. We can’t wait for a treaty; we have to have a compass to start with, and then we keep evolving, and we need the feedback from users on how to improve both the governance and the tools. So that was a comment from online. Please, any other question from here? Yes, Pippa?

Audience:
Hi, I’m Pippa.

Monika Gehner:
Pippa Biggs is also my colleague.

Audience:
Just a couple of reflections. Firstly, regarding Mark Zuckerberg’s recently appointed panel: it’s not just that they are all men, they are all male venture capitalists. So if ever you want to see the commercial incentive behind what he’s developing, you know, that was potentially rather significant. Vis-a-vis the IP, I did listen to WIPO’s dialogue on AI and its implications for copyright and IP, and obviously, as you rightly pointed out, Natalie, it varies from country to country. Some cases are going down case-law routes, others are using equitable principles, etc. But basically, the fundamental problem they were explaining is that copyright law relies on being able to prove provenance, the fact that material or text or images have been copied significantly, and that’s all the debates with music, soundtracks, who stole what from whom. But the thing about AI is that it’s all done in the style of. So how do these LLMs acquire the style of? Well, it’s by looking at all of Phil Collins’ songs online. But then what they actually reproduce, you can’t prove has been copied. So that was the real challenge for the copyright lawyers. Some of them seemed rather depressed; they realised that generative AI just smashes up 100 years of copyright laws. Others were clearly looking forward to prolonged and protracted court cases to decide what really was copied or not. But yes, it’s certainly posing massive challenges.

Monika Gehner:
And Natalie, you also mentioned to me before something about investors and the design phase.

Natalie Domeisen:
Yes. Of course, the AI guardrails are important. But for me, what matters is what attention executives are putting into this. How are they framing the questions around gen AI? How are investors, how are funders for projects doing it? People will follow where the money is. If you follow the money, then you know that’s where things are going to start to happen, and right now the money has been in things like ChatGPT: how many billions of dollars, 13 billion or something, have gone into that. And I’m thrilled to see these young women online who are developing their own approaches to counter the bias, which is coming from the United States, I have to say, because they have the power and money around these technologies. And the question is, where are we going to have the partnerships and the funding to come together? And we must have the leadership to keep our attention on the important issues of the day, and not just use it as a distraction. So it really starts in your corporate mission, your company mission, in the strategic direction of where you’re going. AI should serve you, and we shouldn’t be running after AI, but it’s a challenge.

Monika Gehner:
That’s actually one of the principles: purpose and proportionality. Lady in the back, please, you had a question.

Audience:
Yes, thank you. My name is Margarita Lukavenko. I’m from the Edu Harbor company, and I’m now doing an MBA in educational leadership at a Finnish university, so we’re discussing the topic of AI quite a lot. I come from the educational sector, and I’m really curious to know your opinions about what we can do with education and AI, because right now schools, educational organizations and teachers don’t really have any support or guidance in AI usage, and as we speak, learners around the world use AI for different purposes without much guidance. So I’m really curious to hear your opinion. Because I know we’re short of time, I will stop my question here, but I would really like to roll this discussion further, because, as you said, the discussion happens where the money is, and in education it’s usually in the products, but education is what creates the future society and leaders. Thank you.

Monika Gehner:
Perhaps I can ask Halima to jump in, because she works in education as a part-time lecturer. Halima, would you like to say something?

Halima Ismail:
Yes. We are facing a problem: there is not an easy curriculum for them. We can’t introduce AI to them in an easy way. So I think there should be a guideline, or a special team to develop the curricula for them. That’s my opinion. We can also say that there is a problem where, usually, when we say AI, they move to the Internet of Things. They confuse AI with the Internet of Things; they say that the Internet of Things is the same as AI. So I think there should be a method or a specific curriculum for this problem.

Monika Gehner:
I also heard the Director-General of the ILO say this week that lifelong learning will be key. Reskilling, upskilling, reorienting, etc. So this is a big field, and I think IMD is looking into this here in Lausanne, as are many universities.

Audience:
Yes, if I could follow up on the lifelong learning. Absolutely, it’s what’s happening now and in the near future. But lifelong learning also puts more responsibility on the learners themselves to keep on learning, which takes away the collaboration that we actually need now to educate ourselves and the future generations. Thank you all for your answers.

Monika Gehner:
Please, Natalie.

Natalie Domeisen:
Thank you. Trust and the loss of critical thinking, to me, are the two biggest dangers that we’re facing in the world with what’s out there. And if you are in the world of education, you have to go out of your way to counter the widespread use of ChatGPT. Of course, there are detectors and so forth, but you need to come up with assignments that involve group learning, interviewing, video skills, all sorts of things like that, and indeed prioritize online learning throughout life. And agility and flexibility in people’s skillsets, that needs to be taught along the way as well.

Monika Gehner:
Yeah, thank you, Natalie. That was actually one of the points we wanted to make later on, which is human oversight: autonomy and human oversight. Don’t give it to the machine, don’t become complacent and say, okay, now everything is done by the machine. You need to have human oversight, and to build trust, because trust is your currency. And trust builds legitimacy, and trust builds power, convening power. So before I hand over to you for the final part: Linh had raised her hand, sorry, some time ago, and we may have moved on in the topic. Linh?

Linh Tong:
Thank you very much. And yeah, just a short comment back on one of the comments, that the advisory board of Meta is composed only of venture capitalists. I just want to emphasize once again the importance of ensuring inclusiveness and diversity throughout the life cycle of AI. In the case of Meta, they do have well-recognized diversity and inclusiveness in their AI oversight board, but in their newly established advisory board, that inclusiveness and diversity is not there. So it is an example of not ensuring these principles throughout the lifecycle of AI. If we really want AI to work for the people by the principle of respecting inclusivity and diversity, then we must integrate that throughout all the stages of AI development. Thank you.

Monika Gehner:
Thank you very much, Linh. We have another question online. I don’t know, Halima, if you could answer, because you work on algorithms and data sets: can we correlate possible algorithmic bias with human bias? Is there a correlation? Do you know?

Halima Ismail:
Can I? Yeah. So we can solve this through the input; it’s based on the input. For example, if we are detecting a heart attack, yes or no: if we give the model, or the algorithm, more “no” examples, the output will usually be “no”. So it’s based on the data. There are also algorithms that are biased, so we should read about and understand the fundamentals of these algorithms.

Monika Gehner:
But the question was also: is algorithmic bias correlated with human bias? Okay, all right. I understood it as: is there a science around it that says the algorithm is biased and so on, and people are biased? But okay, that’s a field to plunge into. Pippa, do you have any insights into this? You raised your hand.

Audience:
It’s also the data, right? It’s not just the algorithm. And in a sense, that’s the big criticism: that the algorithm is very accurate because it reflects how the world is. So even my daughters managed to giggle a bit about the culturally diverse World War I squaddies, because they knew full well that Asian women weren’t dressed up in the trenches. So even my kids know that, but others won’t. So that’s the big question: if the algorithms are precise but reflect back real-world discrimination, then you can make them ethnically diverse, but it might not actually make sense for what we know about the world.

Monika Gehner:
So I have one question for Blaise. We heard about autonomy, human oversight, perhaps a correlation between algorithmic bias and human bias, but definitely we as humans have to control what any machine gives us to work with, because we mentioned trust before as well, which is paramount also for the Red Cross. And we mentioned your fragile contexts before. So how do you balance the need, perhaps, to use generative AI and your responsibility in fragile contexts? So innovation and responsibility: how do you do that when it comes to trust?

Blaise Robert:
Yes, I would first question that need to use generative AI. That’s really the starting point of the reflection, which is: we shouldn’t use a tool just because it looks fancy, because of fear of missing out, etc. Of course, we should also acknowledge the cost of missed opportunities, but first we need to start with the needs. And the need is there: we can produce content faster, and it can help us to produce high-quality content. But we have to start from this point, rather than just playing with the new cool things. So I think the way to really bridge the gap between all of the outstanding questions, which we heard from the experts, and the opportunities, which Natalie also touched upon, is by having a principled approach to innovation, one that starts with the need, acknowledging that failing is okay, but that it’s not okay to fail at others’ expense, especially when we’re failing with the populations that we work for. So: creating a space for that type of innovation, that type of trial and error, while making sure that the risks are taken by the organization and not at other people’s expense. So yeah, it all boils down to trust, and if we want to maintain that trust, we need to equip colleagues through education programs, and we need to form communities of practice where they can share. And once we have that in place, once we have equipped the people, then we can also highlight the fact that there is individual responsibility involved in how those tools are used, so that we don’t functionally push the accountability down onto the machine, but keep humans in the loop, at the decision point, and accountable for how this technology is used.

Monika Gehner:
Thank you, Blaise. The first people are leaving, but we’re behind schedule because we started late. Before we go to the Mentimeter then, please.

Audience:
My name is Silvia Ortega. I am one of the winners of the WSIS prize concerning education and youth, and my question is related to legislation, because I understand that we have to do our best, we have to trust, we have all these general ideas, but what is the legislation which will protect children on gen AI and all this methodology through education? So this is my question.

Monika Gehner:
Roser, would you be able to address this with your law background?

Roser Almenar:
Yes, sure. Thank you for the question. Actually, legislative action is, of course, necessary to cope with all these technological advancements, and with that perspective on human rights. I think this is the main challenge of our time: trying to protect, as I was mentioning during my intervention, for instance, the right to honor. And deepfakes and other generative AI outputs can have a special incidence on children’s rights and child protection. So I think legislators should always take the human rights perspective into consideration when enacting legislation. And I think that is something that, for instance, here in the European Union is being considered with the European AI Act. Looking at other jurisdictions, they are also trying to take that human rights perspective into consideration when regulating the different AI models that are on the market. So it is, of course, a challenge for legislation and legislators to come up with the best regulatory practices in this realm, because this is all new for everyone. But I think at least here we are on the right path. And of course, with upcoming technology there will be upcoming challenges, and we will have to work hand in hand with engineers and people with technical backgrounds in trying to protect human rights, especially those of children.

Monika Gehner:
Thank you. So there are still many questions open. And we heard it in the past days, when the UN here, and many at AI Governance Day, were discussing what governance could look like: human rights was mentioned, but we’re still in the early stages of that. I’d like to go to the Mentimeter now. So you heard lots of principles, and you said, Blaise, a principled approach. We heard trust, human oversight, ownership, inclusion, right to privacy, protection of data. And we would like to hear from you which ones, you have several options, are important to you. So you’ve got some insights into these, and we have two questions. The first is: which of the principles we give you are important for you? Ulvia is sharing the QR code, also for those following us online, please, so we have some insights from you on this as well, data-driven. And we framed it as a publisher on social media, because all of you are probably on social media; even if you’re not a developer or a publisher, you are exposed to the potential use of generative AI in content creation. Okay, let’s stop here, or continue. So, really outstanding are do not harm, safety and security, and right to privacy and protection of personal data, which all come down to the safety of people out there, of societies out there. These are not nice-to-have things, they’re must-have things. So, safety and security at the top, privacy and protection of personal data second, and then do not harm, and fairness and non-discrimination as well. So thank you for this one, and we have another Mentimeter question. Beyond the aspects that we just listed, we would like to give you room to put down your own principles, or things that you think should be considered when using generative AI in content creation. So here, free flow, but you may also repeat what you heard in the panel discussion. Okay, so what we see here is, for the moment, regulation standing out. Now, what we discussed today is rather soft regulation, norms and values that drive responsible, ethical use. But sustainability stands out, and that’s a subject we did not discuss today because we did not have the time; the use of generative AI produces a lot of CO2. And that also brings us to what Blaise said before: we really have to think about why we use it. Is it really necessary when we use it? What’s the benefit? The proportionality, for harm, et cetera, but also for sustainability. Other things here: transparency. Yes, we did not touch upon that today; it’s another ethical principle. Transparency: for instance, you may be aware that a human rights agency used a photo produced by AI and did not disclose that it was AI, and that really made headlines and was not positive in the country that was the context. So: disclosing when you use AI. Respectful creativity, a tailored approach, context-specific trust, duty of care towards the people we communicate for and with. So thanks a lot for all of that. I’m taking a photo again. We do have a draft of principles for the responsible use of AI in content creation, along some of the ethical principles we discussed today, but foremost along the ethical principles that the UN has agreed on for the use of AI in the UN system. So there are several ethical principles, and we are using the same in ITU to come up with ethical principles for staff using these tools. And if anybody would like to feed into this, you can direct message me on LinkedIn; my name is Monika Gehner.
And because you heard it today: inclusivity and diversity in the development of governance are very important. And here, there is a colleague from FAO who came forward and would like to work on that. But anyone else who would like to feed into this with their perspective is most welcome. So thank you very much to all the panelists today. This was just scratching the tip of the iceberg, and I wish you good judgment in using the tools when you do, and I hope you could take something away from today. Thank you for coming.

Speaker statistics

Audience: speech speed 139 words per minute; speech length 962 words; speech time 415 secs

Blaise Robert: speech speed 181 words per minute; speech length 738 words; speech time 245 secs

Halima Ismail: speech speed 156 words per minute; speech length 605 words; speech time 233 secs

Linh Tong: speech speed 144 words per minute; speech length 782 words; speech time 327 secs

Monika Gehner: speech speed 146 words per minute; speech length 3325 words; speech time 1366 secs

Natalie Domeisen: speech speed 183 words per minute; speech length 1207 words; speech time 395 secs

Roser Almenar: speech speed 152 words per minute; speech length 855 words; speech time 336 secs