“Re” Generative AI: Using Artificial and Human Intelligence in tandem for innovation
28 May 2024 09:00h - 09:45h
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Full session report
Experts tackle the complexities of AI in dynamic panel discussion
During a dynamic panel discussion on artificial intelligence (AI), experts from various disciplines, along with engaged audience members, explored the complexities of AI, including its practical applications, ethical implications, and the challenges it presents. The session was chaired by Moira de Roche and featured notable contributions from Eliezer Manor, Joyce Benza, Anthony Wong, and others.
Eliezer Manor, an entrepreneur and venture capitalist, shared a case study demonstrating the synergy between human intelligence and generative AI in fostering innovation, especially in high-tech entrepreneurship. He described an iterative process of posing questions to generative AI and using its responses to develop creative ideas. This approach was exemplified by his explanation of how questions about photosynthesis led to an innovative idea for industrializing the process.
Joyce Benza from Zimbabwe discussed the application of AI in the financial services sector, focusing on data analytics and machine learning. She outlined three levels of AI tools: Python-based custom applications, no-code platforms with optional coding, and fully facilitated AI tools. Benza emphasized the significance of data cleansing and strategic planning in ensuring the accuracy and utility of AI applications.
Throughout the discussion, ethical considerations were paramount. Participants expressed concerns about intellectual property rights, the potential for AI misuse, and the need for regulation. The panel highlighted the necessity of human oversight and critical thinking when interacting with AI, as well as the integration of ethical guidelines into all stages of technology creation and implementation.
The capabilities and limitations of generative AI were debated, with some questioning whether it represents genuine intelligence or is merely a sophisticated statistical tool. The impact of AI on society, particularly in terms of knowledge management and the potential degradation of internet content quality, was discussed. There was consensus on the need for human-AI interaction, emphasizing the hybridization of AI and human creativity for innovation.
Noteworthy observations included the importance of educating people about AI from a young age, highlighting the need to understand AI’s use and limitations. The discussion also touched on the ubiquity of AI in everyday life and its potential to augment human capabilities.
In conclusion, the panel agreed that while AI holds significant potential for innovation and can be a valuable tool in various sectors, it requires careful management, ethical oversight, and an understanding of its limitations. Human judgment and context remain crucial for the effective use of AI. There is a need for a balanced dialogue that includes ethical considerations, the impact of AI on society, and the development of human capabilities alongside technological advancements. The session underscored the importance of vigilance and critical thinking to ensure the positive use of AI and mitigate potential negative consequences.
Session transcript
Moira de Roche:
So she has deep knowledge of artificial intelligence and certainly knows the good, the bad and the ugly of AI. Then we have Joyce Benza, who comes from Zimbabwe, and she also has some use cases around AI. I’m Moira de Roche, I’m the chair of Art of Artificial Intelligence, and I’m going to introduce Joyce Benza to you. We also have in the room Mr. Anthony Wong, who is the president of IFIP, and joining remotely is Eliezer Manor, who is on our Global Industry Council. He is himself an entrepreneur, but spends a lot of time coaching and helping entrepreneurs to get going. So what he’s going to look at today is a case study of how he has used generative artificial intelligence with human intelligence for innovation. Eliezer, can I hand over to you, please?
Eliezer Manor:
Yes, I’m with you. Can you hear me?
Moira de Roche:
Yes, we can. We can hear you loud and clear and see you quite clearly. Thank you for being here.
Eliezer Manor:
Okay, I’m far away.
Moira de Roche:
Yeah. But thanks for hanging in there. Right. Eliezer, can I hand over to you? Let’s see if we can sort out the video. So over to you, Eliezer. Do you want to say more about yourself or more about what you’re gonna talk about? Please feel free.
Eliezer Manor:
Yes, can I start?
Moira de Roche:
Yes, please start.
Eliezer Manor:
Yes, okay. Good morning to everybody in Europe. The topic of my pitch is how to use artificial intelligence to facilitate and improve the human innovation process. My name is Eliezer Manor, I’m from Israel, and I’m part of the Global Industry Council and also a partner in Reds Capital Venture Capital. Generative AI became a standard tool that uses artificial intelligence, large data banks, and fast access to them to provide comprehensive information on almost any topic one can think about. We coined the term regenerative AI to describe an interactive and iterative process between a human being with human intelligence and generative AI, for the purpose of developing a creative new idea for high-tech entrepreneurship. For this purpose, we developed a simple procedure consisting of a sequence of converging human questions answered by the generative AI. We use the technique to teach and train MBA students in classes on innovation thinking, to crystallize an idea for developing a new product for high-tech entrepreneurship. However, this technique can be used for many other purposes. It is a regenerative hybridization between computer artificial intelligence, which has a very high IQ, higher than our IQ, and human emotional intelligence, the EQ. The human being is controlling the process. I will present a specific example to show the convergence process in a particular case. So the process is the following; it’s an iterative process. We start with a first curiosity question to ChatGPT, and I’ll give an example: why are the leaves green? From the answer, we can learn the following. We can learn about chlorophyll and spectral light absorption, which makes the leaves look green. We can learn about the role of photosynthesis, and we can learn that the plant is a chemical-biological reactor which converts light into chemical energy to fuel its growth using glucose. After I learned that, I asked the second question to ChatGPT: what is chlorophyll?
The answer says the following. It shows me what the functionality of chlorophyll is, and it also teaches me about the task of chlorophyll in photosynthesis, which produces oxygen out of carbon dioxide. After I learned that, I asked the third question: what is photosynthesis? From ChatGPT’s answer, I can learn the following. First, that photosynthesis is the number one producer of oxygen in the atmosphere, and second, I can learn about the importance of photosynthesis. Photosynthesis is the energy source for plants, generating glucose, producing oxygen, and removing carbon dioxide from the atmosphere. My insight from this answer, looking at the findings about removing and using carbon dioxide to generate oxygen, my inspiration was that we may think about imitating the natural process and industrializing it, as an answer to the SDG related to climate change caused by the large emissions of carbon dioxide into our atmosphere. The next question that I asked ChatGPT: is the photosynthesis process fully understood and industrialized or commercialized? The answer I received is: not yet. Attempts are being made, but the efficiency is still low. There is room to improve. For this purpose, we must understand why the attempts failed on efficiency and what the meaning of it is, because we have enormous sources of light. So this is the iterative way that I came to the conclusion of what innovative idea I can come up with. And the innovative idea I came to from this sequence of questions and the dialogue with ChatGPT is that I can improve the industrialization and commercialization of the photosynthesis process that is happening in plants. So this is the sequence, and this is the example that I wanted to use to show you how, from a simple question, why are the leaves of plants green, I come to the idea of industrializing and commercializing the photosynthesis process. Thank you.
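The converging-question procedure Eliezer describes can be sketched as a simple loop. In the sketch below, `ask_generative_ai` is a hypothetical stand-in for a real ChatGPT or API call (its canned answers only paraphrase the leaves example above); the point is the structure: the human chooses each successively narrower question, and the AI supplies the information.

```python
# Sketch of the "regenerative AI" loop: a human-chosen sequence of
# converging questions, each one informed by the previous answer.
# ask_generative_ai is a hypothetical stand-in for a real LLM call.

def ask_generative_ai(question: str) -> str:
    # Stub responses paraphrasing the photosynthesis example.
    canned = {
        "Why are the leaves green?":
            "Chlorophyll absorbs light for photosynthesis.",
        "What is chlorophyll?":
            "A pigment that drives photosynthesis, producing oxygen.",
        "What is photosynthesis?":
            "Conversion of CO2 and light into glucose and oxygen.",
        "Is photosynthesis industrialized?":
            "Not yet; efficiency is still low.",
    }
    return canned.get(question, "No data.")

def regenerative_session(questions):
    """Run the human-driven iterative dialogue and keep the transcript."""
    transcript = []
    for q in questions:  # the human supplies each converging question
        transcript.append((q, ask_generative_ai(q)))
    return transcript

# The human reads each answer and chooses the next, narrower question.
dialogue = regenerative_session([
    "Why are the leaves green?",
    "What is chlorophyll?",
    "What is photosynthesis?",
    "Is photosynthesis industrialized?",
])
# The final answer ("not yet, efficiency is low") is the opening for
# the innovative idea: industrializing photosynthesis.
```

The loop itself is trivially automatable; what is not automatable, on Eliezer's account, is the choice of the next question, which stays with the human.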
Moira de Roche:
Thank you, Eliezer. Does anybody at this point have any questions for Eliezer, or want to ask him to perhaps explain a bit more how the interaction will work? Where’s the person who’s supposed to be helping us? Sorry, Eliezer, I’m just, just ignore me. Questions from the room? Yes, I want questions from the room. Thank you, Eunika. Don’s got a question, I know. Yes, I know, I know, I know. So, Eliezer, the one
Don Gotterbarn:
question I have is, you have to bring some background knowledge to ask these questions that you have. How is it that you generate these questions if you’re asking about a subject you know nothing about? So, if I ask a system about quantum physics and it gives me an answer, how do I then come to it and know what other questions to ask?
Eliezer Manor:
I think that every question, any question, let’s take another question. Why is the sky blue? Okay, it’s a very simple question that any child will ask. Why is the sky blue? I think that from this question, if you know how to ask the questions after you get the first answer, you come to the polarized light that’s coming from the sky. And then you can come to the innovative idea of developing polarized sunglasses, for example. This is a process. Now, it’s not an automatic process. It cannot be an automatic process. It is a hybridization between artificial intelligence, which has a much higher IQ than any of us has, and the emotional intelligence that only we, the human beings, have. We are creative, and we come and we decide what kind of questions to ask. And these questions must converge once we ask and get the answers, because we do not have the answers. The AI has access to a lot of data, and immediate access to this data, to provide the answer.
Audience:
Sorry to interrupt. Can I ask that we hear the other panelists, because we have only 20 minutes left for this session? We haven’t... We’ve got longer. We only started late because we only got access to the room late. It will be stopped at 9.45. No, we won’t. We won’t. I’ve been assured that you can actually... Thank you. In order not to open the discussion, just to hear the others. Okay.
Eliezer Manor:
Did I answer the question?
Moira de Roche:
Just so, yes, Eliezer, just one moment. We can stay in this room till 11. Now, I know you’ve probably got other sessions you want to go to, but we don’t have to be out by 9.45 or even 10, because we were disadvantaged by another session being in here. So we have time, but do you want to ask a question?
Audience:
No. Okay. Just the proposal for listening to the others.
Moira de Roche:
Okay.
Audience:
Eliezer, my name is Nata Glaser. Thank you for speaking with us today. You said that right now there’s a limit between what AI can achieve on its own and therefore it needs human interaction. Do you think at one point AI will develop enough to not need human interaction to develop new products?
Eliezer Manor:
If you regard human intelligence as IQ, yes, the computer is much better than we are. But if you regard human intelligence as emotional intelligence, I don’t see when and how the computer can replace a human being. And I think the power, our power, is a hybridization between these two qualities: the high IQ of the AI and our emotional intelligence. Take, for example, Einstein. Einstein did not have access to artificial intelligence, of course, and not to generative AI. Now, I believe that people with the EQ of Einstein are many among us. The question is, how can we use our emotional intelligence, our creativity, in order to use AI? And I think that we have many more capabilities and opportunities in our lives than Einstein or Freud had a hundred years ago, or maybe even less. Thank you.
Moira de Roche:
Thank you, Eliezer. Sorry, I’m just going to leave the meeting on this machine because otherwise we’re going to have... Because, oh yeah, I’ve connected, but you see. Thank you. Let me disconnect. They were going to bring me a stick so that I can... Just let me leave the Zoom meeting, please. Joyce, I wonder if it would be a good time for you to come in, because you’re going to talk about machine learning, which plays into this.
Joyce Benza:
Thank you, Moira. Good morning, everyone. I’m coming from the practical side, where I’ve applied AI in the financial services sector: insurance, health, and areas like that. So we’re looking particularly at three levels of AI-facilitated tools. The first level is where you use Python and you develop your application to make sure that you are able to do data analysis. So data analytics and analysis is the area that I come from. The other level is no-code. No-code means that there are tools that are available, and some of them you may know: Jupyter, Qlik, and so on. When you use the no-code tools, you can apply a bit of code from Python if the vendor allows you to. So that means there’s a bit of code and a bit of no-code. But you can also get tools that are 100% facilitating your AI. That means that as an organization, you are deciding. So when we talk to our clients, we are saying: what is it that you want to do with your data? As you may know, in insurance, pensions, and health, there are lots of sensitivities around data. So in most cases, some of them are choosing for us to go custom, which means we’re using Python. When you use Python, all you are doing is creating your pandas DataFrames, for those of us who have a background in how Python works. You can actually select the rows and the columns in the data, depending on the data structure. When you do that selection, you are enabling your client to have data that is structured in such a way that you can now say, okay, it could be a huge database, or you’re talking big data analytics in a way. So you are selecting the data. It can either be text, or you are looking at visualization where you’ve got images. That then enables you to say... so I’m just looking at the health side.
You may be registering, and you’re registering with your ID, and you may also have some other visuals coming through that you want to identify using AI. In all that process, you’re using machine learning or deep learning. When you do that, you are able to identify the information that’s needed by your client. You start with a strategy that says: what do I want to do with my data? From that strategic planning, you already know what columns of data, what information, is spread across. You’re also looking at the accuracy of the data. So you start with data cleaning; some will call it data cleansing. You can also be using Python to go in there and look at where the errors are. In a specific area you might actually have, like where I come from, names like Moyo or Duwe or Nguwe. There are so many of them. So if you’re using surname as your selection criterion, you might not get it right. You then have to drill deeper and look at maybe date of birth, and maybe look at your biometrics, facial looks, and so on. When you do that, you are using the same sort of facility, but you are enabling your client to have the information that they require, using specifics. Python, for those that have programmed in other languages, is not that difficult, because all you’re doing is making some selections using rows and columns, and you can get that right. So my advice would be: if you want to go into artificial intelligence and you want to get the information right, especially when you’re dealing with data, start with data cleansing. You use your machine learning to do the cleaning. That means that your data then comes into the format that you require. Once you’ve done that, you are able to manipulate it further, because now you know that the information that’s there is correct, or is formatted in the way that you want. You can then drill further down.
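Joyce's point about ambiguous surnames and drilling down to date of birth can be illustrated with a small sketch. Real work would typically use pandas row/column selection, as she notes; this version uses only the standard library, and the field names and records are illustrative assumptions, not any real schema.

```python
# Sketch of data cleansing plus disambiguation: normalize records
# first, then select by surname, and fall back to date of birth when
# surnames collide. Fields and sample data are illustrative only.

def clean(record):
    """Basic cleansing: trim whitespace and normalize name case."""
    return {
        "surname": record["surname"].strip().title(),
        "dob": record["dob"].strip(),
        "id": record["id"].strip(),
    }

def find_member(records, surname, dob=None):
    """Select by surname; if several members share it, drill down by DOB."""
    matches = [r for r in records if r["surname"] == surname.title()]
    if len(matches) > 1 and dob is not None:
        matches = [r for r in matches if r["dob"] == dob]
    return matches

raw = [
    {"surname": " moyo ", "dob": "1980-04-12", "id": "A1"},
    {"surname": "MOYO",   "dob": "1992-11-03", "id": "A2"},
    {"surname": "Dube",   "dob": "1975-06-30", "id": "A3"},
]
members = [clean(r) for r in raw]

# Surname alone is ambiguous; date of birth resolves it.
ambiguous = find_member(members, "Moyo")            # two matches
resolved = find_member(members, "Moyo", dob="1992-11-03")
```

The same two-step shape, cleanse first, then select with progressively more specific criteria, is what the pandas version would do with `DataFrame` filtering on rows and columns.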
You can use Power BI and any other tools to make sure that you can have dashboards, or whatever other formats of information you require. Machine learning, AI, has actually made it easier and faster. In your strategy, you would have said: do we want to improve productivity, or augment it, say, or do we want to enhance our marketing, which is where some of your visuals will come through, using the various tools that are available. Your social media has lots of visuals there, so you’re able to pick those up as well, because you’ve now characterized or categorized your data, and you know that this is text and these are visuals, and you can integrate the visuals as well as the text to make it more meaningful. Some of the tools that are available you may have to pay for. But in some cases where it’s coding, you just have to have the right version of Python, if you’re going to be using Python. As long as on your PC or server you’ve got Python sitting there and it’s the correct version, you’re able to drill down, and depending on your APIs, in some cases you may have to buy the API because it’s not part of the suite, depending on the systems that you’re running. Basically, all I’m saying, in conclusion, is that you can actually customize artificial intelligence. It’s a simple process, simple in quotes, in the sense that if you’ve got that kind of background, where you’ve programmed a bit or where you’re prepared to learn the programming, you can actually get it right, and you can either have a mixture of the tools that are already readily available, or you can do your own custom-made AI solution. Thank you.
Moira de Roche:
Thank you very much, Joyce. We’re now going to try to play the video, so pray to whichever God you pray to that it works, because it’s good and it’ll introduce you to a lot of tools.
Stephen Ibaraki:
Thank you. It’s such a pleasure to be here. I’m going to demonstrate the power of generative AI. Here’s an example with Suno AI, where just through some simple prompting, I was able to generate the music, the lyrics, the singer’s voice, and even the graphics. Here it is. I’m going to give some more demonstrations of what I call the tenth machine age: investments in new-age innovations that are fundamental, changing and very disruptive across business models, and government, and education, and really radically shifting the meaning of life, society, and culture, and impacting everything out there, empowered by this double convergence, or multiple convergence, I should say, these double exponential technology transformations. So are you ready? And do you have a fear of missing out, or fear, uncertainty, and doubt? Here I’m going to give you a demonstration of some of the video capabilities. This is Pika; you can prompt it, and it generates these graphics here with some sound. And of course, OpenAI and Sora. And then finally, something from a Microsoft resource called VASA-1. And I should mention, with VASA-1, Microsoft won’t release it until they feel that they’ve got enough safeguards in the marketplace. VASA-1, by the way, could be used to create custom companions. It could be used for people who have communication challenges, and it could be used in education. Look at how photorealistic this is. And even the waves breaking, the grains of sand here, the shadows in the eye, these toy ships and a coffee cup, the grain, the hairs on the dog, all of these TV sets. And then again, the shadows in the train. And now this is VASA-1. Everything here is created from-
VIDEO:
Have you ever had, maybe you’re in that place right now where you want to turn your life around and you know somewhere deep in your soul. And so we’re doing the best we can with what is available. And if you want to turn your life around, I want- ♪ Talk myself in, I talk myself out. ♪ ♪ I get all worked up, then I let myself down. ♪ I would say that we as readers are not meant to look at him in any other way but with disdain, especially in how he treats his daughter. You had me on one of the most fun experiences of my life as I was on the show in Chicago. You know what I decided to do? I decided to focus. Listened, listened, and listened. Because I’m a true believer that- We introduce VASA, a framework for generating lifelike talking faces with appealing visual affective skills, given a single static image and a speech audio clip. Our model is capable of not only producing lip movements that are exquisitely synchronized with the audio, but also capturing a large spectrum of facial nuances and natural head motions that contribute to the perception of authenticity and liveliness.
Stephen Ibaraki:
This is D-ID. You can actually use the agents. You can train it: you can upload files and other things to train it, and then you can ask it questions. Here’s a demonstration. I’ve loaded D-ID, and now I’m going to click the Create button. IFIP, the International Federation for Information Processing, is a global organization for researchers and professionals working in the field of computing. You can also use OpenAI and their custom GPT Builder technology. However, to have full access, you have to have a subscription to OpenAI. So here’s a demonstration.
VIDEO:
So it’s really an honor, Stephen, to work with you and our other members of our board and Aegis Trust, the government of Rwanda, the Peace Action Network of YPO, the youth of YPO, and many other collaborators, from Worst Life to WorkBay. We’re working with Junior Achievement and other organizations to train a million youth as peacemakers this year. And, of course, leveraging technology to scale. PeaceGPT is already out there now. It’s trained on Nonflict. We are absolutely going to use the best of technology out there as it’s growing, as it’s evolving, to scale even further, to drive greater impact. So you can use the Explorer function to find this app called PeaceGPT, and then here I’m asking it questions about a problem I’m having with conflict. It sounds like you’re going through a tough time with your partner. I can help guide you through a process based on the principles of Nonflict, a method for peaceful conflict resolution outlined in the book you provided. So I want to thank you, and allow me to share some of my insights and demonstrations here, and I’ll pass it back to my colleagues at the World Summit.
Moira de Roche:
Thank you, Stephen, remotely. We wanted to show you that so that you were perhaps introduced to some new tools. Some of them, as Stephen mentioned, haven’t even hit the market yet. But at this point, I want to ask: is there anything you want to try out while you’re here? If you encounter difficulties, we can perhaps help you. Or do you have any other questions right now?
Audience:
My name is Vadim Brak. I work for the Council of Europe. This was very impressive, and I don’t want to sound like I’m not supporting innovation, but it looked a little bit scary. I wanted to ask Stephen a few things, personally, as somebody who was really involved in this sphere: whether you need any rules on this at all. Because manipulation of images of somebody else raises really serious concerns. That would be my question.
Moira de Roche:
And I think I’m going to ask both Eunika and Anthony to comment, and I just think you’re right. What I believe is we must approach it with healthy skepticism, but not be too scared to use it, because if you don’t use generative AI, you’re really doing yourself a disservice. I always say it’s like Google on steroids. We all use Google to search without worrying too much about it. We were less concerned about using Google, whereas Google gives us a whole lot of results that may be good or bad, whereas with generative AI, if you’re clever with your prompts, you can get good results, and you can keep interrogating. Eunika, can you answer the question, please?
Eunika Mercier-Laurent:
Okay, first thing: generative AI is not a magic wand. It is exploring the whole of the data. In the data we have obsolete data, as you can see when you consult Google. We have obsolete data, we have false data; then we need critical thinking. If you ask a question of ChatGPT, now we have GPT-4o, which comes with dialogue, with emotional intelligence, then it can be tricky. You can think it’s true. It’s not true. It is just exploring the existing data, and it can be wrong. For example, someone asked it what books I wrote, and in the list was a book I never wrote. I demystified this: it was a used book selling on the web, and the cover was missing. They searched on the web and found me, because I was the first Google answer, and put my name as author. I’m not the author. Think about this. It’s not a magic wand. You have to use your critical thinking. It helps a lot. Of course, sometimes you have your brain, you have your knowledge, you have around you the people who know, and you can ask the people. It is our way of preserving humanity. If not, we will be brain-off; without using our brains, our cognitive capacity may decline, and we will be like robots, searching on our smartphones, walking down the street like that. And is that the world you want?
Audience:
I’ve also used AI tools multiple times. I’m a biomedical engineer; I’ve used them in developing medical devices. I’ve used a couple of different tools, mainly Bard and ChatGPT. Even if you ask the right questions, sometimes it comes up with completely absurd answers, so it’s always good to cross-check the responses you get.
Eunika Mercier-Laurent:
This is the art of asking the question, and we have to develop it. I teach this in Paris, in a PETA engineering school: how to ask the right question of generative AI. I was using it, for example, in one case, to compare the results of multiple clinical trials. Sometimes it would just make up the results of the clinical trial, and I would go back and read the paper and see what the results were, and it was completely wrong, regardless of how I asked the question. The second thing: when you ask the second question of generative AI, generative AI learns from you. It’s updated, and it’s available for the next search.
Don Gotterbarn:
We looked at two different things here. Stephen presented two different things. One was the generative AI thing, and the other was VASA-1. Now, VASA-1, I presume, was the great concern that you had, and there are wonderful phrases of language. When I’m selling it to you, I can say: we turn the head, and this contributes to the perception of authenticity. Now think for a moment what “contributes to the perception of authenticity” means. I lied, but I’m going to look good to you.
Eunika Mercier-Laurent:
It makes me think about machine learning. Someone mentioned machine learning. Machine learning is over 40 years old. It is a science probably introduced by Richard Michalski at Chicago University after he moved there, and some other people, and machine learning is what we give to the machine.
Don Gotterbarn:
Right, and your false data when you made up this question, your medical question became part of the data that the machine learning tool now uses.
Moira de Roche:
Yeah. Yeah. Thank you. Anthony, there’s a question specifically for you, although they don’t say so, but it’s one that’s in your ballpark: generative AI is really good for innovation, but one argument is in the intellectual property area. Who will own the rights?
Anthony Wong:
I’ll start with answering the first part of the question. Tomorrow afternoon, UNESCO and I will be having a session on the use of AI in the judiciary, and that will have some of the answers for you. But what I’d like to do is to thank you, Council of Europe, for your recent treaty that you’ve done for the world. A number of countries are very interested to know that there are now regulations coming in to regulate some of those things. So, as you know, technology, including AI and ChatGPT, is a tool that can be used for good as well as for bad. That’s why we’re having a session later this week, AI for Good. So unfortunately, this technology that we saw with VASA can be used for positive things, but it can also be used for fraud and many other negative things. So the question then is how we, as a world and a body, like the Council of Europe, help to regulate the use of that tool. As you know, a very simple example: a knife can be used because it’s very useful in the kitchen, but it can also be used for harm. So how do you balance that, and how do you regulate it? I think that’s part of the secret, but we’ve got to canvass that tomorrow in the AI for the Judiciary session. We have a question online about the intellectual property implications. For those of you who have been involved with the WIPO conversation recently, WIPO is one of the UN agencies; they’ve been talking about how we now cope with intellectual property, like copyright, created by the output of ChatGPT and many other AI devices. So currently there is no answer. The world is now negotiating and talking about that. What’s the right way forward? So in the short time I have: please do look at the WIPO sessions, there are quite a few recent sessions on just this question alone. Happy to talk offline, but it’s a very complex thing.
When I spoke at the cocktail yesterday with a colleague from America, we were looking at a time in the next 10 to 20 years where, with the changing of the world with the new AI technologies, a lot of our laws would need to change to cope with working with, alongside, behind, and in front of AI, because the law that we have is structured as human-based law, and the question moving forward is how we work together with things like artificial intelligence. So thank you.
Moira de Roche:
Thank you, Anthony. And just to say this: I think it’s always useful, when having a conversation or thinking about generative AI, to also keep deepfakes in mind, because yes, these technologies could really create some harm, and we’ve seen examples already of world leaders being deepfaked. So we have to bear that in mind, and it makes it even more important for us to manage our own privacy. Margaret, we haven’t heard from you yet.
Margaret Havey:
Yes. Well, I came to talk about PeaceGPT. So it’s something that’s relatively new, and it came out with GPT-4, because it needed the facilities of GPT-4 to work, and until recently I don’t think it was very available to free users. So if you’ve got an account with OpenAI, which is open to anyone, you can get a free account that takes you so far. And if you really want to do things, you need to subscribe, which of course is money. I’ve recently done that, but I haven’t gotten to the point Stephen is at with creating images and that sort of thing. However, I did take a look at PeaceGPT. PeaceGPT is just one of a number of GPTs that have been created. And it works like ChatGPT, in which the GIGO, or garbage-in-garbage-out, principle is still in play. The answers you get depend on how it’s trained and how you ask the question. So there’s a new field, or a new skill, called prompt engineering, to actually make sure you can ask the right questions. Just like when you’re writing a program, you have to ask specifically. You know, it’s the same thing, only now it’s going to be everyone doing it themselves. And PeaceGPT, when I asked it some questions... and I would like to put this on the screen, but I’m not sure that’s going to be technically feasible. It’s a simple question mode, where you just type in the question and you get your answers. And if anybody here has used any of these things, like what someone called a co-pilot, I get that sometimes on Google, the words go through several pages before you’ve read the first line, and then you have to scroll back up. And what you’re getting is all the textbook stuff, all the information that it has access to. The difference, with all the GPTs, with GPT-4 at least, is that you can do more with it.
So now the chatbot can write something for you: it can take the information, synthesize it, and come up with some kind of new product — or maybe it's new, maybe it's not; I think that's where some things still need to be worked out. But the answers contain a lot of information. And what's very timely about this is that the UN has just started a new Agenda for Peace. We all know what's happening in the world — a little skirmish here, a skirmish there; it's exploding in little bits, and we're trying to manage it. One of the ways it's managed, of course, is through conciliation and empathy and the way you communicate, and PeaceGPT is aimed at doing that — Stephen covered it a bit at the end. It's meant to teach people new ways of relating, to keep some of these things from erupting into the terrible things that could happen. It uses a lot of good tools, and there's a very good application for it; I can see where it meshes really well with the UN's current emphasis. There's also a song I wanted to mention, one I just heard recently — if anybody watches Expendables 3, it's on that. It's called Ticking Bomb, and it's very appropriate to this subject: we just don't know what's going to happen next, or where. And this is a tool that, used appropriately, might help with that situation — with all the different cultures, bringing us together so that we can do what the UN was set up to do: preventing wars through cooperation across the world. The UN asks all its member nations to do these things, and PeaceGPT has a role to play in that. If you want, we could ask it some questions — we have it here. We've got 54% power, and the answers are long.
So I did ask it to sing the song I like so much, and it couldn't do that. It refused; it said it's not allowed. That's okay — either it doesn't know about it, or it's not allowed, I'm not quite sure. But the other stuff looks very, very impressive. It's called the Nonflict Way of a Million Peacemakers, I think, is what comes up on the screen. I asked it what that was, and it had a good explanation, because I'd never heard of it. Nonflict, I think, is a new word, but it's obviously "not conflict", so it's good. I think there are very real possibilities in this tool. And Don, did you want to ask or say something? Don is very interested in ethics as well, and in world peace and all those good things — they are very important, and Don can talk about that at length. Does anybody want to do anything with PeaceGPT while we're here?
Moira de Roche:
So then, does anybody have any questions about challenges you've maybe had with using any generative AI? I say that ChatGPT has had its Hoover moment. For those of you old enough, you'll remember that the first vacuum cleaners were the Hoover brand, so forever more people call their vacuum cleaners a Hoover even though they're not made by Hoover — or we call tissues Kleenex. ChatGPT has become like the Hoover. I say "chat" because there are lots of GPTs; ChatGPT is just one of them. I particularly like Copilot because it's so much better at referencing than the others, so you easily get the references for what you need. You didn't try it? I did, I did. That one also has limited functionality, and then there's the paid version, which I didn't pay for — but that's another point. Always consider whether it's worth paying for something. We always want things for free, but try to do a cost-benefit analysis: if something's going to save you a huge amount of time, isn't it worth paying a few dollars? So if you really want the power of it, get out of "I only want the free ones" and ask, "How does this help me?" I can sum it up in one phrase: generative AI prevents the page blanche, the blank page. When you're going to write something, the hardest thing is to get started. With generative AI, you let it get started for you, and then you perfect it, add your own knowledge, add your own flavour. So it really can help you get past that first stumbling block.
Margaret Havey:
I don’t agree with that. Can I not agree?
Moira de Roche:
You cannot agree.
Margaret Havey:
So I tried — we had a board meeting — oh, there's a question there, did you want to? Yes, ask your question.
Audience:
Thank you so much. Fascinating — PeaceGPT. How can it help to deal with hate speech? Is there a tool, or how would we use it — if I want to address hate speech, for example, for a class, or for the public, or for myself?
Margaret Havey:
Yeah, I'm sure you can ask it a question and it would help with that. And Anthony, were you going to say something? Or are you just getting up to leave? Okay.
Anthony Wong:
Okay, thanks for your time.
Margaret Havey:
He's just doing something. Yeah, it has a full suite — there's a whole subject area it covers on ways to deal with conflict better. So I'm sure it has something; I haven't asked it about that. Part of the problem is we give too much credence to AI. We have to know.
Don Gotterbarn:
We give too much credence to AI, and so we sometimes move without understanding the nuances. One of the answers to your question might be: always tell the truth. So when you're working with someone across a diplomatic table, you can tell the truth and say, "You know you're a stupid idiot — I hate to tell the truth." You've done what the AI said, and it hasn't gotten the nuances. The other problem is the legal kinds of issues you get into. We have medical diagnostic systems with a very high success rate, and doctors now worry when they get an AI response: if they go against what has been suggested, they open themselves up to lawsuits in the U.S., and those have happened — there have been suits in Pittsburgh. So one of the issues is where you place your trust. And of course ChatGPT, which says it generates new content — that's nonsense; it gathers old content from things, and you can ask it. I was going to read a silly response from ChatGPT that was on the web. Someone said, "Solve a puzzle for me." ChatGPT said, "Fine, which one?" And the question was: you have a man and a goat and a boat, and you want to get them across the river. ChatGPT came back with this response: the man goes with the boat across the river, the man goes back with the goat — seven or eight steps; it went insane. So, being thorough, last night I checked to see if this was really the case: "I have a man and a goat on one side of the river. They have a boat. How can they go across?" And ChatGPT-4 said — listen to this authoritative stuff — "To solve the classic man, goat, and river puzzle, we need to clarify the conditions. Typically such puzzles involve additional constraints, such as the man having to transport more than one item across the river — a goat, a cabbage, a wolf — without leaving certain items together unsupervised. Since you mentioned only a man and a goat" — so ChatGPT told me — "they have a boat, and the solution is straightforward."
"The man and the goat get into the boat, the man rows the boat across the river, they both get off at the other side." It goes on: "This simple puzzle involves no additional constraints or complications. However, if there are more elements involved or specific conditions — if there are a wolf and a cabbage as well — please provide these details for a more complex solution." I, late in the evening, said, "Ah, they also have a cabbage." It said, "Great, now we have a classic river-crossing problem involving a man, a goat, and a cabbage. The goal is to get all of them across the river safely, with the following constraints: the man can only take one item in the boat, and the goat cannot be left alone with the cabbage because the goat will eat the cabbage." Then, in case I didn't understand, it tells me again: the goat cannot be left alone with the cabbage because the goat will eat the cabbage. "And here is the solution, step by step. The man takes the goat across the river and leaves it on the other side. The man returns to the original side. The man takes the cabbage across the river." And I thought, yes, it's got it. "Step four: the man leaves the cabbage on the other side, but takes the goat back with him to the original side. The man takes the cabbage across the river and leaves it on the other side." I don't know how it got the cabbage. It goes on — great, more steps. And the question is: what do I do? I mean, this is obviously stupid at a certain point. And I like what Eliezer said: you bring — I don't know what to call it — emotional intelligence; you bring some other standard to judge it. And we have to be very careful with any AI that we don't just buy into it. Now that I've said it, you can object to it.
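The river-crossing puzzle Don quizzed ChatGPT with has such a tiny state space that a few lines of exhaustive search solve it exactly — a useful contrast with a model that guesses the answer statistically. The sketch below is an editorial illustration (not anything shown in the session), handling the full goat-cabbage-wolf version:

```python
# Breadth-first search over river-crossing states. A state is the set
# of items on the left bank plus which side the man (and boat) is on.
from collections import deque

ITEMS = frozenset({"goat", "cabbage", "wolf"})
UNSAFE = [{"goat", "cabbage"}, {"goat", "wolf"}]  # pairs that cannot be left alone

def safe(bank):
    """A bank without the man must not contain an unsafe pair."""
    return not any(pair <= bank for pair in UNSAFE)

def solve():
    start = (ITEMS, "left")          # everything begins on the left bank
    goal = (frozenset(), "right")    # everything ends on the right bank
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (left, side), path = queue.popleft()
        if (left, side) == goal:
            return path
        here = left if side == "left" else ITEMS - left
        for cargo in [None, *sorted(here)]:   # cross alone, or with one item
            new_left = set(left)
            if cargo is not None:
                (new_left.remove if side == "left" else new_left.add)(cargo)
            new_left = frozenset(new_left)
            other = "right" if side == "left" else "left"
            unattended = new_left if other == "right" else ITEMS - new_left
            state = (new_left, other)
            if safe(unattended) and state not in seen:
                seen.add(state)
                queue.append((state, path + [(cargo or "nothing", other)]))

moves = solve()
print(len(moves), moves)  # the classic puzzle needs 7 crossings
```

Because the search enumerates every legal state, it cannot "forget" where the cabbage is — exactly the failure mode Don describes.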
Moira de Roche:
That's a question to ask children five years old, and the children would answer your question correctly, instead of using AI. Do you know how much CO2 you generated by this dialogue?
Don Gotterbarn:
Is that a defense of AI? I'm not sure.
Moira de Roche:
I will show you. Can Joyce just say something, please?
Joyce Benza:
I just wanted to say something about the open source tools that are available, and about the organizations I work with that are trying to do serious business. The moment you say these tools are there, that you can get them open source, they stop taking AI that seriously, because they're used to purchasing things. So I've tended to find that when I develop something for them using Python, they actually appreciate it more, because they think that has value. And it would appear that some of the clients I've dealt with believe AI is just something people are still playing with. That's why they emphasize strategy — the strategy is to say: we want to improve productivity, so let's do that; where do we start and where do we end? I was just responding to what you said about things not being paid for.
Moira de Roche:
The thing to remember is, I'm not sure how many people in this audience would actually be able to write Python code — and I include myself in that. So I think we're more of a user and consumer audience here than a code-writing audience, correct me if I'm wrong. But the point's well made that you get what you pay for, I suppose.
Joyce Benza:
Because some of them are not licensed — the open source stuff is not licensed — so when clients look at it, they ask: could this be serious? Is this something sustainable? And yet we know it is. And also, like you say, the ordinary person will not want to hear about Python; they just want to see an already set-up tool that is doing things for them. And this is where I think the relationship between human and AI comes in. Even in the long term, I think we're always going to need the human. We can continue using the machine and all the software, but the human is going to be very critical as we go forward. I'm talking about the background of data I mentioned: you need to verify that the data is correct. And in the area of ethics and the legal side of things, there's the need to verify whether this is still AI for good, or whether these are things we can't control.
Moira de Roche:
Thank you, Joyce. Any other questions or comments? Has anybody had an unhappy experience with using generative AI?
Audience:
Yeah, I mean, I’ve had good and bad experiences.
Moira de Roche:
There you go. Thank you — and you've mentioned some of them. I think that's the reality. I use it extensively. I was using it to create some learning content, and it hit a point where it just started to repeat itself over and over and over again. Another colleague of mine said she had asked it some serious science questions and the answers were absolute garbage. And that's when I explained to her: if there's nothing in the large language model on that subject, it's not a genius. When it started out, it probably couldn't even have told you what two and two made if that hadn't been put in somewhere, or been somewhere on the internet, easily accessible. You won't get the right answers.
Audience:
Yes, just a very short remark about this. It's a great product, but one of the problems I saw when I used it: I once asked it to count a really large email list, and it couldn't count it well — it was about 150 emails, and it couldn't count them. It made me think that one of the drawbacks — maybe it's embedded in the model of the product itself — is that those who produce it don't really give you a clear notice of what this thing is good at and what it's bad at. And actually, I don't think they themselves know; they use us to learn the drawbacks.
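The counting failure described here illustrates a practical division of labour: counting is deterministic, so it is better done with a couple of lines of ordinary code than with a language model. A sketch (the addresses below are invented for illustration):

```python
# Extracting and counting email addresses deterministically.
import re

text = "alice@example.com, bob@example.org; carol@example.net bob@example.org"

# A simple address pattern: local part, "@", then a dotted domain.
emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", text)

print(len(emails))       # total occurrences: 4
print(len(set(emails)))  # unique addresses: 3
```

Exactly the kind of task where, as the speaker notes, it helps to know what the tool is good at — and to hand the rest to plain code.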
Moira de Roche:
That is how large language models work — I'll get to you now, Eunika — and that's the point. The prompt needs to be good to start with, but never be afraid of going back to it and saying, hey, you've got this wrong — if nothing else, so that it learns from that. Yes, Eunika?
Eunika Mercier-Laurent:
Yes, my dream is still to have an intelligent individual assistant, learning with me — a subjective assistant. It is still not on the market, but it is something you can see in a Korean drama I love: a device embedded in glasses. When you wear the glasses, you see your intelligent assistant as a hologram, and you can ask it questions. In the first episode, a girl with the glasses had to prepare a very big piece of work for the next morning, and everybody had left because it was eight o'clock in the evening. She asked the intelligent assistant through the glasses, and the assistant answered: do you have everything in the computer? Yes — and on a big screen, the assistant moved things around, and after one minute she had the presentation ready on the desk. That is my dream, and I hope we will have this in the next years.
Moira de Roche:
About presentations — I'll get to you now — where it is quite good is if you just ask it to give you an outline for a presentation, because again, when you're doing a presentation: where do I start, where do I end? Please?
Audience:
I'd say generally the whole human-machine relationship is becoming very blended. It's not just human feedback about many things, and increasingly using your own information, but there's also a lot happening where these assistants are very good at code, so they can be coding things, adding to the abilities of humans — as long as the humans know a bit. So lots of bits and pieces are all coming together, essentially augmenting humans.
Eunika Mercier-Laurent:
Come to my session on Thursday, Synergy Human AI.
Moira de Roche:
Yes, so we have another session on the subject, and then, around the security aspect, we have a session tomorrow at four on the SDG stage about the security issues we're facing these days.
Don Gotterbarn:
There is a basic problem in the way we interact with AI. We had a Secretary of State in the US who made what I thought was an absurd quote; he said something like, "We don't know what we don't know." I realized later that describes something I did in business. When a client came to me to have their system software-engineered, they would tell me all sorts of things, and I would say, "Well, have you thought about this?" And they would go, "I didn't know that." The point is, they didn't know what they didn't know, and I gave some help. Now, what we were told earlier about Python is that you can write these constraints. So the customer says, "I need this constraint, and I need this constraint" — but the AI observes a pattern the customer never thought of and didn't write a constraint for, and the system goes on its merry way without it. And this is a human thing — I'm siding with Eliezer here — yes, we need to be constantly on guard, not ignoring things and saying, "Oh, this is what the AI said, so it's fine," because there may be some new pattern, some underlying mistake, that requires a different level of vigilance. And because AI uses data, you have to depend on the quality of the data. Anyway, end of story.
Eunika Mercier-Laurent:
Thank you — "vigilance" is the right word; I think it's the relevant one.
Audience:
Back to the constraint you mentioned: a research group in Munich elaborated constraint programming. It is made in Europe — Americans don't know about it. It is excellent, and very green, for addressing complex problems with constraints, like scheduling, planning, manufacturing, et cetera. And it is not complicated — we should talk about it; it is very easy to program. The one thing I forgot to say at the beginning, in my fluster to get things going, is that we have a book that we agreed we would give for the best question — but I think they've all been good, so I'm going to give it for the most questions.
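Constraint programming, as mentioned here, means stating a problem as variables, domains, and constraints and letting a search engine do the rest. The toy solver below is an editorial sketch of the idea (not the Munich group's system), scheduling three tasks into ordered time slots:

```python
def solve(variables, domains, constraints, assignment=None):
    """Naive backtracking search: assign one variable at a time,
    keeping only assignments that satisfy every constraint so far."""
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        candidate = {**assignment, var: value}
        if all(c(candidate) for c in constraints):
            result = solve(variables, domains, constraints, candidate)
            if result is not None:
                return result
    return None

def when_known(*names):
    """Defer a constraint until all of its variables are assigned."""
    def wrap(check):
        return lambda a: any(n not in a for n in names) or check(a)
    return wrap

tasks = ["setup", "run", "report"]
slots = {t: [1, 2, 3] for t in tasks}  # three time slots per task
rules = [
    when_known("setup", "run")(lambda a: a["setup"] < a["run"]),
    when_known("run", "report")(lambda a: a["run"] < a["report"]),
]

print(solve(tasks, slots, rules))  # {'setup': 1, 'run': 2, 'report': 3}
```

Notice the division of labour the panel keeps returning to: the human states the constraints explicitly, and the search finds an assignment — or provably fails, rather than inventing one.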
Moira de Roche:
Thank you very much. And the green marker there is marking Munich's paper.
Audience:
With the mapping of AI.
Moira de Roche:
Thank you. Thank you very much. Sorry, I should have said at the beginning — maybe we would have had more questions, but anyway, as you know, I was a bit flustered. What's the book? It's from a conference.
Audience:
It’s from the conference workshop.
Don Gotterbarn:
AI for knowledge management.
Audience:
AI for Knowledge Management — now we have extended it to AI for Knowledge Management, Innovation, Energy, and Sustainability. And the next will be in Kyoto, at PRICAI, in November.
Moira de Roche:
Yes, it is available on Springer, but you need paid access.
Audience:
It's a series; we have published it since 2012.
Moira de Roche:
Yes, please.
Audience:
I love that book. I know there's no more time, but I'm still very curious about this question of knowledge management. When we talk about generative AI, I think two issues have been raised. One is IP — but when we say IP, we often think about the output, the IP of the output. We also need to think about the input. Most generative AI learns from websites, from open data. But regarding knowledge management, there are publications that carry IP. So some companies and organizations are thinking about building their own large language model in a private domain, so they can protect the IP of the input data. So my question is: how can we deal with knowledge sharing so as to foster innovation?
Eunika Mercier-Laurent:
Collecting and sharing — because that is the principle. In fact, I worked in this field when I met, by serendipity, Debra Amidon from MIT in 1996, and we started to work on knowledge management. I said: you need AI for this. And we wrote another book, in three volumes, one of which was about technology. In fact, knowledge flows in the private processes of the company, the organization, and the stakeholders. All these people working together bring the knowledge, and this knowledge will be used by the same people. The principle of sustainability of such a system is that when you provide something, someone should use it. If you only provide and nobody uses, it stops — the process stops. There is a lot of experience here; if you are curious about it, I recommend kmworld.com.
Audience:
I want to say something about knowledge management. Some people say that with the advent of this new generation of tools, such as large language models, this is actually the death of knowledge. Because with these tools we are producing lots of new documents — text, images, videos, et cetera. These documents will end up available on the internet, and the next generations of tools will use them to train their systems. Some people have already identified a decrease in quality: every time you produce something with these tools and the next generation uses it to generate outputs, you see a decrease in the quality of the results. In the end, we will not be able to distinguish what was originally produced by humans from what was done by machines. The quality of the entire internet will probably decrease. It's a huge concern with these kinds of tools, and we haven't discussed it until now.
Eunika Mercier-Laurent:
That's why we need a chief knowledge officer, for managing the process, usually.
Audience:
Yeah, this point is exactly the next question I wanted to raise.
Don Gotterbarn:
Yes, but if you hired that person, somebody would say, well, I can write a bot for that.
Eunika Mercier-Laurent:
Usually — but we will not talk about knowledge management here, because it's another topic; knowledge management, innovation, and AI are related. We had the question about innovation, and I just wanted to add some points, because in innovation we have two interrelated steps. First, creativity: generative AI may help with that, but we can also build very simple things with case-based reasoning, to be inspired by previous ideas. Second, the implementation process. In implementation, what is very important is to evaluate the impact before doing. And here AI can help a lot: for example, the ITU elaborated various kinds of simulators, and when you click on AI impact, you can see the various impacts AI brings in various fields — say, on climate change and the others. In innovation, we need to verify whether there is a market for the product, whether it is technically feasible, whether we have the competency for it, et cetera — and all these things we have to verify before doing. And sometimes we need bio-inspiration. That's why we also have to observe nature, instead of observing our smartphones 24 hours a day, to take inspiration from nature and mix these intelligences.
Moira de Roche:
Oh, thank you. Yes. Oh, it’s Christopher. Yeah.
Audience:
I'm a professor of ethics, and I've been dealing with AI and ethics for some years, and I'm always struggling with the same point, which I want to raise here. It's a bit the phenomenon of new wine in old wineskins: we have new technologies — I use ChatGPT and all these things, and that is good for me, very good — but at the end of the day, I'm still the same human being. So what does it mean? My interest is: how can we invest more in ourselves as human beings, even in the communication here? I'm becoming older; I know I talk too much — that's the human being. We cannot listen to each other; we interrupt each other. These are human factors. We are still the same old human beings, with our old value system, or our new value system. How can we put more energy into that reflection, into the social sciences, into philosophy? I was at several conferences in India, for example, where they have an interesting culture: they invite gurus and philosophers to an AI and technology conference. In Europe, we don't normally do that. In China, it's starting — Confucian ethics, and what it means for AI programming, for example. So my wish is that we would have a more balanced dialogue, not focusing mainly on what the technology can facilitate in our lives. That's very good — it is a facilitator, I'm convinced of the benefit — but as long as our human being is not changing... That's why I'm very interested in PeaceGPT, because we all know the ambiguity of every technology, for the good and the bad. So how can we change our human being so that we use it for the good and not for the bad? That is a fundamental anthropological question, and it's not discussed as much as the technical possibilities of programming a bit here and there.
You know, many years ago at WSIS, I went to your session on global ethics, and in a lot of ways that inspired my interest in ethics. — Right, right, I remember now. — And in fact, by the way, I founded it 20 years ago, initiated exactly by WSIS-1 — I was at WSIS-1 20 years ago, and that triggered my view of how we can use these new technologies and link them to ethical issues. So it has been a 20-year journey, and I think it's very good to integrate ethics in this.
Moira de Roche:
Yes, that's why I said, Don, we've always looked at everything through an ethical lens, and we believe that with technology, everything's got to be done through an ethical lens. It's what we promote: we have a code of ethics, and if you're creating any technology product, whether software or hardware, you've got to have the ethical aspect. In fact, the ethics question should be asked all the way through, from design to development to implementation.
Don Gotterbarn:
But it takes time to get there.
Moira de Roche:
Of course it does.
Don Gotterbarn:
No, I mean — being a certifiable old person in this room, I can remember when, in the 70s, you had to convince people that computing had something to do with ethics and wasn't pure math. Then in the 90s, when the internet came about and you started saying we have to worry about its ethical impact — no, it was: let's get a new language, let's get HTML, let's get this, let's get that. It was the science of it. And this has happened with each cycle. With artificial intelligence it's: which is the best machine learning model to use, and how should we test the data? Whose data are you getting? Who owns the data? Well, we'll deal with that intellectual property thing later. And now people are calling artificial intelligence "artificial plagiarism", where all of the content is somebody else's content. The questions are starting to come a little faster now in computing, because we've gotten used to asking some of them — but you fall in love with the science first, and then you go: oh, it applies to people.
Audience:
Isn’t that true for any invention, though?
Don Gotterbarn:
Yeah, I mean, it’s…
Audience:
Atomic bombs. — Yeah. — Well, that one they asked pretty early. — Yeah. What I'm saying is, I think AI is like any other innovation. The machine gun changed everything for people. Right. The atomic bomb did, too. The internet did, as well. And AI is the same. They're all tools, and it's up to us to decide how we're going to use them.
Moira de Roche:
We absolutely agree, and that is why, when we promote our code of ethics, we actually don't want lots of codes of ethics. We don't believe there should be a specific code for AI; there should be one code for any technology creation, actually, that applies to everybody. IFIP has a pretty good one. We do. We do. I also think it's about the impact.
Joyce Benza:
The impact that AI is causing, and the perception of that impact. Because when you look at cyber security, everyone is worried, and there's fear: we must have standardization, we must have ISO, we must have this. But with AI, I think it has been left loose, and we're just kind of going, "I guess we'll get there." The urgency isn't there to be as ethical as we are with cyber security.
Moira de Roche:
Yep. State impact, at least.
Don Gotterbarn:
It’s not just state impact, it’s the problem of knowing, or studying, or discovering the impact before it occurs. And what makes it truly difficult with AI is the fact that you only know the things that AI produces if you know the context of application really well. Context is very important. Which is hard, because people who make AI, they don’t necessarily have an idea of the application context. And that is where ethics come in.
Audience:
It is like the Eiffel Tower. When Gustave Eiffel built the Eiffel Tower for the international exposition, people said it's ugly, we have to remove it. And now it is the innovation generating the most revenue in the world. It is still the same with innovation. And I would say AI is also so ubiquitous that it may become very unpleasant. That's why we need to teach small children what AI is, how to apply it, what you can do and what you cannot do.
Don Gotterbarn:
There is a dark side to humans that we should not forget about; some kind of rules are really crucial. The problem with AI is that we use the wrong term. Since the Dartmouth workshop in 1956, we have used this term, artificial intelligence, and I think it completely misleads people, because it's nothing about intelligence. It's about statistical tools — generative AI, in the end, is just something that produces output.
Eunika Mercier-Laurent:
There is more than that.
Don Gotterbarn:
Yeah, for sure.
Eunika Mercier-Laurent:
If you’re back to 56, sorry.
Don Gotterbarn:
No, no. When we are just talking about large language models, for example, there is no symbolic AI because I guess you want to talk about symbolic AI.
Eunika Mercier-Laurent:
It was symbolic — it was natural language understanding. And in Grenoble, we have had a team working on this for 30 years.
Don Gotterbarn:
Yeah. But if you just look at ChatGPT and large language models, for example, they are just based on statistical tools. And this is the whole story of machine learning until now: advanced statistical machine learning is not AI.
Eunika Mercier-Laurent:
Back to 1980-something — because they called it data analysis before.
Moira de Roche:
So that's why I'm always quite pedantic about calling it generative AI, because we make AI sound like this whole new fantastic thing. We use AI all the time, people — anybody with a smartphone is using AI about 50 times a day, and creating data. So I worry that people think it's something mysterious. It reminds me of the old days when we had big mainframe computers locked away, and nobody but the operators was allowed near them. To me, we're getting that same sort of mystery around AI, and we shouldn't, because there is no mystery. We don't all understand exactly how large language models work, or how machine learning works, but we don't need to. What we need to do, if we want to get the best out of it, is understand how the prompt works. There are a lot of good works on prompts now — how to write good prompts, even how to do things like put in a URL if you want something from a specific place: tell ChatGPT, "Analyze this for me." On Adobe now, you can summarize your document with the click of a button using AI. — Yes. — Yeah, the problem with these tools is that we don't know exactly what they are doing. Even the people who release these tools to the public have no idea how they work, because we are still discovering — I'm following some papers about this — we are still discovering what these tools can do. It's quite impressive compared to the previous generation of AI systems, which were based more on engineering principles. We are less in engineering and more in empirical science now; in fact, it's like biology — we discover things, and we have no idea what these tools are. It's quite scary, actually.
Eunika Mercier-Laurent:
Now there are researchers working on explanation for these systems. It becomes interesting because we can add explanation based on RAG, for example, and on other tools from earlier generations of AI. If you have 70 minutes, I recommend a very good video by John Launchbury of DARPA on the three generations of AI. It explains the difference between this generation and what could be next, what should be next, to balance it.
Don Gotterbarn:
One of the things that happens is that the human factor comes in, in the form of resistance to improving AI. I helped develop some police databases in Europe. Some of the input data were the decisions that police officers would make when presented with certain kinds of evidence. A reasonable preventative measure would have been to observe what kinds of decisions police officers made, to see if there was a pattern. They were not interested in that, because they didn’t want their patterns identified; it might mean they’d lose their jobs. You were not allowed to put in corrective data, or data that might expose the kind of bias where someone always picked on people with Italian-sounding names. The output would then simply reproduce the input: that people with Italian-sounding names are more likely to commit crimes. You knew how to fix it, and it was not allowed.
Audience:
There are limitations, though, to using these statistical tools when you apply them to social reality. I worked as a lawyer at the European Court of Human Rights, and we experimented with algorithmic tools. What I have to say is that the underlying reality also changes. When you program your computer to tackle a given problem, and you include certain variables that describe the pattern you noticed, it is possible that the underlying social reality then changes. Your model is no longer responding to what’s needed. And then this element of human judgment that is essential in any sensitive decision-making is gone. So, yeah, to me these tools are great, but they’re also looking to the past. And they’re not, by definition, looking forward.
Eunika Mercier-Laurent:
And the context.
Audience:
Yeah. And then, of course, there is this problem.
Eunika Mercier-Laurent:
It’s typical of the work of Allen Newell, one of the founding fathers of AI. He proposed systems based on modelling with modularity, genericity, and reusability. And we elaborated a European project on this, called KADS, for knowledge acquisition and design systems. So the first question we ask is: what is my goal? The second question: what is the context, before solving the given problem?
Moira de Roche:
So, people, I really want to thank you all for being so involved in the conversation. We wanted it to be interactive, and for that reason we didn’t have everybody give a formal presentation. But if you want to contact any of us, you know our email addresses are on the event page for this event. So if you want to contact Eunika to get some of the names and things she’s mentioned, or any of us for that matter, then please do so. We really appreciate your being here, and mostly your being engaged, because that’s how we wanted it. Our only disappointment was that I couldn’t get in there early and have everything set up, but hey, that’s life, even without AI. Thank you, thank you for the great time. There’s an after table downstairs, so you can pop in there and chat to us as well. I know I’m going to.
Speakers
AW
Anthony Wong
Speech speed
171 words per minute
Speech length
488 words
Speech time
171 secs
Report
The speaker opens by referencing an upcoming UNESCO session focused on the application of AI in the judicial system, which is expected to clarify numerous questions about integrating AI into legal practices. Gratitude is then expressed to the Council of Europe for their initiative in crafting a treaty to manage AI technologies, attracting international interest.
Acknowledging AI’s dual potential, the speaker likens it to a knife—useful in a kitchen but potentially harmful if misused. This analogy highlights the urgent necessity for careful regulation of dual-use technologies, with further discussion planned for a future ‘AI for Good’ session.
The speaker then delves into the contentious issue of intellectual property, particularly the assignment of copyright to works produced by sophisticated AI systems like ChatGPT. Recent debates orchestrated by WIPO are noted, reflecting the global effort to navigate this legal quandary.
The lack of a definitive solution emphasizes the complexity and emergent nature of this issue. Looking ahead, the speaker predicts that within the next two decades, legal frameworks must evolve significantly to accommodate AI advancements. Laws conceived with human actors in mind may not suffice in an AI-integrated landscape, suggesting that human-centred legislation requires reinvention to reflect AI’s unique operational characteristics.
In conclusion, the address articulates the necessity for vigilant and adaptable policy-making to harness AI for societal benefit while containing inherent risks. This will entail international conversations, collaboration, and potentially a reconceptualization of legal notions. The speaker is open to ongoing discussions on the multifaceted relationship between technology, law, and society.
A
Audience
Speech speed
153 words per minute
Speech length
1985 words
Speech time
776 secs
Arguments
From simple questions, innovative ideas like polarized sunglasses can be developed
Supporting facts:
- The process of innovation involves asking subsequent questions leading to creative solutions
Topics: Innovation, Problem-Solving, Curiosity
Hybridization between AI and human creativity is crucial for innovation
Supporting facts:
- AI has high IQ capabilities
- Humans possess emotional intelligence and creativity
Topics: Artificial Intelligence, Creativity, Human-AI collaboration
The concern for the intellectual property (IP) of the input data in AI models is essential.
Supporting facts:
- AI models use data from the web which often includes data protected by IP rights.
- Organizations consider building their own language models to protect the IP of their data.
Topics: Artificial Intelligence, Intellectual Property, Data Privacy
Knowledge sharing is crucial for fostering innovation despite IP concerns.
Supporting facts:
- AI models often rely on learning from open data sources available on the web.
- Companies are trying to balance IP protection with the benefits of knowledge sharing.
Topics: Knowledge Management, Innovation, Open Data
The integration of ethics in computing has evolved over the decades and is now more urgent with the rise of AI.
Supporting facts:
- Ethics in computing was initially hard to advocate for in the 70s.
- The 90s internet boom shifted focus to technological advancement over ethics.
- AI’s current cycle is raising urgent ethical concerns such as data ownership and artificial plagiarism.
Topics: Computing Ethics, AI Ethics, Ethical Impact of Technology
AI is a tool, similar to historical innovations which impacted society.
Supporting facts:
- The machine gun changed warfare.
- The atomic bomb had a significant global impact.
- The internet revolutionized information and communication.
Topics: Artificial Intelligence, Technology Innovation, Historical Technology Impacts
Report
The discourse encapsulates the crucial role that innovation plays in achieving Sustainable Development Goal 9, which is centred on constructing resilient infrastructure, propelling sustainable industrialisation, and spurring innovation. It particularly highlights the transformative potential of Artificial Intelligence (AI) in this context.
The discussions begin with an acknowledgment of human curiosity as a springboard for innovative breakthroughs, with the development of polarised sunglasses cited as an example born from simple inquiry—representing a positive perspective on the power of human ingenuity in problem-solving.
AI is celebrated as a pivotal force in driving innovation, especially when harmonised with human creativity. This synergy between AI’s analytical capabilities and human emotional intelligence is presented as vital for unleashing future creative solutions. In historical context, AI is likened to major technologies—such as the machine gun, the atomic bomb, and the internet—that have had profound societal impacts.
The ethical deployment of AI is framed as a human obligation, reinforcing the narrative that technology’s positive potential relies on responsible stewardship. Intellectual Property (IP) protection emerges as a complex issue in the era of AI. There’s a tension highlighted between the need for open data to advance AI, which promotes innovation, and the requirement for safeguarding IP rights.
The dialogue suggests that companies might benefit from constructing private-domain language models to safeguard their data, a strategic move to control proprietary assets. This dialogue underscores the pressing need to balance knowledge sharing and innovation with the respect for IP rights amidst the digital revolution.
The ethical dimension of technological progress is examined, noting hindsight often prevails; initial excitement over scientific discoveries can overshadow ethical considerations, but such concerns rapidly gain significance as their societal impacts become apparent. The integration of ethics in tech, particularly in AI, is viewed as urgent and necessary, given its escalating role and potential ethical challenges, including data ownership and the issue of artificial plagiarism.
Effective time management in panel discussions is also highlighted as a concern, acknowledging the importance of inclusive dialogue within time-constrained environments. In conclusion, the synthesis advocates for a proactive stance towards exploiting AI’s potential, emphasising the need for a balanced approach that prioritises technological advancement, open data availability, IP protection, and steadfast ethical standards.
The overarching message promotes innovation that is not only forward-looking but also responsible, acknowledging the interplay between benefits and risks in shaping our collective future.
DG
Don Gotterbarn
Speech speed
157 words per minute
Speech length
1900 words
Speech time
728 secs
Arguments
AI has a dark side that should not be ignored
Supporting facts:
- Don Gotterbarn acknowledges human failings and the potential for misuse of AI
Topics: Ethics of AI, AI risks
Rules are crucial for the development and application of AI
Supporting facts:
- Don Gotterbarn highlights the necessity of some kind of rules
Topics: AI governance, AI regulation
Report
The debate surrounding the development and use of Artificial Intelligence (AI) unveils pressing concerns regarding its ethical implications and potential risks to society. Renowned figure Don Gotterbarn has contributed to the conversation on AI ethics by acknowledging the capacity for human error and the potential misuse of AI, highlighting a foreboding aspect that warrants careful attention and cannot be overlooked.
Moreover, the term ‘Artificial Intelligence’ has come under scrutiny for its potentially misleading connotations. Since the term’s genesis at the Dartmouth conference in 1956, some have asserted that ‘AI’ is a misnomer: these technologies, despite being rooted in sophisticated statistical methods, lack genuine intelligence. This fosters a gap between public understanding and the technical realities of AI, which could result in misconceptions about its true capabilities.
Despite such challenges, the dialogue advances towards the necessity for governance and regulation in AI. Don Gotterbarn underscores the importance of regulation, asserting its critical role in guiding the ethical development and application of AI technology. This advocacy aligns with the Sustainable Development Goals (SDGs), notably SDG 9 on industry, innovation, and infrastructure, and SDG 16, which focuses on promoting peace, justice, and strong institutions.
This push for AI governance emphasises a global mandate to navigate AI developments responsibly, balancing societal good with ethical constraints. In summary, a narrative of cautious optimism unfolds, combining a vivid awareness of AI’s potential dangers, a critique of the term’s potential to mislead, and a proactive stance on regulatory measures.
These perspectives indicate a conscientious effort to shape AI’s influence within a framework that upholds ethical standards and strives to achieve sustainable development objectives. This strategic approach aims to leverage AI’s power for positive transformation while ensuring that its path remains true to ethical and sustainable precepts.
EM
Eliezer Manor
Speech speed
135 words per minute
Speech length
1130 words
Speech time
504 secs
Report
Eliezer Manor, representing Israel on the Global Industrial Council and a partner at Reds Capital Venture Capital, addressed the potential of artificial intelligence (AI) in transforming the process of human innovation. He introduced the concept of “regenerative AI” as an advanced approach to human-machine interaction, suggesting that it can facilitate the generation of innovative ideas in high-tech entrepreneurship.
This interactive, iterative process is already being applied in MBA programs to teach students innovative thinking. In his presentation, Manor highlighted the importance of integrating AI’s high intellectual quotient (IQ) with human emotional intelligence (EQ) to steer and direct the innovation process effectively.
He proposed a framework that depends on an evolving series of targeted questions from human users, which guides the AI to provide comprehensive and insightful responses. Manor illustrated his methodology with an example that began with the simple question of why leaves are green.
This query sparked a progressive chain of questions and answers, exploring the roles of chlorophyll and photosynthesis, eventually prompting Manor to ponder the possibility of industrialising photosynthesis—a concept that could address sustainable development goals related to climate change. He further argued that human curiosity and ingenuity are crucial for directing AI towards such innovative thinking.
Despite AI’s superior processing capacity and ability to rapidly access vast swathes of data, Manor maintained that it lacks the EQ needed for creativity and sound decision-making. The presentation concluded with a reflection on how modern innovators could surpass intellectual giants like Einstein and Freud by harnessing the collective power of human EQ with AI’s capabilities.
Manor pointed out that while AI can surpass humans in raw intelligence and data processing, it cannot replicate the emotionally driven creativity that fuels human innovation. Manor’s key message was that human creativity is the cornerstone of innovation, particularly when combined with AI’s advanced capabilities.
His emphasis on the synergy between AI’s IQ and human EQ highlighted the significant role of human creativity in the evolving partnership between mankind and machine intelligence. He stressed that AI alone cannot match the nuanced, emotionally charged creativity that defines human innovative endeavours.
EM
Eunika Mercier-Laurent
Speech speed
133 words per minute
Speech length
1246 words
Speech time
563 secs
Report
The speaker outlines the limitations and the requisite critical scrutiny when engaging with advanced generative AI, such as ChatGPT 4.0. They acknowledge that while it can mimic emotionally intelligent conversation, it may produce obsolete or incorrect information because it can only replicate existing data.
They provide a personal anecdote where a book was mistakenly attributed to them due to a Google search error, highlighting the necessity of critical thinking when discerning AI-generated content and advising against overreliance on AI for definitive answers. The speaker emphasises the potential of achieving human-AI collaboration, imagining a personalised assistant that adapts alongside the user.
This notion is illustrated by a holographic intelligent assistant featured in a Korean drama, which exemplifies the potential benefits for task efficiency. Such an ambitious vision, however, is not yet realised in contemporary technology offerings. Future discussions in the “Synergy Human AI” series are anticipated to delve into knowledge management, examining AI’s capacity to enhance collaborations among stakeholders in the effective creation and utilisation of knowledge.
This ties in with the speaker’s historical involvement in the field, dating back to 1996 in conjunction with Deborah Amidon from MIT, and their work published in a three-volume series that investigates the intersection of technology and knowledge processes. Innovation is dissected into the realms of creativity, where AI—through generative technologies and case-based reasoning—could spark inspirational ideas, and into implementation, which involves analyses of impact before advancement.
The International Telecommunication Union’s (ITU) work is cited, which involves simulators predicting AI’s influence in various sectors, such as their implications for climate change. Decisions surrounding innovation should account for market readiness, technical feasibility, and the requisite expertise, showcasing AI’s potential in evaluating pre-implementation strategies.
Historical progression in AI, especially in the field of natural language understanding, is also examined. The speaker points to a need for explanatory systems that can augment user understanding. Reflections on historical AI milestones include projects like KADS, which implemented knowledge acquisition and design systems, and draw on the pioneering efforts of AI expert Allen Newell, an advocate for systems designed with modularity, genericity, and reusability.
In conclusion, the speaker suggests that before consulting AI for solutions, one must first establish clear objectives and contextual understanding, underscoring these elements as necessary for effective problem-solving. The preamble reinforces the necessity for achieving an equilibrium where AI’s functionality aligns meticulously with the complex requirements of its users.
In essence, the speaker presents a nuanced perspective on the potential and boundaries of generative AI, articulating the need for intelligent systems that can evolve with their users, supply pertinent and coherent explanations, and support human cognitive functions. As technological advancements continue, the importance of human intellect, active critical engagement, and situational context remains paramount.
JB
Joyce Benza
Speech speed
173 words per minute
Speech length
1434 words
Speech time
497 secs
Arguments
AI has been less regulated compared to cybersecurity.
Supporting facts:
- With AI, it has been left loose
- everyone is worried about cybersecurity and there’s a push for standardization like ISO
Topics: Artificial Intelligence, Cybersecurity, Regulation
Report
Within the sphere of technological governance, a stark divergence is perceived between the regulation of Artificial Intelligence (AI) and the advancements in cybersecurity regulation. Both sectors significantly impact Sustainable Development Goal (SDG) 9, aiming to establish resilient infrastructure, promote inclusive and sustainable industrialisation, and encourage innovation.
This regulatory gap has stoked concerns over the urgency and ethical rigour with which AI is managed, suggesting that oversight of AI is somewhat deficient. AI is crucial for achieving SDG 9’s targets related to industrial and innovation growth, yet it appears to be encumbered with fewer regulatory controls.
This has raised alarms among various stakeholders about the potential for technological misconduct or errors. With AI being inherently borderless and able to overstep geographical limits rapidly, the challenge for standardisation and cohesive regulation intensifies. Contrastingly, cybersecurity has experienced a dynamic surge towards standardised security protocols, as specified by the International Organization for Standardization (ISO).
This reflects the response to the acute fears associated with cybersecurity threats, including data breaches and national security risks. These pressing concerns have spurred agile action in cybersecurity regulation, which appears to be absent in the AI governance arena. Reflecting upon SDG 16, which emphasises peace, justice, and the establishment of strong institutions, similar concerns regarding AI governance arise.
Ethical considerations and the timeliness of addressing AI’s societal effects need more attention, as AI governance intricately intertwines with ethical queries, affecting human autonomy and societal values. In summary, there is a distinct call for reinforced regulatory and ethical oversight in AI, drawing parallels to the proactive cybersecurity frameworks in place.
Enhancing AI regulation is critical for maintaining public trust and ensuring that AI aligns with societal and ethical obligations, in keeping with the objectives of justice and institutional integrity outlined by SDG 16. The disparities highlighted necessitate an adjusted, dialogue-driven, and flexible approach to AI policy, emphasising risk anticipation and mitigation with a level of commitment matching that of cybersecurity measures.
MH
Margaret Havey
Speech speed
188 words per minute
Speech length
1113 words
Speech time
355 secs
Report
PeaceGPT epitomises a forward-thinking deployment of GPT-4 technology, aimed specifically at fostering non-violent communication and facilitating the resolution of conflicts. Although the technology isn’t freely accessible on OpenAI’s platform, it’s available to subscribers, a strategy that may restrict widespread access but assures managed utilisation.
The success of PeaceGPT relies heavily on ‘prompt engineering’ — the skill of crafting queries to elicit refined and contextually appropriate responses from the AI, resembling a form of programming. Amidst the prevalent global tensions, the import of such a tool is amplified.
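The ‘prompt engineering’ described here amounts to wrapping a request in explicit role, context, task, and constraint sections rather than posing a bare question. As a rough illustration only (the function and the mediator persona below are invented for this sketch and are not part of PeaceGPT), such a template might be assembled as follows:

```python
def build_prompt(role: str, context: str, task: str, constraints: list[str]) -> str:
    """Assemble a structured prompt from labelled sections.

    Illustrative only: the section names and the idea of a 'mediator'
    persona are assumptions, not PeaceGPT's actual internals.
    """
    lines = [
        f"Role: {role}",
        f"Context: {context}",
        f"Task: {task}",
        "Constraints:",
    ]
    # Each constraint becomes a bullet so the model treats them as separate rules.
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)


prompt = build_prompt(
    role="You are a neutral mediator trained in non-violent communication.",
    context="Two colleagues disagree over credit for a shared project.",
    task="Suggest three opening questions that de-escalate the conversation.",
    constraints=["Avoid assigning blame.", "Keep each question under 20 words."],
)
print(prompt)
```

The point of the exercise is that the same underlying model, given the structured version rather than the bare question, tends to return a more focused and contextually appropriate answer, which is why the report likens prompt crafting to a form of programming.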
The renewed emphasis by the United Nations on nurturing peace and the ongoing evolution of international diplomatic discourse underscore PeaceGPT’s significance. Empathy and comprehension are now fundamental to international peace efforts, aligning well with the objectives of PeaceGPT. Demonstrations of PeaceGPT have highlighted its limitations in performing certain imaginative tasks such as singing, which could be due to either constraints in the programming or an insufficiency of the relevant data.
Nevertheless, the AI shines when offering detailed information and explanations, demonstrated by its description of ‘Nonflict’, a concept antithetical to conflict. The conversation also includes ethical concerns and the possibility of excessive dependence on AI for key global matters, like diplomacy and conflict mediation.
The extent to which AI can be reliably integrated into these sensitive areas attracts a fair share of scepticism. Despite these reservations, PeaceGPT’s contribution to the management of conflict is acknowledged, indicating a potential supporting role in universal peacekeeping initiatives.
In essence, this summary encapsulates the prospective role of PeaceGPT in enhancing global understanding and maintaining peace. By potentially preventing minor disputes from escalating into significant confrontations, PeaceGPT could have a considerable impact if seamlessly integrated with the UN’s current peace endeavours and their strategic framework.
As proficiency in prompt engineering and the interaction between users and AI platforms advances, the full prowess of PeaceGPT and akin technologies can be mobilised. Concurrently, it is crucial to keep a balanced view regarding the employment of AI in the intricacies of human conflict resolution to ensure complementarity rather than dependency.
MD
Moira de Roche
Speech speed
170 words per minute
Speech length
2638 words
Speech time
931 secs
Arguments
Many in the audience may not have coding skills.
Supporting facts:
- Moira de Roche mentions that she and many others in the audience may not be able to write Python code, indicating a lack of programming skills.
Topics: Software Usability, Audience Skills
Perception of value is associated with direct investment in technology.
Supporting facts:
- The mention of how organizations take AI more seriously when they purchase solutions or have them developed rather than when using free, open source tools.
Topics: Business Strategy, Investment in AI
Human role in AI implementation is crucial
Supporting facts:
- Moira de Roche argues that regardless of the tools being used, humans will be very critical in the long term for areas like data verification and addressing ethic and legal aspects of AI.
Topics: Human-AI Relationship, AI Maintenance, AI Ethics
Moira de Roche is involved in sessions on the subject of Human AI synergy and security issues.
Supporting facts:
- Moira de Roche acknowledged a session on the subject of Human AI synergy.
- There is a session about security issues related to the SDGs.
Topics: Human AI Interaction, Cybersecurity
One code of ethics should apply to all technology creation
Supporting facts:
- Promotion of a universal code of ethics
- Opposition to multiple codes of ethics for different technologies
Topics: Ethics, Technology Regulation
Report
During a comprehensive discussion focused on the interaction between technology and societal advancement, numerous key points were presented, relating to education, the evaluation of technology investments, ethical considerations, and the critical role of human management in AI development. A notable comment by Moira de Roche shed light on a significant skills gap: a considerable portion of attendees, including herself, lack the ability to write Python code.
This fact underlines a wider issue related to software usability and suggests a mismatch between technological developments and the skills of the target users. This point is pertinent to SDG 4, Quality Education, emphasising the need to empower individuals with the required skills for navigating a digital environment.
The conversation also revolved around how AI is viewed within corporate strategies. It was suggested that organisations place greater value on AI solutions when there is substantial financial expenditure, such as in procuring proprietary software or creating custom solutions. Conversely, the utilisation of free, open-source tools is often perceived as less credible, although they offer considerable advantages.
This matter intersects with SDG 9, Industry, Innovation and Infrastructure, highlighting the intricate dynamics between the funding in AI and its acknowledged worth and efficacy. Moira de Roche also underscored the invaluable contribution of human agents in the context of AI.
She maintained that human expertise is essential for responsibilities like data validation and in navigating ethical and legal complexities associated with AI technologies. This stance champions the symbiosis between AI and human skills, contributing to the discourse around Decent Work and Economic Growth (SDG 8).
There were expressions of concern around the future sustainability of open-source tools. The dialogue touched on the challenges pertaining to their long-term viability and the impact that a lack of formal licensing could have. Such considerations contribute to SDG 12, Responsible Consumption and Production, where the reliability and legitimacy of open-source tools are crucial for responsible progress within technology.
Furthermore, topics highlighting the collaboration between humans and AI, and the cybersecurity threats that are increasingly apparent, were acknowledged. Recognising these discussions underscores the ongoing relevance of such topics in the context of Industry, Innovation, and Infrastructure (SDG 9). The conference took cognisance of these immediate security challenges, affirming their salience to the aspirations of SDG 16, Peace, Justice, and Strong Institutions, indicating that stringent protection measures and regulatory frameworks are needed to maintain the integrity of emerging technologies.
Moreover, a robust debate emerged on the necessity of a universal code of ethics in technology production. The opposition to a multitude of ethical codes for disparate technologies aligns with the objectives of both SDG 9 and SDG 16. The argument for a singular, comprehensive ethical framework governing all technological creation aims to uphold consistency, impartiality, and global relevance within the rapidly evolving landscape of technology.
Overall, the dialogues reflected a coalescence of technological innovation with fundamental societal and ethical norms, revealing the requirement for comprehensive methods that incorporate the capabilities, financial commitments, standards, and longevity of technology against the backdrop of societal enhancement and progression.
SI
Stephen Ibaraki
Speech speed
152 words per minute
Speech length
440 words
Speech time
174 secs
Report
The presentation offers a comprehensive view on the transformative capabilities of generative artificial intelligence (AI), illustrating its potential to revolutionise everyday life. The speaker begins by showcasing Suno AI, which can create an entire musical package, including composition, lyrics, vocalisation, and visuals, all from simple user inputs.
Central to the talk is the idea that we are entering a ’10th machine age’, marked by significant innovation investments. These advancements are set to overhaul paradigms in a range of spheres such as business, governance, educational systems, and societal and cultural structures.
The concept of ‘double convergence’ is introduced, suggesting a synergistic acceleration in technological progress as different domains of exponential growth interact. This convergence is seen as a shaping force for the future, redefining human experience and capabilities. The presenter then sheds light on AI technologies like Pika, which can blend engaging graphics with sound, and showcases the advanced video-creation capabilities of OpenAI’s Sora.
Microsoft’s VASA-1 is highlighted, a system capable of producing lifelike images and animations that promise new levels of realism and new applications, pending the implementation of strong safeguards to ensure its responsible use. Also featured is D-ID, an intelligent agent that allows interactive database querying and manipulation, exemplifying the personalised learning and interactivity of modern AI systems.
OpenAI’s proprietary GPT Builder is mentioned as a powerful resource for developing bespoke AI models, though it’s noted that full access requires a subscription to their service. The role of the International Federation for Information Processing (IFIP) is acknowledged, implying the importance of professional and academic collaborations in the computing field during this technological era.
The session wraps up with a practical demonstration of content creation using these AI tools, encapsulating the speaker’s passion and the concrete outcomes generative AI can achieve. The exploration into AI serves as a reflection on its increasing fusion with all aspects of life, urging individuals and society to deliberate the balance between technological potential and the need for careful governance.
V
VIDEO
Speech speed
184 words per minute
Speech length
442 words
Speech time
144 secs
Arguments
Leveraging technology to train a million youth as peacemakers
Supporting facts:
- Working with Aegis Trust, the government of Rwanda, Peace Action Network of YPO, and other collaborators
- The initiative involves using technology to scale the impact of peace training
Topics: Youth Empowerment, Peacebuilding, Technology in Education
Using PeaceGPT for conflict resolution
Supporting facts:
- PeaceGPT is trained in ONFLIC, a method for peaceful conflict resolution
- Provides guidance based on principles from a provided book
Topics: Artificial Intelligence, Conflict Resolution, Peacemaking Technologies
Report
The initiative, developed in collaboration with esteemed organisations such as Aegis Trust and the Rwandan government, as well as peers like the Peace Action Network of Young Presidents’ Organisation (YPO), utilises the transformative power of technology to enhance the scope and efficacy of peace education globally.
The programme ambitiously aims to train an impressive one million youths to become skilled in peacemaking, aligning with the United Nations’ Sustainable Development Goals (SDGs) – particularly SDG 4, which stresses the importance of Quality Education, and SDG 16, which advocates for Peace, Justice, and Strong Institutions.
The burgeoning use of technology to widen the reach and deepen the impact of peace training is viewed positively. By integrating innovative applications, the programme strives to instil conflict-resolution principles in the next generation, fostering a more peaceful future.
Tools such as PeaceGPT, which harness Artificial Intelligence (AI) for guiding individuals in nonviolent conflict resolution, underline the positive embrace of technological solutions. PeaceGPT utilises the ONFLIC method, a structured approach to conflict resolution, and incorporates principles from reputable sources, representing a synergy of technology and established peacemaking strategies.
A steadfast dedication to employing cutting-edge technologies reflects a comprehensive strategy to tackle sustainable development challenges effectively. This commitment is in line with SDG 9, which centres on Industry, Innovation and Infrastructure, promoting the use of advanced technologies to enhance social impact and drive sustainable development.
Furthermore, this approach echoes the ethos of SDG 17, Partnerships for the Goals, recognising that multifaceted collaboration and the exchange of ideas are fundamental for expanding these peacebuilding initiatives worldwide. Collectively, these aspects present a compelling vision of a future where technology and education merge to overcome enduring societal challenges.
The strategic emphasis on empowering youths and advancing peacebuilding through tech-enabled platforms exemplifies an innovative method for cultivating changemakers capable of steering the socio-political environments of their localities towards peace and prosperity. The synergy of technology, educational interventions, and international cooperation offers a hopeful outlook for realising the SDGs and illustrates the significant potential for lasting, positive change within the global peacebuilding framework.
Related event
World Summit on the Information Society (WSIS)+20 Forum High-Level Event
27 May 2024 - 31 May 2024
Geneva, Switzerland and online