Main Topic 3 – Innovation and ethical implications
19 Jun 2024 10:30h - 11:30h
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Full session report
Experts Tackle Ethical Challenges and Regulatory Frameworks in AI at Multifaceted Panel Discussion
In a dynamic and multifaceted panel discussion led by Dr. Paulius Pakutinskas, experts Thomas Schneider, Vanja Skoric, Nicola Palladino, and Marine Rabeyrin delved into the ethical challenges and regulatory landscape of Artificial Intelligence (AI). The conversation opened with audience insights, revealing privacy, bias, accountability, economic impact, security, misinformation, and existential risks as the main ethical concerns surrounding AI.
A central theme of the panel was the issue of trust in AI. The experts highlighted the role of institutional safeguards and the adoption of a human rights framework as key to fostering trust in AI technologies. They noted the variance in trust levels across different societies and cultures, suggesting that transparent regulatory frameworks aligned with human rights could significantly boost confidence in AI systems.
The proliferation of AI-related initiatives and documents was discussed, with the panel acknowledging the necessity of such diversity due to AI’s complexity and varied applications. The multitude of initiatives, ranging from soft law to recommendations, was seen as catering to different stakeholders and providing a spectrum of governance approaches. This diversity was deemed beneficial, as it allows for tailored solutions that consider the specific needs and contexts of different sectors and cultures.
Stakeholder involvement in AI governance emerged as a crucial point. The panelists concurred on the importance of engaging tech companies, governments, NGOs, and civil society in the AI development process. This inclusive strategy ensures that AI development is informed by a variety of perspectives and ultimately serves the broader interests of society.
An audience question raised the sensitive issue of AI’s military use, which is often excluded from ethical frameworks. The panel recognized the reality of AI’s deployment in warfare and the ethical complexities it entails. The need for more focused attention and potentially international regulation to address the military dimension of AI was underscored.
Another audience member inquired about AI’s impact on government and administration, leading to a discussion on how AI might transform governance models. The concept of augmented democracy, with digital twins participating in decision-making, was considered as a future possibility. The panelists acknowledged the potential risks and benefits of such a transformation and emphasized the importance of managing it to preserve democratic values.
The panel also briefly touched upon the importance of innovation in AI, emphasizing that while it is essential to encourage innovation, it must be pursued with ethical considerations in mind.
In summary, the panel emphasized that ethical concerns about AI are diverse and context-dependent, necessitating a multifaceted and inclusive governance approach. The variety of AI initiatives and documents was seen as advantageous, addressing the needs of different audiences and contexts. The military use of AI was identified as a pressing ethical challenge that requires further attention and possibly international regulation. The discussion concluded with the panel recognizing the need for AI innovation to be balanced with ethical considerations, ensuring that technological advancements align with societal values and human rights.
Session transcript
Dr. Paulius Pakutinskas:
Yeah, so, hello. I'd like to invite the only live person who will participate in this discussion, Thomas Schneider, to our floor. And as it was introduced, I'm Paulius Pakutinskas. It's the same complicated family name as Tomas Lamanauskas, so you can just call me Paulius. And why am I here? Just to discuss some very controversial things like innovation and ethics, and maybe some regulation, because ethics is part of regulation at the same time. And our other panelists are online: Vanja Skoric, Nicola Palladino, and Marine Rabeyrin. So I hope they will join us in some way. And after a few minutes I will ask all our panelists to introduce why they are here and how they are connected with the issue. But for the first step, let's have a game, a play. We will check, do you have your phones? So could you just take your phones and we can have our audience poll. So please use Menti. Oh, I see our participants are joining. So we need the third one. Okay, hello, Marine, Nicola. Hello. Yeah, so, and we have a question: what do you consider to be the main ethical concerns of AI? And please vote, use Menti, join Menti.com, you can use the code. Here it is, 12066360. And we have privacy and data protection; bias; autonomous decision-making and accountability; job displacement and economic inequality; safety and security; misinformation and manipulation; existential risks; and others. Why others? Because there are many more risks, and some of them were mentioned in the key speakers' notes. So, wow, we see quite clear leaders. Because there has been quite a lot of research on this, and there are quite different answers. There were some studies where safety and security and cybersecurity were mentioned as the key point. But here we can see that misinformation and manipulation is the main point. And bias in general is a topic which scores about equal to misinformation. And privacy and data protection issues are a big concern. Maybe we do not think about existential risks so much, but anyway, we can see what concerns our audience here. So that's very interesting. And I will ask a question of our panelists. Maybe we can just have a short roundtable on what you are doing and how you are connected with the topic. Maybe we can start with Thomas.
Thomas Schneider:
Yes, thank you. My name is Thomas. I work for the Swiss government. And, as the Secretary General of the Council of Europe said, I happen to be the one who led the negotiations on the Framework Convention on AI over the last two years. We've been dealing with AI for a long time, also in another committee at the Council of Europe that deals with mass media and human rights in the information society. And this is probably one of the first areas where AI popped up, because with Cambridge Analytica and all of these scandals, of course, people realized that algorithms have power. And yeah, that's it.
Dr. Paulius Pakutinskas:
Perfect, thank you. Maybe we can go to Vanja.
Vanja Skoric:
Good morning, everyone. I'm Vanja Skoric. I'm Program Director at the European Center for Not-for-Profit Law, a non-profit organization that works on the protection of human rights and civic freedoms, also in the digital space. That's why we deal with AI as well.
Dr. Paulius Pakutinskas:
Perfect. Nicola?
Nicola Palladino:
Hello, everybody. I'm very happy to be here. I am an assistant professor on Global Governance of Artificial Intelligence at the University of Salerno, and I'm also an expert member of the Joint Technical Committee of the International Organization for Standardization on Artificial Intelligence, where several standards are being developed to address ethical and human rights-related issues connected with artificial intelligence.
Dr. Paulius Pakutinskas:
Perfect. Thank you. Marine?
Marine Rabeyrin:
Hello, everyone. Very happy to be here as well. I'm Marine. I'm working for Lenovo, and today I'm coming with two hats. First, because I'm working in a company where AI is obviously one of the key development areas, a key trend. The second reason is because I am leading a specific initiative for an NGO, which is really looking at how to mitigate some risks of AI, and more specifically the risk of gender bias: how AI can potentially lead to discrimination. So, obviously, I'm very much involved in the ethical aspects of AI.
Dr. Paulius Pakutinskas:
Perfect. So, we see that we have really, really experienced experts. Our task is now to provoke them to talk and share all their secrets. We would like to have a very interactive discussion, so if you have any questions, you can raise your hand. Here is the microphone, which will be brought to you, and you can ask; you are free to ask anything on the topic. And we are talking about these two issues, which are sometimes a bit controversial, or maybe not, we will find out. So when we talk about AI, it's innovation, because we like to be innovative, we would like to be very skilled technically, and, as Tomas Lamanauskas said, we need to solve big problems which are impossible to solve without AI. But we have some concerns, and we talk about ethics. So are they contradictory, or maybe we can find some solutions that are good? So my first question for all panelists, just as a warm-up: why are we talking so much about artificial intelligence and artificial intelligence ethics, but we do not talk about internet or social network ethics, where we have a lot of problems too? Somehow we stress AI as more problematic, maybe. Maybe we can start with Thomas.
Thomas Schneider:
Thank you. Well, we've also been talking about ethics on social media and so on and so forth. But I guess the difference is that with social media, until and including the Arab Spring in 2011, many of us were thinking that, more or less naturally, new tools and new technologies would lead to a better world and to more democratization and so on and so forth. And then we realized, and we made the same mistake as people 100 years ago with radio and other new communication tools, that you can also use these tools for other things than good things. I think we've just burned our fingers already in the last 10 years, so that we now know that these tools can be used for good and bad things. And people are more concerned also because people realize the power. But of course, if you talk about AI, algorithms are part of all the tools that you refer to. So it's actually the same, or a component of this.
Dr. Paulius Pakutinskas:
Thank you. What about other panelists?
Marine Rabeyrin:
So, yeah, I would maybe say that it concerns more people because it is also related to explainability. You know, most people don't master what is behind artificial intelligence, so it's a little bit the fear of the black box. So that could be one reason. And also, I think it's related to some bad buzz that we heard on the news, where we saw what it could look like if AI is not handled ethically. I'm thinking about some bad buzz that happened a few years ago, when it was shown that AI could discriminate against people, especially when used in a legal context or in an employment context. So I think that has maybe raised some concern. At least it was the case for me.
Dr. Paulius Pakutinskas:
OK, thank you. So, Nicola.
Nicola Palladino:
Well, I think that we are talking so much about artificial intelligence also because, compared to other digital technologies, artificial intelligence has a more immediate and perceivable impact on people's lives. We are talking about applications that are used to screen our CVs when we apply for a job position or to a university. They are used to assess our risk profile when we request a loan. And we also saw during the last elections how powerful they can be with regard to the manipulation and polarization of public opinion. So while we used to think about the Internet as a fundamentally beneficial technology that could help us spread and improve democracy, in the case of artificial intelligence we are more aware of the risks associated with these technologies, also because they are closer to our experience. And for these reasons there is all this talk about AI ethics, because in the end, ethics can be defined as an approach to identify the most proper behaviors and rules in order to maximize the benefits of a situation while minimizing the risks. Thank you.
Dr. Paulius Pakutinskas:
Okay. Thank you. Vanja?
Vanja Skoric:
Yes. Thank you. In the interest of our audience, and to make it more interactive, I would actually challenge some of the assumptions. First of all, the ethics concept has been evolving. Actually, the conversation has moved towards the human rights framework in many of the policy and regulatory efforts, not only concerning AI, but also the Internet and social media. When we talk about potential harms and potential risks, we frequently, not only in the policy sphere but also in the academic and research sphere, more and more talk about rights. Also, I think the Internet and social media are involved in the conversation together with AI, because we know, and are increasingly aware, that all of those are also being powered by AI in some way. When we talk about AI and its risks and benefits, and its ethical and human rights dimensions, we actually touch upon all of these topics.
Dr. Paulius Pakutinskas:
Yeah.
Thomas Schneider:
I would just like to react, because I was waiting to see if somebody would mention it. To give you a simple answer to the question: it is because we've all seen Terminator and other movies that deal with AI. Something like social media and other things are not so easy to personify, but everybody knows what a robot that could destroy mankind looks like, and so on. We've all been influenced by this fear of man creating, starting with Frankenstein and whatever, the superman that will then kill us. And of course, that is the hype that the media, the journalists, the big tech industry themselves, and politicians all mutually reinforce, and this is why we are not talking about data governance, which is equally important. That's because of the creators of movies and stories.
Dr. Paulius Pakutinskas:
Thank you. Thank you, Thomas, you did a great job, because my target was to provoke you, but Thomas provoked you instead, so thank you very much. That's a very good point, because there is quite good research on trust in AI, and we can see that different societies and different cultures show quite different acceptance of and trust in AI. And we can see a quite clear division between Asia, let's say China, India, Brazil, Singapore, where they trust much more, twice as much or sometimes more, than European countries, or let's say rich countries in some cases, okay? And when we talk about ethics, ethics can be understood quite differently in different societies and different cultures. So how can we increase trust in AI? What can we do, especially when we know that Europe is not the strongest economy in the usage of AI? Maybe we can start from somebody online, maybe Nicola.
Nicola Palladino:
Sorry, well, on how we can increase trust across different societies and cultural contexts, I will be quite optimistic, because in the last few years we have had a flourishing of initiatives trying to set out principles for the regulation and governance of AI. And we can see that all of them converge around a very narrow set of principles that you would expect: transparency, accountability, fairness, human oversight, safety and security. Of course, these are very high-level principles, and differences in interpretation occur when we move toward the implementation of these principles. But anyway, I think it is a very good sign that we have this kind of common ground, a common lexicon around which we can develop our discourses. And yes, I personally think that, as already highlighted by previous speakers, we have to move from ethics to rights, and I believe that we should base our future regulation on international human rights law. I think that the Universal Declaration of Human Rights could be a very good basis to overcome all the cultural and geographical differences, given that it is a document that has been signed by more than 190 different countries. Yes, we know that there are differences around the world in the extent to which the Universal Declaration of Human Rights has been implemented and made legally enforceable, but anyway, I think it is a good starting point. And one last thing: I think that behind the cultural differences, every human being on Earth wants the same things. They don't want to be discriminated against, they want to communicate freely, they want their privacy respected, they don't want to be manipulated, and they don't want to be subjected to completely automated processes without the ability to appeal a decision that could potentially damage their interests. So I think that all human beings on Earth have these things in common, and these are exactly the things that are protected, promoted and safeguarded by international human rights law and by the Universal Declaration of Human Rights. Thank you.
Dr. Paulius Pakutinskas:
Perfect. So we come to human rights, as was mentioned by Vanja. Ethics is somehow related to human rights, and that's easier for us to understand. Marine, do you have an opinion on this issue? How can we increase trust, especially across different societies and different cultures?
Marine Rabeyrin:
Well, I believe everyone has a role to play in raising awareness about what AI is and how we are starting to organize ourselves at different levels of governance to make sure that we put the right safeguards in place. And when I say everyone has a responsibility, on my side I'm thinking about the role of companies, tech companies, and how they are also there to contribute to implementing the right governance, but also to raise awareness about what AI really is and to address some fears related to AI, because most of the time AI is used in applications which are far from anything related to humans, which are more about processes, optimization of logistics, all those types of things, and then to explain what is put in place when AI has some impact on humans. So I just want to stress that everyone has this responsibility to increase awareness and understanding, and I fully include the role of tech companies.
Dr. Paulius Pakutinskas:
Okay, good. We will go to stakeholders after a few minutes. And Vanja, do you have something to add?
Vanja Skoric:
Just to share some concrete data: I posted in the chat the link to the global study Trust in AI from KPMG last year, which showed that institutional safeguards are actually the strongest driver of trust for people globally. People are more trusting of AI systems when they believe that the current regulatory frameworks are sufficient to make AI use safe, and when they have confidence in government, technology and commercial entities to develop, use and govern AI. So this clearly shows the pathway we need to take, and of course some regions have started on this pathway.
Dr. Paulius Pakutinskas:
Yeah, this study is really good. Everything is changing by the day and by the hour, so if we did a new study it might be a bit different, but it's really good for capturing the situation and looking at how the world looks, because it was worldwide. So it's a really good study; please read it if you have the possibility. Thomas, what would you like to add?
Thomas Schneider:
Yes, and the comparison to engines was referred to this morning. When ChatGPT came up, I had to talk to many Swiss media, journalists and parliamentarians, because they all wanted to know what AI is, what we can do, and so on and so forth. And I realized, as always when journalists and politicians see a problem, they want to have a Mr. Digital, a Mr. Cyber, one cybersecurity government office, one law that will solve all the problems. That's one element, and that works less and less. The other thing I realized when discussing what to do is that there are similarities in how humankind deals with disruptive technologies, whether it's this one or previous ones, and probably also future ones. I'm not a lawyer working for a government but an economist, and there are so many connections to engines, because that was a disruptive technology that replaced human or animal labor, turning it into motion in vehicles but also in machines that produce something. And do we have one engine law that regulates engines in one way all over the world? Do we have a UN agency for engines? No, because it's all context-based. We have regulation for cars, for airplanes, for different types of cars, for the people that drive the cars or fly the airplanes, for the infrastructure, and so on and so forth. And the same for machines: there are health issues, there are other product security issues, and so on. And, so much for harmonization, we have different levels of harmonization depending on the context in which an engine is used. The aviation system is fairly harmonized, so you can land a plane anywhere in the world according to the same procedures. If you take cars, our friends on the island in the northwest of Europe still drive their cars on the other side; the cars are built so that they're driven on the other side of the road. We allow them to drive their cars with their expertise here, and it more or less works. And the Germans, for instance, still think that they can have roads with no speed limits, because that's important to their identity, it's one part of their freedom. Of course, it's also important for the image of the auto industry. And that doesn't mean that they have more dead people per year, because they have to take responsibility themselves, compared to other countries that have very strict rules. In Italy at every turn you have a sign that says 90, 70, 50, 30, 10. And as a Swiss you think, do I really have to brake down to 10? I know all the Italians drive through at at least 70; it just means drive a little slower. So, just to say that ways of dealing with risks are cultural, based on your history and so on. And you will never get to harmonize this unless you harmonize all cultures. But of course you will have to interoperabilize and harmonize things, and you probably have to harmonize and interoperabilize things more in a digital world, because it's much easier to move AI systems and data around the world and copy them than it was with engines.
Dr. Paulius Pakutinskas:
Perfect. A lot of good examples which illustrate the situation. But I have a question. We have a lot of initiatives all over the world, in different formats: UNESCO, OECD, the G20, the Hiroshima AI Process, the professional level, associations, and so on. It looks like so many documents. Let's talk about soft law, some recommendations. Why do we have so many, and do we need so many?
Thomas Schneider:
We will see many more. And again, look at engines. How many technical standards do we have for engines in different machines? How many laws do we have that regulate engine-driven machines? It's thousands and thousands of technical norms, of legal norms, and then again of social and cultural norms. And we are about to build the same for AI. Of course, this is painful, it's hard work. It goes slightly faster than it did with developing the machines and the regulation of the machines. But again, there's no one-size-fits-all solution. The Secretary General of the Council of Europe has said it, and Tomas has also explained what they do with partners in the sectors. We'll have to have norms for using AI in all kinds of contexts. And we have to have some horizontal shared understanding, and maybe some norms about how to do risk assessment across all sectors. But we'll have to develop rules for every single application of AI. The question is, do we all have to develop thousands of laws? Or can we maybe, and as a Swiss I would rather say, let's see what the problem is and try to find the easiest, least bureaucratic solution, which may not always be a super complicated bureaucratic answer. But basically, we have to find a way to deal with AI in every single way and every single area it is used.
Dr. Paulius Pakutinskas:
Marine, do you have your opinion here?
Marine Rabeyrin:
I was about to make that comment, because I believe there are so many initiatives, as you said, hard and soft guidance, but somehow they talk to different people. I believe there are recommendations or guidance for different types of audiences. I don't believe that we can come to a kind of consensus on the ultimate guidance or regulation because, as we just said before, it also adapts to its own context, to the culture of the audience, to the different types of stakeholders. So I believe it is OK to have so many initiatives, and it probably demonstrates how different stakeholders want to contribute to addressing this topic the right way. And I can share my own experience. As I said before, I'm leading an initiative dedicated more specifically to how to produce or use AI in a way that does not replicate or amplify gender bias. This is a very specific aspect of ethics. So you could say, well, if you are so specific, there should not be so many initiatives. But as we started to work on it, we realized that there is an ecosystem of associations and initiatives working on the ethical aspect which was already approaching this specific topic, but they were approaching it in different ways. That was already four years ago. So do we need to align? Do we need to be only one voice? Comparing the different initiatives, we realized that we were each proposing something a little different. For example, our initiative was rather about recommendations, supporting companies. Another initiative was also talking to companies, but more specifically to lead them to a certification, a label. Another initiative was focusing only on the technical aspects of AI. Every time there is a little difference, which is okay, because then we believe the audience in front of us will have the choice to adapt and pick what is most suited to their own challenge. Rather than trying to be one and merge our initiatives, we said, no, let's acknowledge that there are different approaches and promote each other. Make sure that anyone we talk to is given this panel of choices: if you want to take action on gender bias, there is what we propose, but also what the others are proposing. Join forces by talking to our different audiences with our own specificities, but make sure that we promote each other and give more people the opportunity to know about all those initiatives. Because we can multiply the number of interventions and the number of people we talk to, we are probably more effective than if we tried to merge. So I think there is no issue with having more and more initiatives and different people who want to take action on this.
Dr. Paulius Pakutinskas:
Vanja, could you add some thoughts?
Vanja Skoric:
I think this is a perfect segue to the issue of diverse stakeholders and who needs to be involved, if you agree. I absolutely agree with both Thomas's and Marine's points. The framework that we operate in, both technical and legal and policy, is so complex and at the same time fragile that it really requires that robust safeguards are in place to protect individuals and societies from the negative impacts of AI and to allow us to actually enhance the benefits. But this also means, as Marine was pointing out in her example, that AI development needs to become fully inclusive, to involve diverse disciplines and a variety of expertise. We probably don't even know yet who is needed, aside from the disciplines that we have already identified. And also different groups and communities with lived experience and direct on-the-ground presence, to provide input, examples and warning signs, but also to voice the needs that can be addressed. And all of that needs to happen throughout the whole AI lifecycle: design, testing, development, piloting, use, evaluation. So it's really important to embed in the standard-setting processes, as we start to agree on what these standards should look like in various contexts, that AI developers and deployers proactively include external stakeholders and prioritize diverse voices, including those that might actually be affected by various AI systems, with the simple goal of making AI better and beneficial for all of us and society as a whole.
Dr. Paulius Pakutinskas:
Thank you. Nicola, maybe you have a short addition, something to add?
Nicola Palladino:
Yeah, yeah, of course. I think that having this variety of initiatives is also a necessity because artificial intelligence is so powerful that it involves many layers of governance. We have a transnational level at which we need to in some way harmonize the rules. Then there is a national level at which states have, I think, the role of refining the high-level principles that have been established at the international level according to the specificities of the national context; they also have to develop accountability and oversight systems for companies and, above all, to put enforcement capability in place. But we also need to involve a technical layer, and this is the role of technical community organizations, which have the fundamental role of translating the rules, rights and principles that we have defined at the political level into something that can actually be implemented at the operational level. And then we also need the contribution of NGOs, media and civil society, because we need to raise awareness about the social and political implications of the technical specifications of the technologies that we are building. And I think that we have so many initiatives also because there is some kind of power play, a power struggle between different organizations that want to have a say on this very relevant topic. But in the end it seems that this is beneficial, also because my impression is that all these initiatives are more or less overlapping, and so they are contributing to creating a common discourse about how to regulate artificial intelligence, which is a fundamental prerequisite for the emergence of what in political science is called a regime complex: that is, a set of rules and institutions around which we can regulate a particular domain.
Dr. Paulius Pakutinskas:
Okay, thank you. As I promised, we are waiting for questions from the audience. Okay, so, oh wow, how many? I think the first was the lady in white. Maybe you can just briefly introduce yourself and address either a specific panelist or all of the panelists.
Audience:
Good morning, everyone. Thank you for giving me the floor. I'm Emily Khachatryan, representing the Advisory Council on Youth of the Council of Europe, and coming from Armenia. My question is more specifically about AI being used at border control, because right now it is a rising issue, as it's causing a vast amount of discrimination, and I would like to ask your opinion on the ethical usage of it. As you already discussed, ethics and human rights are interchangeable words. These kinds of surveillance technologies are also collecting a lot of biometric data on migrants, which is not ethically correct and violates human rights. So do you think they should be fully banned, or maybe used with human supervision, in order to prevent this? Thank you.
Dr. Paulius Pakutinskas:
Perfect. Would you like to address a specific panelist, or any of the panelists?
Audience:
No, whoever wants to answer.
Dr. Paulius Pakutinskas:
Okay, so let’s start from Thomas.
Audience:
Thank you.
Thomas Schneider:
Well, I don’t work for the Ministry of Police, or whatever it’s called in your countries, or of Justice. But this is already a reality. In Switzerland, we could vote, about five or ten years ago, on whether we wanted to have biometric passports or not. And there were some people who thought that this was not such a good idea. Of course, we were under pressure from the US and others, who told us that if you don’t have biometric passports, it will be much more complicated. So things are normally not so simple. And then it was almost turned down, not necessarily because people had a problem with the biometric data stored in the passport; the main problem was that it was decided to store the data in a central database, and people didn’t like the idea of a central database. They would rather have preferred the data to be stored in a more decentralized way, because there’s a feeling that it is safer, that it’s not so easy to give access, wanted or unwanted, let’s say. So it’s less about the what; sometimes it’s about the how and what the safeguards are. Because, let’s be honest, it’s much quicker and much easier if you can walk through an airport and just put your passport into a machine than to queue. So if these things are used in a trustworthy way, they can actually make your life easier. If they are not used in a trustworthy way, if you don’t know who has access to the data and who doesn’t, then of course you don’t trust it and you try not to use it, though you may be forced to use it every now and then. And the same goes for the whole face recognition discussion: there are areas, also in the medical field, where emotion recognition or face recognition may be a great solution to a big problem. But if you use the same technology with a different purpose in another area, it can be a disaster. So you really need to look into the context and find the regime for each context. And we hope that this is the value added of the convention: it gives you guidance, at a very high, abstract level of principles, on what the important issues are and what it means to uphold human rights, democracy and the rule of law in particular uses. But it doesn’t give you the concrete answers; those you need to develop yourselves in the context.
Dr. Paulius Pakutinskas:
Perfect. Maybe one of the other panelists would like to add something.
Vanja Skoric:
Maybe I can add; I will put my legal hat on now as a human rights lawyer. I think the answer lies in your question, because if you premise it as a system that already breaches human rights, then the clear answer is that it should not be used. Now, the question is, again, one of safeguards and of assessing to what extent it harms the rights. If it breaches human rights, it is essentially unacceptable. This is not only what I say; it is what the UN Special Rapporteurs, the UN OHCHR, the Human Rights Commissioner and the European Human Rights Commissioner say. Everybody in the field with some authority on human rights has expressed the clear request to ban such uses of AI that breach human rights and pose threats and risks of mass surveillance and data breaches.
Dr. Paulius Pakutinskas:
Good. Okay, so maybe we can just listen to two questions and summarize.
Nicola Palladino:
Sorry.
Dr. Paulius Pakutinskas:
Yes.
Nicola Palladino:
Can I add something to this question?
Dr. Paulius Pakutinskas:
Sure.
Nicola Palladino:
Yeah, well, thank you to the audience for this question, because I think it is one of the most relevant concerns about artificial intelligence. As you know, the European Union approved the Artificial Intelligence Act a couple of months ago, and this is a very important piece of legislation, because it is the first comprehensive legislation on artificial intelligence. It sets a series of requirements and limitations for the development, deployment and use of artificial intelligence that are able to offer some guarantees for human rights. But, unfortunately, all these safeguards do not apply for military, defense and security purposes, especially for migration and border control. Personally, I think this is one of the most disappointing points of the Artificial Intelligence Act, and it is also very dangerous, because we are allowing experimentation with mass surveillance technologies that, if they one day prove to be very efficient and, from the point of view of governments, useful for security purposes, could also be extended to other sectors and to European citizens. So, coming to your question of whether to just ban this kind of technology: I think it would be sufficient to apply, even in the case of border control, the same kind of guarantees and limitations that we have established for European citizens.
Dr. Paulius Pakutinskas:
Good, so we will take two questions and then we will wrap up. Okay, please.
Audience:
Thank you, my name is Catalin Donsu, I’m representing GivDec, and I’m currently a Mathematical and Computing Sciences for AI student at Bocconi. I would like to ask: we already know the uses of AI in environmental policy; early warning systems make use of machine learning algorithms to predict wildfires and earthquakes, and they are used more and more by local and national administrations. During the COVID pandemic, there was much talk of optimizing governance, of making it more mathematical in a sense. Another example: in 2023, Romania introduced Ion, the world’s first AI governmental counselor. So I wanted to ask, with the surge in the popularity of AI, how will administration and the landscape of administration change? Is government by algorithm a growing trend? Is it something to be expected? I know Hidalgo, an MIT professor, even went as far as saying that in the near future we could have a sort of augmented democracy, with people’s digital twins basically taking part in decision-making, a sort of decentralized system. Besides the risks that were already mentioned, what other risks would you say are probable, and what benefits could be expected? And in your opinion, is this a viable possibility?
Dr. Paulius Pakutinskas:
Thank you. Perfect question. Let’s take one more, and then the panelists can choose which they would like to answer.
Audience:
Thank you very much. My name is Wolfgang Kleinwächter, I’m a retired professor for Internet Policy and Regulation. Both the Framework Convention and the AI for Good Summit of the ITU excluded the military use of AI, but we know from the wars in Gaza and Ukraine that AI plays a central role. Time magazine recently had a cover story on the first AI world war, and Guterres, the Secretary-General of the United Nations, has called for a ban. And just last week, the Pope addressed the G7 Summit meeting in Italy and said you cannot leave the killing of people in the hands of machines. So my question is: how do we deal with the military dimension? Very delicate, I know, but we have to deal with it. Thank you.
Dr. Paulius Pakutinskas:
Yeah, let’s hope that we will have some other agreements on military use. And it’s really strange: we are producing more and more precise guns and weapons, but we’re killing more and more civilians in all wars, so it looks like a bit of a contradiction. So maybe you can choose any of these questions.
Thomas Schneider:
Well, actually, all three raised issues that I also wanted to raise. The first one is about why the EU AI Act leaves out security. Sometimes the answer is fairly simple: the Commission only has a mandate to regulate markets. National security is an issue of the member states, and they would never give the Commission the competence to regulate their national security. I’m simplifying slightly, but of course that’s one of the main reasons why. It also shows how an institutional setting shapes the instruments. If you look at the US, if you look at the UK, they have different approaches, also because they have different legal institutions and different legal traditions. The only way the EU could regulate AI is the way they did, because of the way their institutions are made. That’s one thing which is sometimes good not to forget. And then about the military: of course, we at the Council of Europe as well, we don’t have a mandate to deal with military issues. National security is something different, as long as it’s civilian, like law enforcement and so on, but not the military part of it. And I just want to call this the elephant in the room. We know that in the current wars algorithms are used, in Ukraine, in Gaza, wherever in the world, and that this is the so-called race. Some wars in the past, in the last century, were decided not only, but also, by who had the better machines in the air, on the ground and in the water, and then of course you need a strategy for how to use them and so on. And it is a logical fact that as long as people think they can win something by starting a war, they will want to have the best technology to do it. Unfortunately, we were wrong in 1990 when we thought that at least the Europeans had passed that stage. And the other thing, about the governance model: first of all, no matter whether you have, like the EU, a horizontal law, or whether you do it like the UK with sectoral adaptations, we need to empower all administrations, all authorities, the whole society, to know about AI and data; otherwise we will not be able to cope with it. And then, to go back to the engines, this will fundamentally transform the way we govern. When the first railways basically conquered Europe, there was no Italy, there was no France, there was no Germany; these countries didn’t exist. Twenty-five years later we had these nation states, we had parliaments with a working class, with entrepreneurs, and then the farmers or Christian conservatives. That was, to some extent, an effect of the industrial revolution and the milieus it created, and it transformed the governance models of our societies. And the same will, in another way, probably also happen with AI. We’ll use AI to democratize our systems. We won’t have to work for five years on a law and then implement it, and then five years on another one; we’ll have to go for much more agile and dynamic rule-making with the use of AI. But the milieus are also changing, the political parties are breaking apart. So probably in 20 to 50 years’ time we’ll be in a different world, also when it comes to how our societies are governed and how decisions are taken.
Dr. Paulius Pakutinskas:
Thank you, thank you. It looks like we are out of time. So let’s try to conclude. Just a very few words: what is most important, what are the takeaways from our discussion? And please do not forget innovation, which was a bit skipped in our discussion.
Speakers
A
Audience
Speech speed
185 words per minute
Speech length
569 words
Speech time
184 secs
Arguments
AI is increasingly used in environmental policy and governance
Supporting facts:
- Early warning systems use machine learning for natural disaster predictions
- AI is utilized by local and national administrations
Topics: Artificial Intelligence, Environmental Policy, Governance
Governments are incorporating AI tools like AI governmental counselors
Supporting facts:
- Romania introduced Ion, the world’s first AI governmental counselor in 2023
Topics: Artificial Intelligence, E-Governance
The notion of augmented democracy with digital twins is being considered
Supporting facts:
- MIT professor Hidalgo proposed a model of augmented democracy involving digital twins
Topics: Augmented Democracy, Digital Twins, Decentralized Systems
Report
The integration of Artificial Intelligence (AI) across varied dimensions of public service and eco-conscious policy-making manifests an array of sophisticated implementations and a generally positive attitude towards its potential. Central to this technological advancement is the application of AI’s predictive prowess in strengthening early warning systems against natural disasters.
Machine learning algorithms are employed to foresee and alleviate looming environmental catastrophes, signifying the confluence of AI with strategic endeavours aligned with the Sustainable Development Goals (SDGs), specifically SDG 13: Climate Action, and SDG 11: Sustainable Cities and Communities. Romania, setting a benchmark in 2023 with the introduction of Ion, the pioneering AI governmental counsellor, typifies the burgeoning trend of incorporating AI tools into the mechanism of policymaking and governance.
Ion symbolises a groundbreaking breed of AI tools in governance designed for enhancing decision-making processes, heralding a transformation towards more data-driven and efficient public administration, in line with SDG 16: Peace, Justice and Strong Institutions, which underlines the significance of robust institutions in fostering sustainable development.
Further, the discourse on AI in governmental structures delves into augmented democracy. A strategy suggested by an MIT professor, Hidalgo, integrates digital twins—a digital mirror of an actual system or entity—to advance democratic processes. Remaining a neutral proposition reflecting contemplation more than full-scale adoption, it indicates a leaning towards using technology to revitalise civic participation and institutional transparency.
A more significant discourse points to a trend towards governance by algorithm, where AI is not just a supplementary aid but a key piece in local and national governments’ decision-making frameworks. The underlying goal is to capitalise on AI’s computational precision to optimise services and policymaking.
Reiterations of Romania’s AI governmental counsellor illustrate that the embedding of AI into governance represents a pervasive movement aimed at modernising and enhancing the effectiveness of the public sector. Despite enthusiasm for an AI-immersed governance future, the conversation remains impartial regarding the implications of AI’s encroachment in the public realm.
The blend of prospective rewards and potential perils in proposing AI measures necessitates thorough contemplation. The ethics of AI, evidenced by its function in early warning systems and the idea of augmented democracy, highlight the need for careful consideration of the social impact alongside the technical benefits.
The narrative consistently promotes achieving equilibrium, referencing SDGs, notably SDG 9 (Industry, Innovation and Infrastructure), and SDG 16. This suggests a comprehensive assessment of AI’s role in governance and infrastructure, transcending immediate operational improvements to include reflective analysis of long-standing ethical and societal challenges.
In conclusion, the embrace of AI within governance structures and environmental planning is characterised by optimism for its capacity to prompt change. Nonetheless, this optimism is modulated by recognition of the import of a discerning review of outcomes, given the elevated goals of sustainable evolution and principled governance.
The collated evidence envisages a future in which AI could profoundly alter governance landscapes, contingent on its application being paired with astute evaluation of its wider societal repercussions.
DP
Dr. Paulius Pakutinskas
Speech speed
139 words per minute
Speech length
1428 words
Speech time
618 secs
Report
The panel discussion, moderated by Dr. Paulius Pakutinskas (Paulius) and featuring Thomas Schneider alongside panellists Vanja Skoric, Nicola Palladino, and Marine Rabeyrin, joining via online connection, delved into the multidimensional facets of artificial intelligence, highlighting innovative progress, ethical dilemmas, and regulatory consequences.
Initially, an audience poll conducted with Menti assessed the participants’ perceptions of the primary ethical issues associated with AI’s development and deployment, revealing deep concerns about misinformation, manipulation, bias, privacy, data security, and to a lesser extent, existential threats and economic impacts.
Each panellist shared insights based on their specific professional connection to AI, prompting a debate about why AI garners more ethical scrutiny than other technologies, such as social networks, which have known issues. The deep potential of AI to profoundly impact human life was identified as a reason for heightened ethical vigilance.
The conversation shifted towards public trust in AI, noting a regional divide, with Asian and other non-European societies exhibiting higher levels of trust in these technologies than Europeans. Integrating AI ethics into the human rights discourse was suggested as a way to potentially increase trust universally.
The proliferation of global AI initiatives and guidelines from entities like UNESCO, OECD, and G20 was critically evaluated. The diverse range of regulations highlighted the need for a unified approach amid recognition of the importance of AI ethics and governance, despite concerns about overlaps and inconsistencies.
Audience queries further broadened the discussion, touching on the regulation of AI governance structures and unease around AI in military applications. The complexities of crafting strong, enforceable, and culturally sensitive regulations were addressed by the panel. In concluding, the panellists agreed on the importance of both ethical frameworks and encouraging AI innovation, although they recognised that the innovative aspects of AI had not been the discussion’s focus and deserved more attention.
Overall, the session illuminated the complex issues surrounding the rapidly evolving AI landscape, emphasising the necessity for continuous international dialogue, responsible governance, and informed public debates to balance the embrace of AI advancements with the maintenance of ethical standards.
MR
Marine Rabeyrin
Speech speed
160 words per minute
Speech length
1057 words
Speech time
396 secs
Report
Marine, representing tech giant Lenovo and leading an initiative for a non-governmental organisation, spotlights the ethical challenges posed by artificial intelligence (AI), placing special emphasis on curbing gender bias and discrimination. Her dual role imparts a sense of shared accountability, as she manoeuvres through her corporate responsibilities and her commitment to mitigating AI-associated risks in a third-sector environment.
She raises awareness about the ‘fear of the black box’, a metaphor denoting the opacity of AI systems, which kindles public unease—a sentiment amplified by media portrayals of AI as a possible enabler of discrimination, notably within the realms of law and job recruitment.
Such representations have fuelled societal misgivings about AI’s ethical implications. In response, Marine underscores the need for a joint effort from a wide array of stakeholders. She underlines the importance of improving public comprehension and appreciation of AI, contending that its application is centred more on enhancing processes and logistical effectiveness rather than on aspects that directly impact human lives.
Marine postulates that the authentic character of AI is often misapprehended and misconstrued. As an emissary for a principal technology company, she concedes that these corporations have crucial parts to play. Their tasks encompass contributing to the development of regulatory frameworks concerning AI and playing an instrumental role in allaying fears by educating the public.
She deems it vital for the industry to demystify AI for the general public. Engaging with several endeavours directed at AI ethics, Marine views the multiplicity of these programs as advantageous. These initiatives, which differ greatly—from delivering policy suggestions and pioneering certification to providing expert technical counsel—accommodate distinct societal segments, mirroring an adaptable approach sensitive to cultural nuances in AI governance.
She advises against homogenising these voices to prevent diluting their effectiveness or narrowing their scope; instead, she posits that embracing diverse methods is preferable. Her personal contributions to a project combatting gender bias in AI have led her to observe that each initiative proffers unique contributions, whether focusing on accreditation particulars or offering substantial technical guidance.
She advocates for these initiatives to bolster and reference each other rather than consolidate, enabling stakeholders an assortment of options to address specific issues. To summarise, Marine envisages a setting where a multitude of AI ethics initiatives can coalesce and mutually support the shared end of ethical AI deployment.
She argues that an array of strategies will not only chime with wider audiences but will also stimulate stronger, unified action towards fostering ethical AI usage. This plethora of initiatives isn’t a challenge to be streamlined but rather a resource that enhances our capacity to tackle AI’s ethical dilemmas in a comprehensive and multifaceted manner.
NP
Nicola Palladino
Speech speed
147 words per minute
Speech length
1291 words
Speech time
529 secs
Report
The assistant professor at the University of Salerno, who also contributes to the global discourse on AI at an ISO level, brings to light the pronounced impact of AI in contemporary life, surpassing other digital technologies. With AI’s widespread applications—from assessing potential employees and students to affecting credit ratings and political processes—it has undeniably brought forth significant societal implications.
The professor underscores the intimate risks associated with AI, such as discrimination and privacy violations, which amplify the urgent need for robust AI ethics. These ethics are envisioned to establish a framework that enhances the beneficial facets of AI while mitigating its inherent dangers.
His optimism is fuelled by a worldwide agreement on AI governance principles that include transparency, accountability, fairness, human oversight, safety, and security. Despite challenges in implementing these values, the notion of having a universal lexicon for AI governance discussions signals positive progression.
To further solidify future AI regulations, the professor advocates for their rooting in international human rights law, particularly citing the Universal Declaration of Human Rights as a universally acknowledged touchstone. This declaration serves as a potent tool to mediate the variances arising from cultural and regional differences globally.
He points out the requirement for multi-tiered governance initiatives, recommending the harmonisation of regulations at the transnational level, which are then fine-tuned nationally to suit respective contexts. These local adjustments should incorporate corporate responsibility and establish effective enforcement mechanisms. The role of technical communities in operationalising political objectives into tangible technical protocols is deemed essential.
Likewise, civil society—including NGOs and the media—is crucial in publicising the societal and political impacts of AI. The assistant professor acknowledges the power dynamics influencing AI governance, wherein organisations jockey for sway. However, he perceives the burgeoning initiatives as predominantly constructive.
These contribute to a collective narrative, aiding the creation of a comprehensive “regime complex” for AI governance. Addressing a pertinent issue, he critiques the lacunae in the EU Artificial Intelligence Act concerning exempt domains like defence and border control, which lack robust safeguards against mass surveillance excesses.
He calls for extending the Act’s protections to these sensitive sectors to thwart potential abuse and prevent the encroachment of mass surveillance. In summary, the professor candidly supports a globally coherent, ethically oriented, and rights-based approach to AI governance. He harmonises technology progress with the need to safeguard fundamental human rights, advocating for values such as non-discrimination, free expression, respected privacy, and fairness at the hands of automated systems—core tenets that resonate with international human rights law and are pivotal for the ethical assimilation of AI into society.
TS
Thomas Schneider
Speech speed
193 words per minute
Speech length
2446 words
Speech time
759 secs
Arguments
Thomas Schneider has been actively engaged in negotiations on the Framework Convention on AI.
Supporting facts:
- He led the negotiations on the Framework Convention on AI for two years.
Topics: Artificial Intelligence, Council of Europe, Framework Convention on AI
Awareness of AI’s influence on power structures increased with events like Cambridge Analytica.
Supporting facts:
- Cambridge Analytica scandal revealed the impact of algorithms on power and personal data.
Topics: Artificial Intelligence, Cambridge Analytica, Algorithmic Accountability
A multitude of norms and regulations for AI will be developed, similar to those for engines and machinery.
Supporting facts:
- Thousands of technical norms exist for engines.
- Development of regulations for machines is an extensive process.
Topics: AI Regulation, Technical Standards
Report
Thomas Schneider has made a significant, positive contribution to the governance of artificial intelligence (AI) by leading the negotiations relating to the Council of Europe’s Framework Convention on AI for two years. His pivotal role aligns with the objectives of Sustainable Development Goal (SDG) 9, which focuses on Industry, Innovation, and Infrastructure, as well as SDG 16, which aims at Peace, Justice, and Strong Institutions.
The Cambridge Analytica scandal highlighted the critical influence of AI on power structures and personal data privacy, fostering increased awareness of algorithmic accountability. This realisation underscores the necessity for stringent AI governance and regulations to support SDG 16’s aspirations for justice and peace.
Comparisons have been drawn to the myriad of technical standards applied to engines and machinery when considering the complex nature of AI regulation. Anticipation of a similarly extensive and rigorous process reflects a wider consensus on the need for thorough AI regulatory frameworks that balance risk mitigation with the encouragement of innovation, upholding the aims of SDG 9 and SDG 16.
A diverse and adaptable regulatory ecosystem is favoured over a one-size-fits-all model, recognising the varied nature of AI applications. Tailored norms, informed by shared understanding, and cross-sectoral risk assessments are advocated, aligning with SDG 9’s emphasis on industry-specific innovation and resilient infrastructure.
The approach to AI governance also includes a preference for practical and minimally bureaucratic regulatory solutions. Streamlining the regulatory framework to ensure simplicity and efficacy aligns with economic growth (SDG 8) and supports the advancement of industry and innovation (SDG 9). This viewpoint underscores the need for regulations that facilitate rather than stifle technological advancement.
In summary, discussions on AI regulations are characterised by an appreciation of their complexity and the significant influence of AI, along with a consensus on the need for versatile and practical regulatory measures. The establishment of future AI governance frameworks is anticipated to be effective, adaptable, and conducive to innovation, ensuring that AI is responsibly utilised for the advancement of society in line with global sustainable development goals.
VS
Vanja Skoric
Speech speed
157 words per minute
Speech length
757 words
Speech time
290 secs
Report
Vanja Skoric, serving as the Programme Director at the European Center for Not-for-Profit Law, spotlights a significant shift in the artificial intelligence (AI) dialogue, moving from ethical to human rights considerations. This shift is seen in policy development, academic research, and the regulation of AI, as well as within the realms of the Internet and social media platforms, all of which are increasingly driven by AI technology.
According to Skoric, a human rights-oriented discussion is in line with insights from the KPMG global study “Trust in AI”, which indicates that public trust in AI systems is linked to the existence of strong regulatory frameworks that ensure safety and instil confidence in governmental, technological, and commercial bodies overseeing AI.
This correlation demonstrates the direction in which policy and regulation need to progress to maintain trust and safety in AI applications. Skoric, in agreement with her colleagues Thomas and Marine, calls attention to the complexity and sensitivity of the technical, legal, and policy environments of AI.
She argues that there must be rigorous safeguards to protect individuals and society against the potential adverse impacts of AI. The maximisation of AI’s benefits also hinges upon an inclusive development process, incorporating a variety of disciplines, expertise, societal groups, and communities to ensure AI systems meet diverse needs and reflect a broad range of experiences.
Further emphasising the point, Skoric suggests that AI developers and deployers should proactively engage external stakeholders and prioritise the inclusion of a diversity of voices, particularly those who might directly be affected by AI technologies. Such proactive efforts are essential for refining AI for the benefit of society as a whole.
In conclusion, Skoric, wearing her legal expert hat, takes a definitive human rights stance: AI systems that infringe upon human rights should unequivocally be prohibited. This position is supported by major figures and authorities in the human rights domain, such as UN Special Rapporteurs, the Office of the UN High Commissioner for Human Rights, and Human Rights Commissioners in Europe and globally.
They collectively stress the urgent need to prohibit AI applications that violate human rights, including mass surveillance and data breaches, thereby reinforcing the necessity for AI utilisation to be harmonised with human rights principles.