Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue
31 May 2024 09:00h - 09:45h
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Full session report
WSIS Panel Emphasises the Need for Inclusive AI Policies for Global Good
At the World Summit on the Information Society (WSIS), a panel discussion hosted by Globethics focused on the theme of “inclusive AI for a better world,” emphasizing the need for cross-cultural and multi-generational dialogue. The session featured a diverse group of speakers from four continents, with one joining remotely from India, to explore the opportunities and challenges AI presents across different sectors and regions.
Larissa Zutter, a senior AI policy advisor, voiced concerns about AI’s impact on socio-economic systems, particularly regarding job markets and social security. She highlighted the younger generation’s fears about the planet’s future and the potential for increased inequalities, noting that while they are familiar with technology, concerns persist about socio-economic systems not being robust enough to support AI-induced changes.
Ma Xiaowu from Qingsong Health in China discussed AI’s potential in healthcare, especially in improving efficiency and aiding the elderly. He emphasized AI’s role in creating value and bridging generational healthcare gaps.
Diana Nyakundi from UNESCO Nairobi spoke about AI’s application in agriculture and cultural preservation in Kenya, stressing the importance of AI in addressing local challenges. She pointed out the difficulties in scaling AI projects due to infrastructure, awareness, and funding limitations. Nyakundi called for inclusive, multi-stakeholder governance in AI policy-making to ensure diverse perspectives are considered.
Qian Xiao from Tsinghua University highlighted AI’s integration in education, making it personalized and accessible to students in remote areas. She raised concerns about geopolitical tensions between China and the US affecting AI development and student exchanges, warning of the risks of developing separate AI systems.
Michel Roberto de Souza from Derechos Digitales advocated for a human rights-based approach to AI, urging the inclusion of marginalized communities in AI discussions. He suggested that policymakers need to understand technology’s limits and consider community-based solutions for AI deployment.
Pavan Duggal, an advocate in the Supreme Court of India, underscored the need for legal frameworks to minimize discrimination and bias in AI. He called for international and national regulations to ensure AI’s safety and trustworthiness, emphasizing the importance of transparency and accountability.
The panel reached a consensus that AI has the potential to contribute positively to various sectors, including healthcare, agriculture, and education, but only if inclusive and diverse policymaking is prioritized. They agreed on the need for policies to be flexible, regularly updated, and culturally sensitive. The challenges of infrastructure, awareness, funding, and geopolitical tensions were recognized as significant barriers to AI’s inclusive development.
In conclusion, the WSIS session underscored the complex nature of AI’s societal impact and highlighted the imperative for collaborative efforts to ensure that AI development is inclusive, ethical, and responsive to the diverse needs and contexts of different regions and generations.
Session transcript
Fadi Daou:
So, thank you and welcome everybody to this very important session at the WSIS, in this rainy weather. I hope that you will have better weather during the weekend, but Geneva is also very busy. We are having today, at the same time, the WSIS and AI for Good, the two summits somehow overlapping, and we have wonderful speakers on this panel to discuss together inclusive AI for a better world through cross-cultural and multi-generational dialogue. So, I would like to welcome all our speakers. Some of them came from far away, I would say; we have speakers from four different continents on the panel, so welcome everybody. I will start by asking you to introduce yourself, but let me say one word about Globethics, which is inviting for and hosting this session this morning. It will be a very fast session, because we only have 42 minutes left for the session, and we have five speakers present with us here in the room, and one speaker will join us remotely from India. The aim of this discussion, for those who are with us in the room and those following online, and thank you to the whole audience here in the room and online, is in fact to look at the question of AI and ask how we can, and what we shall do to, make it inclusive, while at the same time taking into consideration cultural diversity and also generational diversity, concerns, and hopes. So welcome everybody. First, let’s start with a very quick round, less than one minute for each speaker, to say who you are, which organization you represent, and if you have any specific expectations from WSIS and AI for Good. We start with you, Larissa.
Larissa Zutter:
Hi, I’m Larissa Zutter. I’m a senior AI policy advisor at the Center for AI and Digital Policy, where I’m also a board member. I have a background in political economics and international security, and I’m mainly researching topics of socioeconomic implications of AI. I think my personal expectations, I don’t have that many. I’m interested to see some of these perspectives. Also, I think it’s great that we have a much more global representation than in a lot of other conferences that I go to, so that’ll be very interesting to get the opportunity to do that.
Fadi Daou:
Thank you, wonderful. So, please, Mark.
Xiaowu Ma:
Hey, hi. Good morning, everyone. Thank you for joining this forum. I’m Ma Xiaowu; my English name is Mark. I’m the CMO of Qingsong Health, a very large health technology company in China with more than 100 million users. Next year, I hope to show you more of our work around the world. Thank you.
Fadi Daou:
Thank you. Thank you, Ma. And I would like to thank the Qingsong Health Group, who helped us in having you here with us. And Diana, I would also like to thank the UNESCO Nairobi office, who put us in contact and also helped in having you here in Geneva with us. Please.
Diana Nyakundi:
Thanks, Fadi. Good morning, everyone. I am Diana Nyakundi. I am based in Nairobi, Kenya. I work as a senior consultant to the UNESCO regional office for Eastern Africa, but I also work with Research ICT Africa as a full-time AI researcher. Research ICT Africa is based in South Africa; it’s an African think tank that works on digital inequalities and issues around AI and emerging technologies. And with UNESCO, I am the lead country expert on the readiness assessment methodology, so just researching where Kenya is at with regards to AI and how ready the country is. Thank you.
Fadi Daou:
Thank you. Wonderful. Thank you, Diana, for being with us; you just arrived yesterday from Nairobi. And Qian, please.
Qian Xiao:
Hi, everyone. Thank you for having me here. I’m Xiao Qian. I’m the Vice Dean of the Institute for AI International Governance, a think tank affiliated with Tsinghua University. We are a very young think tank, but we focus on AI governance. And ever since 2020, we have run the AI for Sustainable Development boot camp, open to young people around the world; more than 3,000 young people have attended this boot camp. And we are doing everything we can to support the AI for Good initiative. Thank you.
Fadi Daou:
Thank you so much. And Michel.
Michel Roberto de Souza:
Thank you so much, Fadi. My name is Michel Roberto de Souza. I am Public Policy Director at Derechos Digitales, an NGO based in Santiago de Chile. We work in all of Latin America on technologies and human rights: we work with cybercrime, with gender and technologies, and also artificial intelligence. It’s a pleasure to be here. And our expectations regarding the WSIS, I think, are to see what has happened in these 20 years, but also to foster a multi-stakeholder approach and participation in UN processes. Thank you.
Fadi Daou:
Wonderful. Thank you, Michel. And welcome. So as you see, we have speakers from Asia, Europe, Africa, and Latin America, and we have one online. So with us is Pavan Duggal. Pavan, are you hearing us? And can you introduce yourself, maybe in one minute, from India?
Pavan Duggal:
Thank you, Fadi. Good morning, everyone. I’m Dr. Pavan Duggal. I’m an advocate in the Supreme Court of India, specializing in cyber law, cybercrime, cybersecurity, and artificial intelligence. I’m the chief executive of Artificial Intelligence Law Hub, which is looking at evolving legal principles for AI. I’m the chancellor of Cyberlaw University, which is an online platform providing various courses. I have written 199 books, and I’ve been working at the intersection of law and technology, specifically law and AI. My expectation is that with this WSIS Forum and the AI for Good Summit, the envelope is going to be pushed for a quicker, more rapid evolution of legal frameworks in countries for governing and regulating artificial intelligence. Thanks.
Fadi Daou:
Thank you, Pavan. I suggest that your next book should have the title The 200. So that’s wonderful. We look forward to a very rich input and discussion. We’ll try to keep five, seven, ten minutes, if we can, for the audience, if we have any questions, but the priority is for the speakers. And so again, inclusive AI is the main discussion, but we will try to look at this question, as I mentioned, from an intercultural and intergenerational perspective: how can we take into consideration the contextual realities related to AI governance, AI regulations, AI hopes and concerns, but at the same time the different approaches related to age and generations? So I’d like to start with you, Larissa. You have lots of young people around the table here, I mean, on the panel, which is wonderful. And starting with you, I think you can bring the voice of the younger generation. What does AI mean for the younger generation, from a hope and concern perspective?
Larissa Zutter:
So first I want to preface that obviously I can’t speak for my entire generation. I think everyone has a lot of different views, and they’re also very dependent on where you live in the world, et cetera. But I think one big concern that we can see does not necessarily relate to the technology itself, but much more to the underlying structures that we have, which then have consequences on our lives. So for example, a lot of young people are very worried about the future of our planet. They’re also very worried about, for example, systems like social security systems, which have a massive impact on how AI will influence our job markets, unemployment rates, housing, things like that. So whilst I see with maybe the older generations a lot of fear around the technology itself, I feel like younger generations are much more used to the technology, so they don’t fear the technology as much, especially in their daily usage. However, there is quite a lot of concern around our social systems not being able to carry the weight of the next generations. For example, it’s well known, when you look at some of our systems, that they need to be revised, and that’s something that a lot of people in my generation are concerned about, and I don’t think that is enough of a focus in these kinds of discussions. A lot of the time we talk about the technology itself and things like addiction or bias, and those are not unimportant topics, but all the socio-economic systems that underlie basically our entire society are equally important in terms of defining how we use the technology, but also what consequences it has. So also on topics like inequality: I would say a lot of people in my generation are quite scared of inequality increasing, especially because we’re no longer seeing the gains that the generations before us made.
So this is the first generation that will probably earn less than their parents, things like that, and I think that is something that my generation is quite worried about, especially in the longer term.
Fadi Daou:
Interesting. So the socio-economic perspective is a source of concern for young people, from the job market challenge, losing jobs, to increasing inequalities. But can I ask you also: what would make AI an opportunity for the younger generation? How do you look at it also as an opportunity, maybe an equalizer from the socioeconomic perspective? Do you believe, first, that it can be?
Larissa Zutter:
Yeah, so I think that’s a super loaded question, because, yeah, of course there are definitely positives to be had. I think, generally, the younger generation knows how to leverage the good parts of consumer-oriented AI. So ChatGPT, things like that, is very commonly used by younger people. So we know how to leverage it; it’s much more integrated into our lives, and that definitely does have positive impacts. However, there are equally so many negative impacts that come when we look at, for example, social media, which is not necessarily the same thing as AI, but it’s integrated, whatever. But I think there are definitely positives to be had; they just need to be inserted into a world that has systems that also protect us.
Fadi Daou:
OK. Wonderful. So I really hear your concerns, in fact. And it’s interesting: I was expecting, in fact, more of a focus on the opportunities, but it is very good to put the concerns of the younger generation on the table from the beginning. That’s very interesting, and I think we will discuss this with the other speakers now. So, Xiaowu, we are going back to you. I know that you personally had a very interesting experience building your startup and getting to the highest level with it, and then also in the framework of the Qingsong Health Group. So what would you like to add from your experience on this topic?
Xiaowu Ma:
OK, these are big questions, but I’ll try to answer. The development of artificial intelligence is bringing opportunities to all parts of the industry, like improving efficiency, making better decisions, and creating new industries and job opportunities. At the same time, the opportunities also bring challenges, such as issues of data privacy and security, bias in the algorithms, and the possibility of a widening divide. My company works in health, so for us, AI can help all people, including the elderly. Let’s work together to make a better world with AI, so that it can give us more chances, and so that the benefits also go to each person. We need to go together and look forward to the future.
Fadi Daou:
That’s wonderful, to see that AI can really contribute to health development and access to health, especially for seniors, I mean, for elderly people. That’s a very positive perspective, I would say, in a practical way, from your institution. Diana, Africa, the continent of hope: you led, as a senior expert, the readiness assessment methodology in Kenya, and you are involved across the whole continent in the AI discussions. So, how do you perceive it from the contextual reality of Kenya and Africa: as an opportunity, or as a challenge, for the younger generation and for sustainable development? And do you have evidence in your hands? I mean, you did this readiness assessment study in Kenya.
Diana Nyakundi:
Yeah, thanks Fadi. So with regards to opportunities, there are a lot of AI pilot projects that are coming up: AI in health, with regards to how AI can be used for accessibility and for disease detection; AI in education; AI in cultural preservation. There’s a lot of AI that’s being used to preserve historical sites, AI that’s being used to digitize our national museums, et cetera. So there are a lot of opportunities coming up. However, they’re still very small scale, and that’s due to a lack of awareness. There’s a big diversity in the country and in the continent at large, a rural and urban diversity, and AI is mainly centered in the urban areas. So because of that, there’s a lack of awareness and a lack of proper funding, and that’s why these projects are still very small scale. There’s still a lack of infrastructure to really continue the development of these technologies. So the opportunities are there, but they’re still not large scale. Secondly, just on where the conversations currently are with regards to the African and the Kenyan context: big tech companies are really rolling out these products, and we constantly find ourselves in positions where we have to scramble to figure out how to use and how to respond to these technologies. So the conversations now are centered around how we can flip that narrative: figure out where we currently are and where we want to go as a society, and how AI can come in to fit into that. So not us going to and responding to where AI technologies are, but AI coming to fit into our current realities. Take, for example, AI in agriculture, which is a very big contributor to our economies in Kenya and in a lot of African countries.
So when we say we have used AI in agriculture and we have moved from point A to point B, can that difference really be attributed to AI, and how can we really account for that difference? So when we speak about opportunities, we really don’t want to think about it in terms of us catching up, but of AI coming into our context and really solving our contextual issues. Yeah.
Fadi Daou:
That’s wonderful. Thank you for this input, Diana. So from the job market first, as mentioned by Larissa, then the health sector, then to agriculture and rural areas: how AI can also be a tool to leverage some of the economic and socioeconomic developments. That’s great. In the second round that we will do, I will ask you what needs to be done to accelerate this process from a policy perspective. But before that, Qian, you are at the top as Vice Dean of the Institute for AI International Governance at Tsinghua University. But before talking about policies, which will be the second round, I want to ask you from a contextual perspective: how do you look at AI, considering that there are specific expectations or needs or realities in China, or in Asia in general, that need to be heard and taken into consideration? And then, in the second round, I will ask you what is needed at the top level from a policy perspective.
Qian Xiao:
Okay, thank you for the interesting question. Actually, in China, I should say, according to surveys, Chinese people are very positive towards the development of AI, and they enjoy the benefits brought about by AI. Just take Tsinghua as an example. We are a university that embraces technology very much, and we use this technology in our classrooms, in the learning methods, and also in the classroom infrastructure. We use AI a lot, and students enjoy high-quality education, and we are also trying to bring this high-quality education to other parts of China, to students who are far away from Tsinghua and who cannot come to Tsinghua to have this kind of high-quality education. At the same time, we are also trying to make education personalized, so students can use AI technology to have their own curriculum and to analyze what is best for them and what their disadvantages and advantages are. So it is really good for the students’ development. As for the perspective of using AI, in Asia we sometimes feel that we are culturally different from the West, and now we, especially students at Tsinghua, feel the tension between the East and the West in developing this technology, especially during the geopolitical tension between China and the US. A lot of the top-level, I mean, very advanced technologies are not able to be imported to China, and also many top-level students who study computer science and AI may not be able to go to the US to study. So that is a bit of a worry for us. We just think that, with this good technology, if we have separate systems in the future, it will make this world even more complicated, yeah.
Fadi Daou:
Wonderful, thank you. Thank you for this input on the educational level, on AI as a mechanism that can contribute to enhancing education, and on this geopolitical polarization or tension that has a direct impact on technology development and deployment. So thanks, Qian. Michel, you are leading policy engagement in an organization that looks very much at technology and AI from a human rights perspective. So what is your input on this question, looking at AI from a Latin American, I would say, cultural or contextual reality? How do you look at it? And we know that this year is being presented by everybody as the big year of elections globally, but it won’t stop this year. I mean, human rights, democracy, and dignity are very much related to technology, and more and more to AI. So how do you look at this from your own perspective?
Michel Roberto de Souza:
Thank you very much for the question. Well, I think that we are at a very interesting year and time in the world and in Latin America in general. And regarding artificial intelligence, I think that one of the big challenges that we face in Latin America, and in a lot of different places, is the difference between the global north and the global south. This is something that we can see not only here, in the discussions that are more global, with the presence, for instance, of Latin American states or Latin American civil society organizations, and how sometimes we don’t have this kind of presence here, and how the discussion sometimes is led more by global north states and institutions, but also in the way that the global south maybe consumes AI and is sometimes the source of data, and so on. So I think that this is a huge challenge that we have to face. This is something really important. And when we look at Latin America, for instance, we have problems regarding the role of the state in artificial intelligence. For instance, we did some great case studies to see how different Latin American states were using or deploying AI, how they handle, for instance, data governance, and how they approach this kind of technology. And we see that sometimes there is a lot of techno-optimism among policymakers: they want the technology to solve all the problems that they have in their organizations, and we know that this is not possible, because there are limits to the technology, and sometimes it’s also difficult to understand what these limits are. So what we did in Latin America is not only these case studies, but also talking with policymakers and doing courses with policymakers, to try to let them understand the technology so they can deploy it well. But this is the first layer.
I think that now we face the second big challenge in Latin America. We have the Brussels effect regarding data protection laws, but now we are at a time when we are facing maybe the Brussels effect regarding AI regulation. So we have had a lot of discussions to regulate artificial intelligence in Brazil, in Peru, in Chile, in a lot of different places, and I’m not sure if we are going to follow the EU model or if we are going to create our own model, but I think that this is a huge challenge that we are facing, together with the questions and problems that you mentioned regarding the use of AI and deepfakes, for instance, in elections. So I think that we are in a really interesting time.
Fadi Daou:
Okay, thank you. Thank you Michel and this is definitely a tension and maybe a balance at some point between the Brussels effects I mean as it happened with GDPR globally and now the question about AI. So how to include actual local realities but at the same time to cope with the international requirements. So I think this is a good transition to hear from you and you are a global expert in international law and cyber security and technology. and to being also a member of Globethics and involved in this topic also from an organizational perspective. So can you give us your input on, of course, you can also mention about the Indian reality in India and how things are evolving in India. But at the same time, from an international law perspective, so how is this balance between local realities and global needs for regulation can be taken into consideration?
Pavan Duggal:
We are very clear that the legal frameworks of artificial intelligence have to be an important catalyst in moving towards an inclusive AI. If you’re expecting AI on its own to be inclusive of all groups in humanity, then potentially we are not on track, primarily because artificial intelligence today is dependent on the kind of big data sets that it’s being trained upon. And since the data sets themselves have intrinsic discrimination embedded in them, the discrimination automatically comes into artificial intelligence. So if you’re looking at inclusive AI, you’ll have to make sure that there will be, if not no, at least minimal discrimination. Further, there should be no bias. Now, there are two options. One is that we can go with folded hands to companies and say, please don’t engage in bias or discrimination, which may not necessarily be a very effective process. Last year, the Future of Life Institute categorically requested all these big AI giant companies to pause their progress for six months to enable legal frameworks to catch up. But nobody listened. So I think this kind of voluntary call for action may not necessarily hold much water. The better option will be for countries to come up with legal frameworks that encapsulate the requirement for AI companies and generative AI platforms to put in appropriate safeguards, so as to minimize the possibility of bias or discrimination being embedded in or perpetuated by the AI algorithm. Another big thing is the complete lack of transparency. Today, artificial intelligence is a black box. Therefore, every user has not just a right to inclusive AI as a fundamental human right, but also a right to explainability and accountability. And for that, if the legal frameworks can start pushing the envelope and bringing in more accountability for the AI players, that could be a good starting point.
When I look at three major laws, the recent European Union AI Act, the Chinese law on generative AI that was implemented on the 15th of August 2023, or New York’s law on AI impacting employment, almost all of them are pretty clear that they want to minimize the risk to the public at large. And in that direction, almost all three laws mandate minimal discrimination and minimal bias, and stipulate that you must resort to regular audits. Of course, there has to be legal liability as well, because today a large number of people are not just swayed by AI as a magical term; they are also being sucked into a new revolution. This is the new revolution of artificial stupidity. More and more people have begun relying upon AI without testing the authenticity or veracity of the output that AI is generating, knowing fully well that AI hallucinates and that the technology has not yet evolved to a level where hallucination is completely absent. Yes, on the 13th of May 2024, GPT-4o was launched, the newest and most advanced model. But that model is also capable of potentially being abused by different kinds of cyber actors. That’s the reason why there has been a massive spike in the last two and a half weeks in AI crimes and AI-generated cybersecurity breaches. Therefore, these entire issues will have to be dealt with from a legal, policy, and regulatory standpoint if we really want AI to be inclusive for a better world. Otherwise, we will continue to see a world which will be further accentuated and divided into AI haves and AI have-nots. And in this regard, while law can play its important role, the legal frameworks are yet to be established at the international and national levels.
It’s imperative that nations start pushing towards agreeing upon certain common legal principles, which can be used for enabling and also regulating the use of AI. And that, I hope, is the area which the AI for Good Summit at ITU and the WSIS Forum could potentially be looking at. So a lot of work is happening, but we will still have to wait and watch how the legal frameworks effectively get developed and enforced with the passage of time. Thanks.
Fadi Daou:
Thank you, Pavan. Thank you also for introducing this complexity between voluntary engagement from an ethical perspective by the companies or developers of AI, and the need for regulations and legal frameworks to ensure that AI is also safe and trustworthy. And definitely this black box is, I think, somehow inherent to the technology itself. I mean, I’m personally taking a course now to understand this technology, and one of the things I learned is about neural networks: the output does not only depend on the input and the quality of the data, but also on a process inside the machine that is very hard to control or explain. So definitely, I think we are taking the risks of this lack of explainability and transparency in this technology, hoping that it can be mitigated, as you mentioned, by regulations and also by ethical commitment from the developers and deployers of this technology. Last year, Globethics organized a global consultation on AI, and a few months ago we launched the policy report that came out of it; it can be found on Globethics’ website and downloaded for free. The title is Inclusive AI for a Better Future. It’s a small policy report, because we do believe that AI can contribute to a better future for all. However, we need to take into account all the concerns that have been expressed here. And so this is now my second and last question, asking all our panelists to answer very, very quickly, in one minute or one minute and a half: what are the needed policies or instruments or resources, whatever you choose from those categories, to make AI more trustworthy, safer, and more inclusive, on the different levels and in the different fields that you all mentioned? And I would like to start with you, Michel.
If you had the opportunity to lobby, I would say, the leaders of Latin America gathered in a summit, and you had one minute to tell them what they have to do now, from a policy perspective, to enhance the situation, what would you say, Michel?
Michel Roberto de Souza:
First of all, I think that they need to listen to civil society. They need to listen to, and not exclude, marginalized communities that maybe don’t know the exact technical terms but suffer the consequences in their daily lives. So I think that is the first thing. The second thing would be to have a human rights-based approach. We have human rights law, we have a very rich jurisprudence, and so on, and this is something that we have to keep in mind: a human rights-based approach grounded in the human rights that are already in place. And the third thing I would say is that we can have other solutions. For instance, we have a publication that we did regarding feminist artificial intelligence development, in the sense that we can have other views and community-based solutions to the problems that we have. So we can have different kinds of artificial intelligence, and we have the capacity to develop it in other ways too.
Fadi Daou:
Wonderful. I think this is so important to be considered by policymakers: this multi-stakeholder approach, including those who are the most marginalized, or who suffer, or who are at the highest risk from this type of technology. Thank you, Michel, for your input. Qian, what do you think from a policy perspective now on the governance of AI?
Qian Xiao:
OK, well, I’m doing a lot of research on the international governance of AI. And from our perspective, we would like to have an international framework for governance. Now, we need to have very flexible policies, because we know that the technology itself develops so quickly and governance always falls behind; there is always a pacing problem in this regard. So the policies need to be very flexible, and we need to be open to amendment and updating. Our Indian friend just mentioned China’s law on large language models. Actually, it is not a law, it is a temporary measure. It is temporary because we know that it needs to be updated all the time; nothing is certain. So from a policy perspective, it must be flexible. And second, the policy itself should be very inclusive. We should have all countries included when talking about international governance, not only the developed countries, and we also need to take all stakeholders into consideration, because AI is so different that we have many stakeholders: governments, non-governmental organizations, but also companies. So we should bring them all together when we design the framework. I’ll stop here.
Fadi Daou:
Wonderful. Thank you. So, Diana, am I being too hopeful, or unrealistic, if I think that AI can be an opportunity for Africa to make a leap, and that rather than deepening the divide, it could accelerate the socio-economic development of the continent?
Diana Nyakundi:
Yes, I think to some extent you are a bit too hopeful, because I would say we are currently making demands on policy, but we don’t have the building blocks to support that policy and its implementation. If we talk about the creation of policy but we don’t have the infrastructure, the skills, and access to compute and data, then what is the point of that strategy if it is just going to sit there and not be implemented? However, it’s not all doom and gloom. There is still considerable progress across the continent, like I said earlier, and a lot of projects are coming up. So if we are creating strategies and policies around AI, then, like you said, multi-stakeholder governance is really important: bringing everyone into the room, not just the policymakers but also the people who are going to be affected by these technologies, so that their voices and perspectives can be heard, and what they feel should be included in the creation of these technologies can also be heard. Secondly, our policies have been said to be largely copy-pasted from the West, which does not reflect our realities. So the creation of policies should really be contextual to our realities. And when we speak about ethical principles, we should not speak in blanket statements of transparency, accountability, responsibility and so on, but really think about how we are going to measure these principles, accountability, fairness, again anchored in our contextual realities. And lastly, as we create these policies, we should really understand our history, our perspectives, our geographical context, and bring an intersectional expertise to these discussions and developments. And again, if we speak about inclusivity, we should talk about varied platforms of dissemination.
Because, again, back to our context: the dissemination of AI products has been focused mainly on the internet, but there is the issue of access, and of the affordability of the internet itself. So think about varied platforms of dissemination. For example, radio is something that a wide range of people can access, as is television. Those are just some of the things we should think about when we think about policies with regard to the African context.
Fadi Daou:
Thank you so much. And I think this also calls for global budgetary engagement when it comes to infrastructure, to make such projects available in these contexts. So thank you, Diana. And what would be your wish, proposal, or idea to move forward in this direction?
Xiaowu Ma:
Yeah, time goes so fast. Thank you for asking. For example, my company, Qingsong Health, works with hospitals across China. With our AI department and AIGC tools supporting doctors, we help doctors and medical staff in their daily work. I think AI that helps more people can create more value. Lastly, I am very glad to attend this forum. I hope we can share some new ideas, and we can continue this conversation afterwards. Thank you.
Fadi Daou:
OK, thank you. Thank you. And Larissa, in one minute, one suggestion from your side on the policy level.
Larissa Zutter:
So this is not quite as concrete as you might want, but I want to piggyback off what was said before. I think at the international level we really need to do a much better job of having much, much more diversity when it comes to making the overarching legal frameworks. But I also think we need to remember what Diana said: there needs to be enough space for cultural context, or any kind of context, to be built into these principles or whatever we develop. And we must not forget that once we have international principles, there need to be sectoral, country-level, and all kinds of other talks, because AI is put into a socio-technical environment, and that largely defines how it is used. So there will be different needs at different country levels, at different regional levels, and in different industries. And in those processes we need to be even more careful to have a lot of diversity, because that is usually where it gets lost. At the international level we usually try a little harder to be inclusive, but as soon as you look at the industry level and the principles being developed there, it is very homogeneous. So we really need to make sure that even at this level, and also at the country level, we have all of the stakeholders, and not just cultural and multi-generational diversity but also things like educational diversity and diversity of thought. In short, we need to be careful to be diverse and inclusive not only at the international level, but also once we have more concrete solutions at the industry or country level.
Fadi Daou:
Wonderful, thank you. This raises another question about the frameworks or mechanisms to ensure this diversity on both levels, local and international, but we don’t have time to discuss it; it will be for next year, Pavan. I would take advantage of the remaining time, although in fact we are already out of time, just to mention that in 15 minutes Pavan will be hosting another panel, in room K1 in the same building, on artificial intelligence regulation, its developments and trends, which I will also have the joy of contributing to. So, to conclude with you: if ITU asked you to choose the title of next year’s WSIS Summit, what would you propose, in 10 seconds? Okay, wonderful. Thank you for this input, and we’ll meet in a few minutes in the other session I mentioned. I would like to thank all our speakers and the audience. I’m sorry that the format did not really allow for a discussion with the audience, but I’m sure we enjoyed and learned so much from these very diverse perspectives, which were also very much aligned, I would say, on some key outcomes. We will produce an outcome report from this session that will be shared with ITU and WSIS, with the speakers, and online. I have the pleasure of sharing copies of the Globethics policy report on inclusive AI for a better future with the speakers; I have copies only for the speakers, but it is available online for everybody. Thank you all.
Speakers
DN
Diana Nyakundi
Speech speed
175 words per minute
Speech length
1059 words
Speech time
363 secs
Arguments
AI policy implementation in Africa is challenged by lack of infrastructure and skills
Supporting facts:
- Demands on policy exist without the building blocks to support its implementation
- Lack of infrastructure, skills, compute access, and data access hinder policy effectiveness
Topics: AI policy, Infrastructure, Skills development
Despite challenges, progress is evident across the continent with AI projects
Supporting facts:
- Notable progress with AI-related projects in Africa
- Potential for AI to accelerate socio-economic development recognized
Topics: AI development, Socio-economic development
AI policies and strategies need multi-stakeholder governance including affected communities
Supporting facts:
- Necessity for inclusivity of policy makers and communities affected by AI
- Importance of hearing various perspectives during policy formulation
Topics: AI governance, Multi-stakeholder approach
AI policies in Africa should be contextual, reflecting African realities rather than copying Western models
Supporting facts:
- Current policies often replicate Western standards, ignoring local contexts
- Need for policies anchored in the unique history, perspectives, and geographical context of Africa
Topics: Contextual policy making, Cultural relevance
Ethical AI principles should be measurable and adapted to African contexts
Supporting facts:
- Ethical principles need operational metrics within African realities
- Fairness, accountability, and transparency must be evaluated in a relevant way
Topics: AI ethics, Cultural adaptation
Dissemination of AI in Africa should utilize various platforms beyond the internet for inclusivity
Supporting facts:
- Wider reach is necessary owing to issues of internet access and affordability
- Utilizing radio and television could improve access to AI advancements
Topics: Dissemination, Digital divide, Media platforms
Report
The implementation of artificial intelligence (AI) policy in Africa faces substantial challenges, primarily due to inadequate infrastructure and a lack of essential skills. These challenges are reflected in the AI policy landscape, where, despite burgeoning policy demands, the fundamental support systems are largely absent.
Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the continent’s narrative on AI is not entirely bleak. Projects focused on AI are marking noticeable progress, and these developments affirm AI’s potential to catalyse socio-economic growth and contribute significantly towards the Sustainable Development Goals (SDGs), notably SDG 8: Decent Work and Economic Growth, and SDG 9: Industry, Innovation, and Infrastructure.
The effective governance of AI necessitates a multi-stakeholder approach, underlining the importance of inclusivity in policy-making—a process that should extend to incorporate the voices of both policymakers and the communities influenced by AI. This approach is conducive to the aims of SDGs 16 and 17, which endeavour to establish peace, justice, and strong institutions, and to cultivate partnerships for the goals.
AI policies in Africa should ideally espouse a context-specific and culturally sensitive orientation. The prevailing tendency to mimic Western benchmarks is inadequate, as it fails to consider Africa’s distinctive socio-cultural and geographical complexion. Policies need to resonate with the continent’s diverse historical and cultural context, a move imperative to tackling inequalities and stimulating sustainable cities and communities, in accordance with SDGs 10 and 11.
In the domain of AI ethics, the sentiment is similarly progressive. Ethical principles, including fairness, accountability, and transparency, should be re-interpreted within the African cultural milieu, transforming them into tangible metrics aligned with the continent’s specificities. This alignment echoes the ethos of SDGs 10 and 16, which champion reduced inequalities and the promotion of justice.
Addressing the digital divide, there is a persuasive case for the diversification of AI dissemination mediums. To transcend the confines imposed by variable internet access and affordability, traditional media platforms like radio and television offer alternative pathways for broadening AI’s reach.
This ensures the inclusive propagation of AI advancements, propelling progress towards SDG 9 and SDG 10 which focus on curbing inequalities as well as developing infrastructure and innovation. In essence, Africa’s path towards the implementation and responsible governance of AI, despite being strewn with obstacles, harbours a landscape of potential and affirmative growth.
Key to this trajectory are collaborative and inclusive policy governance, culturally attuned ethical frameworks, and broad-based information dissemination—all pivotal in realising the transformative prospects of AI in alignment with sustainable development objectives. These strategies are instrumental in fostering educational empowerment, economic development, and technology’s equitable utilisation across the African continent, in harmony with the United Nations’ Sustainable Development Goals.
FD
Fadi Daou
Speech speed
156 words per minute
Speech length
2496 words
Speech time
961 secs
Arguments
AI contributes to health development and access to health for elderly people
Supporting facts:
- Fadi Daou mentioned the positive impact of AI on health access, especially for seniors.
Topics: Artificial Intelligence, Healthcare
AI can be a tool to leverage socioeconomic developments in various sectors
Supporting facts:
- AI projects in disease detection and accessibility
- Use of AI for historical sites and digitization of national museums
- Contribution of AI in agriculture to economies in Kenya and Africa
Topics: AI in health, AI in education, AI in agriculture, AI in cultural preservation
Inclusion of marginalized communities in AI policymaking is crucial
Supporting facts:
- Marginalized communities suffer the consequences of AI, may lack technical knowledge but have valuable insights
- A human rights-based approach with community solutions is advocated
Topics: AI Governance, Public Participation
AI presents an opportunity for Africa’s socio-economic development
Topics: Artificial Intelligence, Africa, Socio-economic Development
Report
The conversation evidences a consensus on the transformative potential of Artificial Intelligence (AI) across multiple sectors, with a particular focus on its impact from healthcare to socio-economic development. Fadi Daou’s statement underscores the significant benefits AI offers in enhancing healthcare access, especially for the elderly, aligning this innovation with SDG 3: Good Health and Well-being.
Daou envisions AI as an enabler for improved health development and accessibility of medical services for senior citizens. Moreover, the discourse heralds AI’s capacity to meaningfully contribute to various domains such as education, agriculture, and cultural heritage preservation. Highlighted AI initiatives, commended for their advancements in disease detection, conservation of historical landmarks, and the digitalisation of national museums, emphasise its role in promoting decent work, economic growth, and sustainable infrastructure—core aspects of the pertinent Sustainable Development Goals (SDGs).
Within the agricultural sector, AI innovations have drawn attention for their invigorating effect on economies in Kenya and other African countries, furthering progress toward sustainable development. A salient notion within the discussion was the call for AI to be tailored specifically to diverse local and societal needs instead of compelling societies to adjust to technology.
This bespoke approach to AI integration stressed the importance of adapting AI technology to address the unique challenges and contexts within communities. Discussion surrounding AI governance accentuated inclusivity’s vital role, advocating for the engagement of marginalised groups in AI policymaking.
The argument asserts the importance of absorbing comprehensive insights from these communities into a human rights-centred governance model, a stance supported by Fadi Daou. However, neutrality emerged regarding the challenges associated with the expansion of AI projects, including infrastructure development, increased awareness, and the prerequisite of substantial funding.
The discourse also remained neutral on the subject of quantifying AI’s specific influence on social progress, reflecting contemplation on effectively measuring the technology’s direct contributions. The eagerly optimistic perspective regarding AI’s future in Africa was another key point, recognising AI as a promising avenue for the continent’s socio-economic development.
This optimistic context aligns with several SDGs, touching upon Good Health and Well-being, Quality Education, Decent Work and Economic Growth, Reduced Inequalities, and Peace, Justice, and Strong Institutions. In summary, although the overarching sentiment resonates positively concerning AI’s broad transformative capabilities, the conversation suggests that realising the full extent of AI’s advantages is a complex endeavour that demands considered planning, an inclusive governance framework, and a contextually sensitive approach.
This approach acknowledges the nuances inherent in local and societal peculiarities, ensuring the strategic and beneficial application of AI.
LZ
Larissa Zutter
Speech speed
205 words per minute
Speech length
1020 words
Speech time
299 secs
Arguments
Larissa Zutter has a background in political economics and international security.
Supporting facts:
- She is a senior AI policy advisor.
- She is researching topics of socioeconomic implications of AI.
Topics: AI Policy, Socioeconomic Implications of AI, International Security
Larissa finds the global representation at conferences significant.
Supporting facts:
- Appreciates more global representation,
Topics: Global Representation, Cultural Diversity
Report
Larissa Zutter stands out as a senior AI policy advisor, closely studying the socio-economic implications of artificial intelligence (AI). Her background in political economics and international security positions her as a critical voice in policy advisement, in alignment with the aims of Sustainable Development Goal (SDG) 16, which advocates for peace, justice, and the founding of robust institutions.
Her contributions are crucial for ethical considerations and governance in AI, as well as understanding potential disruptions to society. Zutter’s neutral perspective toward AI allows for a scholarly and objective analysis of its wider impacts, guiding informed decision-making that could influence global AI standards and implementation protocols.
Her expertise contributes significantly towards the realisation of SDG 16’s aspirations for stable and just institutions. Moreover, Zutter places high value on global representation and cultural diversity within AI conferences, reflecting a positive stance that mirrors the intentions of SDG 10, focused on reducing inequalities.
Her belief in the importance of inclusive dialogue from diverse cultures and regions is vital for enriching policy discussions, ensuring that the shaping of AI policy reflects a broad spectrum of insights and experiences. Her proactive stance is highlighted by her readiness to engage with a multitude of perspectives, particularly at prominent events like the World Summit on the Information Society (WSIS) and AI for Good.
Such forums are platforms where she anticipates the vibrant exchange of ideas, indicative of her conviction in the critical role of varied stakeholders in AI discourse. In summarising, Larissa Zutter’s engagement with AI policy reflects a nuanced understanding of its socio-political dimensions, coupled with a strong commitment to inclusiveness and wide representation.
She espouses the virtues of global collaboration and anticipates a diverse array of views at international summits, demonstrating a dedication to developing AI policies that are equitable, informed, and attuned to their comprehensive impacts on both society and international relations.
MR
Michel Roberto de Souza
Speech speed
162 words per minute
Speech length
848 words
Speech time
315 secs
Report
Michel Roberto de Souza is the Public Policy Director at Derechos Digitales, a Latin American NGO headquartered in Santiago de Chile that addresses the alignment of technological advances with human rights, concentrating on cybercrime, gender and technologies, and artificial intelligence (AI).
De Souza highlights a notable disparity between the global north and south, wherein Latin America often acts as a data provider rather than a participant with influence in global technology discussions, leading to a lack of proportional representation. A critical issue raised by de Souza is the ‘techno-optimism’ prevalent among Latin American policymakers, who sometimes regard technology as a panacea for systemic issues.
To counter this, Derechos Digitales produces case studies and runs educational programs for policymakers to elucidate the capacities and limitations of AI. These efforts aim to encourage judicious and constructive integration of AI technologies. Regarding regulation, Souza expresses concern over Latin American countries potentially replicating the European Union’s AI regulation approach, noting that countries like Brazil, Peru, and Chile are currently debating how to regulate AI.
Whether they will adopt the EU’s framework or devise new strategies that better reflect their unique contexts remains an open question. Souza advocates for the inclusion of marginalised groups and civil society in AI regulation discussions, emphasising that those who may not speak technical jargon are nonetheless impacted by AI and therefore should contribute to its governance.
He calls for a commitment to human rights within technology policy, insisting such a framework should be built on existing laws and human rights principles rather than creating new, unrelated regulations. Additionally, Souza envisions alternative pathways for AI development, promoting diversity in the approach to creating AI systems.
He points out Derechos Digitales’ involvement in fostering discussions on feminist artificial intelligence, suggesting that AI should be developed in a communal context. In summary, de Souza sheds light on the critical challenges and potential avenues for Latin America in global technology policy debates.
Positioned as an advocate for diverse, inclusive, and human rights-focused AI development, Derechos Digitales works to enhance the representation of the global south in technology discourse and advocates for policies to ensure equitable distribution of technological advancement’s benefits.
PD
Pavan Duggal
Speech speed
167 words per minute
Speech length
944 words
Speech time
339 secs
Report
Dr. Pavan Duggal’s speech urged the urgent development of a legal framework for governing Artificial Intelligence (AI), underlining the technology’s expanding societal role and the challenges of ensuring inclusivity and fairness. He acknowledged the biases inherent in AI training datasets, which lead to algorithmic discrimination, and contended that without human corrective measures, AI’s promise of inclusivity remains unattainable.
Duggal dismissed the effectiveness of AI companies’ self-regulation, advocating for legal requirements to embed safeguards against bias within AI systems. He emphasised the opacity of AI operations, noting the ‘black box’ phenomenon, and asserted the need for enforceable user rights to comprehend and challenge AI, suggesting these be enshrined as fundamental human rights.
The legal frameworks envisioned by Dr. Duggal would mandate AI accountability and explainability. Highlighting various new laws—such as the EU’s AI Act, China’s new generative AI law, and New York’s employment-related AI legislation—Dr. Duggal underscored the global shift towards minimising public risks via audits and legal liability for AI developers and users.
Dr. Duggal warned of ‘artificial stupidity’, where uncritical dependency on AI facilitates the spread of unverified information and poses cybersecurity risks, as shown by sophisticated platforms like GPT-4.0. He suggested this environment necessitates swift legal and regulatory responses. Addressing the shift in AI-related crimes and cybersecurity breaches, Dr.
Duggal underlined the importance of legal frameworks in bridging the societal divide driven by differential AI access. He ended on a hopeful note for an international consensus on legal principles that ensure the responsible use of AI, referencing the potential of forums like the AI for Good Summit and the WSIS Forum to drive this change.
The objective is a global realisation of AI’s potential, mitigating existing inequities.
QX
Qian Xiao
Speech speed
166 words per minute
Speech length
694 words
Speech time
251 secs
Arguments
Chinese people generally have a positive attitude towards the development of AI
Supporting facts:
- According to a survey, Chinese people enjoy the benefits brought about by AI
Topics: AI Adoption, Public Perception
Tsinghua University extensively uses AI to improve education quality and accessibility
Supporting facts:
- Tsinghua employs AI technology in classroom infrastructure and learning methods
- The university aims to provide high-quality education across different regions
Topics: AI in Education, EdTech
AI technology is being leveraged to personalize education
Supporting facts:
- AI can analyze individual student needs to tailor a curriculum
Topics: Personalized Learning, AI Customization
Report
The comprehensive analysis paints a nuanced picture of China’s relationship with artificial intelligence (AI), which is mostly favourable and reflects a society receptive to technological advancements. The public’s attitude towards the development of AI is broadly positive, aligning with the ambitions of Sustainable Development Goal (SDG) 9, which emphasises the importance of innovation and sustainable industrialisation.
Tsinghua University, a leading Chinese educational institution, exemplifies AI’s positive influence by integrating it within classroom infrastructure and teaching methodologies to enhance education quality and accessibility. This strategic approach demonstrates a commitment to SDG 4, which advocates for equitable quality education and lifelong learning opportunities.
AI technology also offers the potential for personalised learning experiences, creating tailored curriculums that respond to individual student needs. This innovative approach to education aligns with SDG 4 by transforming how educational content is delivered, making learning more effective and accessible.
However, there are concerns about the negative impact of East-West geopolitical tensions on technology exchange and educational opportunities. Restrictions on advanced technology imports to China and difficulties faced by Chinese students in the US oppose the ideals of SDG 16, which promotes peaceful and inclusive societies, and SDG 4, challenging international educational exchange and cooperation.
The possibility of independent AI systems developing in the East and West could complicate global integration, threatening SDG 17’s vision of revitalising global partnerships for sustainable development. Such a split might create significant barriers to cooperation, hindering progress towards achieving the UN’s SDGs.
While AI’s development and integration in China shows promise in advancing certain SDGs, geopolitical tensions present considerable challenges. Addressing these issues requires collective effort from governments, institutions, and international bodies to reconcile differences, harmonise global AI systems, and continue collaborative efforts towards sustainable development goals.
XM
Xiaowu Ma
Speech speed
132 words per minute
Speech length
367 words
Speech time
166 secs
Report
Good morning, and a warm welcome to all who have joined this forum. I am Ma Xiaowu, though you might also know me as Mark, the Chief Marketing Officer at Qingsong Health. Our company is a distinguished, large-scale health technology organisation based in China, and we’re on the brink of a significant breakthrough.
We look forward to sharing our advancements and innovations with the world over the next year. In our Department of Artificial Intelligence, we are committed to exploring a variety of scenarios that promise to have a positive impact on various aspects of the healthcare sector.
Through the utilisation of AI, we aim to greatly improve efficiency, enhance healthcare service quality, and spur the creation of new industry segments and job opportunities. However, the introduction of AI into healthcare operations is not without its challenges. We are mindful of issues related to personal data protection and privacy.
The progression of AI technology brings to light ethical dilemmas and the influence it may have on policymaking and legislative decision-making. At Qingsong Health, we actively collaborate with AI General Computing (AIGC), significantly supporting the work of medical professionals throughout China.
AI tools aid doctors in diagnostics and treatment, improving patient health outcomes while upholding our company’s values and standards. Moreover, the application of AI is poised to greatly enhance the accessibility of healthcare services, with the potential for a profound, transformational influence on millions of lives.
We’re convinced that the employment of AI has the power to democratise medical knowledge, contributing to a more equitable healthcare system. During our discussions in this forum, I am particularly keen to exchange innovative thoughts concerning the ethical and effective integration of AI into healthcare systems.
It’s important to engage with various viewpoints on this matter. Through open dialogue, we can navigate towards a future where AI not only drives the evolution of healthcare but also significantly enhances societal well-being. I am thankful for your participation in this discussion.
It is through such forums that we can delve into the complexities of AI applications in healthcare and unlock its vast potential. Thank you for your attention and for contributing to our meaningful exchange.
Related event
World Summit on the Information Society (WSIS)+20 Forum High-Level Event
27 May 2024 - 31 May 2024
Geneva, Switzerland and online