Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue

31 May 2024 09:00h - 09:45h

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Full session report

WSIS Panel Emphasises the Need for Inclusive AI Policies for Global Good

At the World Summit on the Information Society (WSIS), a panel discussion hosted by Globethics focused on the theme of “inclusive AI for a better world,” emphasizing the need for cross-cultural and multi-generational dialogue. The session featured a diverse group of speakers from four continents, with one joining remotely from India, to explore the opportunities and challenges AI presents across different sectors and regions.

Larissa Zutter, a senior AI policy advisor, voiced concerns about AI’s impact on socio-economic systems, particularly regarding job markets and social security. She highlighted the younger generation’s fears about the planet’s future and the potential for increased inequalities, noting that while they are familiar with technology, concerns persist about socio-economic systems not being robust enough to support AI-induced changes.

Ma Xiaowu from Qingsong Health in China discussed AI’s potential in healthcare, especially in improving efficiency and aiding the elderly. He emphasized AI’s role in creating value and bridging generational healthcare gaps.

Diana Nyakundi from UNESCO Nairobi spoke about AI’s application in agriculture and cultural preservation in Kenya, stressing the importance of AI in addressing local challenges. She pointed out the difficulties in scaling AI projects due to infrastructure, awareness, and funding limitations. Nyakundi called for inclusive, multi-stakeholder governance in AI policy-making to ensure diverse perspectives are considered.

Qian Xiao from Tsinghua University highlighted AI’s integration in education, making it personalized and accessible to students in remote areas. She raised concerns about geopolitical tensions between China and the US affecting AI development and student exchanges, warning of the risks of developing separate AI systems.

Michel Roberto de Souza from Derechos Digitales advocated for a human rights-based approach to AI, urging the inclusion of marginalized communities in AI discussions. He suggested that policymakers need to understand technology’s limits and consider community-based solutions for AI deployment.

Pavan Duggal, an advocate in the Supreme Court of India, underscored the need for legal frameworks to minimize discrimination and bias in AI. He called for international and national regulations to ensure AI’s safety and trustworthiness, emphasizing the importance of transparency and accountability.

The panel reached a consensus that AI has the potential to contribute positively to various sectors, including healthcare, agriculture, and education, but only if inclusive and diverse policymaking is prioritized. They agreed on the need for policies to be flexible, regularly updated, and culturally sensitive. The challenges of infrastructure, awareness, funding, and geopolitical tensions were recognized as significant barriers to AI’s inclusive development.

In conclusion, the WSIS session underscored the complex nature of AI’s societal impact and highlighted the imperative for collaborative efforts to ensure that AI development is inclusive, ethical, and responsive to the diverse needs and contexts of different regions and generations.

Session transcript

Fadi Daou:
So, thank you and welcome everybody to this very important session at the WSIS in this rainy weather. I hope that you will have better weather during the weekend, but Geneva is also very busy. We are having today at the same time the WSIS and AI4GOOD, the two summits somehow overlapping, and we have wonderful speakers on this panel to discuss together inclusive AI for a better world through cross-cultural and multi-generational dialogue. So, I would like to welcome all our speakers. Some of them came from far, I would say. We have speakers from four different continents on the panel, so welcome everybody. I will start by asking you to introduce yourself, but let me say one word about Globethics, which is inviting for and hosting this session this morning. It will be a very fast session because we only still have 42 minutes, and we have five speakers present with us here in the room, and one speaker will join us also remotely from India. The aim of this discussion, for those who are with us in the room and those following online, and thank you to the whole audience here in the room and online, is in fact to look at the question of AI and ask how we can and what we shall do to make it inclusive, while at the same time taking into consideration the cultural diversity and also the generational diversity, concerns and hopes. So welcome everybody. First, let's start with a very quick round, less than one minute for each speaker, to say who you are, which organization you represent, and if you have any expectations, specific expectations, from WSIS and AI for Good, in one minute. We start with you, Larissa.

Larissa Zutter:
Hi, I’m Larissa Zutter. I’m a senior AI policy advisor at the Center for AI and Digital Policy, where I’m also a board member. I have a background in political economics and international security, and I’m mainly researching topics of socioeconomic implications of AI. I think my personal expectations, I don’t have that many. I’m interested to see some of these perspectives. Also, I think it’s great that we have a much more global representation than in a lot of other conferences that I go to, so that’ll be very interesting to get the opportunity to do that.

Fadi Daou:
Thank you, wonderful. So, please, Mark.

Xiaowu Ma:
Hey, hi. Good morning, everyone. Thank you for joining this forum. I'm Ma Xiaowu; my English name is Mark. I am the CMO of Qingsong Health. Qingsong Health is a very large health technology company in China, with more than 100 million users. Thank you.

Fadi Daou:
Thank you. Thank you, Ma. And I would like to thank the Qingsong Health Group, who helped us in also having you here with us. And so, Diana, I would also like to thank the UNESCO Nairobi Center, who put us in contact and also helped in having you here in Geneva with us. Please.

Diana Nyakundi:
Thanks, Fadi. Good morning, everyone. I am Diana Nyakundi. I am based in Nairobi, Kenya. I work as a senior consultant to the UNESCO regional office for Eastern Africa, but I also work with Research ICT Africa as a full-time AI researcher. Research ICT Africa is based in South Africa. It's an African think tank that works on digital inequalities and issues around AI and emerging technologies. And with UNESCO, I am the lead country expert on the readiness assessment methodology, so just researching where Kenya is at with regards to AI and how ready the country is. Thank you.

Fadi Daou:
Thank you. Wonderful. Thank you, Diana, for being with us. You just arrived yesterday from Nairobi. And Xuan, please.

Qian Xiao:
Hi, everyone. Thank you for having me here. I'm Xiao Qian. I'm the Vice Dean of the Institute for AI International Governance, a think tank affiliated with Tsinghua University. We are a very young think tank, but we focus on AI governance. And ever since 2020, we have established the AI for Sustainable Development boot camp, open to young people around the world, and more than 3,000 young people have attended this boot camp. And we are doing everything we can to support the AI for Good initiative. Thank you.

Fadi Daou:
Thank you so much. And Michel.

Michel Roberto de Souza:
Thank you so much, Fadi. My name is Michel Roberto de Souza. I am Public Policy Director at Derechos Digitales, an NGO based in Santiago de Chile. We work in all of Latin America on technologies and human rights. We work with cybercrime, with gender and technologies, and also artificial intelligence. And it's a pleasure to be here. Our expectations regarding the WSIS, I think, are to see what has happened in these 20 years, but also to foster a multi-stakeholder approach and participation in UN processes. Thank you.

Fadi Daou:
Wonderful. Thank you, Michel. And welcome. So as you see, we have speakers from Asia, Europe, Africa, Latin America, and online. So with us is Pavan Duggal. Pavan, are you hearing us? And can you introduce yourself, maybe in one minute, from India?

Pavan Duggal:
Thank you, Fadi. Good morning, everyone. I'm Dr. Pavan Duggal. I'm an advocate in the Supreme Court of India, specializing in cyber law, cybercrime, cybersecurity, and artificial intelligence. I'm the chief executive of the Artificial Intelligence Law Hub, which is looking at evolving legal principles for AI. I'm the chancellor of Cyber Law University, which is an online platform providing various courses. I have written 199 books, and I've been working at the intersection of law and technology, specifically law and AI. My expectation is that with this WSIS Forum and the AI for Good Summit, the envelope is going to be pushed for a quicker, more rapid evolution of legal frameworks in countries for governing and regulating artificial intelligence. Thanks.

Fadi Daou:
Thank you, Pavan. I suggest that your next book should have the title The 200. So that's wonderful. We look forward to a very rich input and discussion. We'll try to keep five, seven, ten minutes if we can for the audience, if we have any questions, but the priority is for the speakers too. And so again, inclusive AI is the main discussion, but we will try to look at this question, as I mentioned, from an intercultural and intergenerational perspective. So how can we take into consideration the contextual realities related to AI governance, AI regulations, AI hopes and concerns, but at the same time the different approaches related to age and generations? So I'd like to start with you, Larissa. You have lots of young people around the table here, I mean, on the panel, which is wonderful. And starting with you, I think you can bring the voice of a younger generation. What does AI mean for the younger generation, from a hope and concern perspective?

Larissa Zutter:
So first I want to preface that obviously I can't speak for my entire generation. I think everyone has a lot of different views, and they're also very dependent on where you live in the world, et cetera. But I think one big concern that we can see does not actually necessarily relate to the technology itself, but much more to the underlying structures that we have, which then have consequences on our lives. So for example, a lot of young people are very worried about the future of our planet, and they're very worried about, for example, systems like social security systems, which have a massive impact on how AI will influence, for example, our job markets or unemployment rates, housing, things like that. So whilst I see with maybe the older generations there's a lot of fear around the technology itself, I feel like younger generations are much more used to the technology, so they don't fear the technology as much, especially in their daily usage. However, there is quite a lot of concern around our social systems not being able to carry the weight of the next generations. For example, it's well known, when you look at some of our systems, that they need to be revised; that's something that a lot of people in my generation are concerned about, and I don't think that is enough of a focus of these kinds of discussions. A lot of the time we talk about the technology itself and things like addiction or bias, and these are not unimportant topics, that's super important, but all the socio-economic systems that underlie basically our entire society are equally important in terms of defining how we use the technology, but also what consequences it has. So also on topics like inequality: I would say a lot of people in my generation are quite scared of inequality increasing, especially because we're no longer seeing the gains that the generations before us made. 
So this is the first generation that will probably earn less than their parents, things like that, and I think that is something that my generation is quite worried about, especially in the longer term.

Fadi Daou:
Interesting, so the socio-economic perspective is a source of concern for young people, from the job market challenges and losing jobs to increasing inequalities. But can I ask you also, what would make AI an opportunity for the younger generation? How do you look at it as also an opportunity, maybe an equalizer from the socioeconomic perspective? Do you believe, first, that it can be?

Larissa Zutter:
Yeah, so I think that's a super loaded question, because, yeah, of course, there are definitely positives to be had. I think, generally, the younger generation knows how to leverage the good parts of consumer-oriented AI. So ChatGPT and things like that are very commonly used by younger people. So we know how to leverage it. It's much more integrated into our lives, and that definitely does have positive impacts. However, there are equally many negative impacts that come when we look at, for example, social media, which is not necessarily the same thing as AI, but it's integrated. But I think there are definitely positives to be had; they just need to be inserted into a world that has systems that also protect us.

Fadi Daou:
OK. Wonderful. So I really hear your concerns, in fact. And it's interesting; I was expecting, in fact, to start by looking more at the opportunities, but it is very good to put the concerns of the younger generation on the table from the beginning. That's very interesting, and I think we will discuss this with the other speakers now. So, Mark, we are going back to you. And I know that you personally had a very interesting experience building your startups and getting to the highest level with them, and then also in the framework of the Qingsong Health Group. So what would you like to add from your experience on this topic?

Xiaowu Ma:
OK, these are big questions. Yeah, I'll try to answer. Artificial intelligence is bringing opportunities to all parts of the industry, like improving efficiency, making better decisions, and creating new industries and job opportunities. And at the same time, the opportunities also bring challenges, such as data privacy and security, ethical issues, and algorithmic bias. My company is in health, so with AI we try to help all the people, including the elderly. Let's work together to make a better world with AI, so that it can give us more chances, and let's go together towards the future.

Fadi Daou:
That’s wonderful to see that AI can really contribute to the health development and access to health, especially for senior, I mean, for elderly people also. That’s also a very positive perspective, I would say, in a practical way, with your so institution. Diana, Africa, the continent of hope, and you led, and as a senior expert, so the readiness assessment methodology in Kenya, and you are involved on the whole continent also, in relation to the AI discussions. So, how do you perceive it from, I would say, contextual reality, Kenya, Africa, as an opportunity, or as a challenge for younger generation for sustainable development? And do you have evidence between your hands? I mean, you did that this readiness assessment. I also study in Kenya.

Diana Nyakundi:
Yeah, thanks Fadi. So with regards to opportunities, there are a lot of AI pilot projects that are coming up: AI in health, with regards to how AI can be used for accessibility and disease detection, AI in education, AI in cultural preservation. There's a lot of AI that's being used to preserve historical sites, AI that's being used to digitize our national museums, et cetera. So there are a lot of opportunities that are coming up. However, they're still very small scale, and that's due to a lack of awareness. There's a big diversity in the country and in the continent at large, a rural and urban diversity, and AI is mainly centered in the urban areas. So because of that, there's a lack of awareness and a lack of proper funding, and that's why these projects are still very small scale. There's still a lack of infrastructure to really continue the development of these technologies. So the opportunities are there, but they're still not large scale. Secondly, just where the conversations are currently at with regards to the African and the Kenyan context: big tech companies are rolling out these products, and we're constantly finding ourselves in positions where we have to scramble to figure out how to use and how to respond to these technologies. So the conversations now are centered around how we can flip that narrative: figure out where we currently are, where we want to go as a society, and how AI can come in to fit into that. So not us going to and responding to where AI technologies are, but AI coming in where we see it best fitting into our current realities. Take, for example, AI in agriculture, which is a very big contributor to our economies in Kenya and in a lot of African countries. 
So when we say we have used AI in agriculture and we have moved from point A to point B, can that difference really be attributed to AI, and how can we really account for that difference? So when we speak about opportunities, we really don't want to think about them in terms of us catching up, but of AI coming into our context and really solving our contextual issues. Yeah.

Fadi Daou:
That’s wonderful. Thank you for this input, Diana. So from job market first, as mentioned by Larissa, then the health sector, then to the agriculture and rural area. So how AI can be also a tool to leverage some of the economy and socioeconomic developments. That’s great. In the second round that we will do, I will ask you about what is needed to be done to accelerate this process and from a policy perspective. But before that,Qian, I mean, you are so at the top as Vice Dean of the Institute for AI International Governance at Chiang Mai University. But before talking about policies, this will be the second round. Now, I want to ask you from a contextual perspective, how you look at AI, thinking that there are specific expectations or needs or realities in China or in Asia in general that needs to be heard, to be taken into consideration. And then I will ask you about in the second round about what is needed on the top level from a policy perspective.

Qian Xiao:
Okay, thank you for the interesting question. Actually, in China, I should say, according to surveys, Chinese people are very positive towards the development of AI, and they enjoy the benefits brought about by AI. Just take Tsinghua as an example. We are a university that embraces technology very much. We use this technology in our classrooms, in the learning methods, and also in the classroom infrastructure. We use AI a lot, and students enjoy high-quality education, and we are also trying to bring this high-quality education to other parts of China, to those who are far away from Tsinghua and who cannot come to Tsinghua to have this kind of high-quality education. At the same time, we are also trying to make education personalized, so students can use AI technology to have their own curriculum and to analyze what is best for them and what their disadvantages and advantages are. So it is really good for the students' development. As for the perspective of using AI, in Asia we sometimes feel that we are culturally different from the West, and now, especially students at Tsinghua, we feel the tension between the East and the West in developing this technology, especially during the geopolitical tension between China and the US. A lot of top-level, I mean, very advanced technologies are not able to be imported to China, and also many top-level students who study computer science and AI may not be able to go to the US to study. So that is a little bit worrying for us. We just think that with this good technology, if we have separate systems in the future, it will make this world even more complicated, yeah.

Fadi Daou:
Wonderful, thank you. Thank you for this input on the educational level, and on AI as a mechanism that can contribute to enhancing education, and also on this geopolitical polarization or tension that has a direct impact on the technology's development and deployment. So thanks, Qian. Michel, you are leading policy engagement in an organization that is very much looking at technology and AI from a human rights perspective. So what is your input on this question, looking at AI now from a Latin American, I would say, cultural or contextual reality? How do we look at it? And we know that this year is being presented by everybody as the big year of elections globally, but it won't stop this year. I mean, human rights, democracy and dignity are very much related to technology, and more and more to AI. So how do you look at this from your own perspective?

Michel Roberto de Souza:
Thank you very much for the question. Well, I think that we are at a very interesting time in the world and in Latin America in general. And regarding artificial intelligence, I think that one of the big challenges that we face in Latin America, and in a lot of different places, is the difference between the global north and the global south. This is something that we can see not only here, in discussions that are more global, with the presence, for instance, of Latin American states or Latin American civil society organizations, and how sometimes we don't have this kind of presence here, and how the discussion is sometimes led more by global north states and institutions, but also in the way that the global south maybe consumes AI and is sometimes the source of data, and so on. So I think that this is a huge challenge that we have to face; this is something really important. And when we look at Latin America, for instance, we have problems regarding the role of the state with artificial intelligence. For instance, we did great case studies in the region to see how different Latin American states were using or deploying AI, how they have, for instance, data governance, and how they approach this kind of technology. And we see that sometimes there is a lot of techno-optimism in policymakers: they want the technology to solve all the problems that they have in their organizations, and we know that this is not possible, because there are limits to the technology, and sometimes it's kind of difficult to understand what those limits are. So what we did in Latin America was not only these case studies but also talking with policymakers and doing courses with policymakers, to try to help them understand the technology so they can deploy it. But this is the first layer. 
I think that now we face the second big challenge in Latin America. We know that we had the Brussels effect regarding data protection laws, but now we are at a time when we are facing maybe a Brussels effect regarding AI regulation. So we have had a lot of discussions about regulating artificial intelligence in Brazil, in Peru, in Chile, and in a lot of different places, and I'm not sure if we are going to follow the EU model or if we're going to create our own model. But I think that this is a huge challenge that we are facing, together with the questions and problems that you mentioned regarding the use of AI and deepfakes, for instance, for elections. So I think that we are at a really interesting time.

Fadi Daou:
Okay, thank you. Thank you, Michel. And this is definitely a tension, and maybe a balance at some point, between the Brussels effect, I mean, as it happened with GDPR globally, and now the question about AI: how to include actual local realities but at the same time cope with the international requirements. So I think this is a good transition to hear from you, Pavan. You are a global expert in international law, cybersecurity, and technology, and also a member of Globethics, involved in this topic from an organizational perspective as well. So can you give us your input? Of course, you can also mention the Indian reality and how things are evolving in India. But at the same time, from an international law perspective, how can this balance between local realities and global needs for regulation be taken into consideration?

Pavan Duggal:
We are very clear that the legal frameworks of artificial intelligence have to be an important catalyst in moving towards an inclusive AI. If you're expecting AI on its own to be inclusive of all groups in humanity, then potentially we are not on track, primarily because artificial intelligence today is dependent on the kind of big data sets that it's being trained upon. And since the data sets themselves have intrinsic discrimination embedded in them, the discrimination automatically comes into artificial intelligence. So if you're looking at inclusive AI, you'll have to make sure that there will be, if not no, at least minimal discrimination. Further, there should be no bias. Now, there are two options. One is that we can go with folded hands to companies and say, please don't engage in bias or discrimination, which may not necessarily be a very effective process. Last year, the Future of Life Institute categorically requested all the big AI giant companies to pause their progress for six months, to enable legal frameworks to catch up. But nobody listened. So I think this kind of voluntary call for action may not necessarily hold much water. The better option will be that countries come up with legal frameworks that encapsulate the requirement for AI companies and generative AI platforms to put in appropriate safeguards, so as to minimize the possibility of bias or discrimination being embedded in or perpetuated by the AI algorithm. Another big thing is the complete lack of transparency. Today, artificial intelligence is a black box. Therefore, every user has not just a right to inclusive AI as a fundamental human right, but also a right to explainability and accountability. And for that, if the legal frameworks can start pushing the envelope and bringing in more accountability for the AI players, that could be a good starting point. 
When I look at three major laws, the recent European Union AI Act, the Chinese law on generative AI that was launched and implemented on the 15th of August 2023, and New York's law on AI impacting employment, almost all of them are pretty clear: they want to minimize the risk to the public at large. And in that direction, almost all three laws are mandating minimal discrimination and minimal bias, and are stipulating that you must resort to regular audits. Of course, there has to be legal liability as well, because today a large number of people are not just swayed by AI as a magical term; they're also being sucked into a new revolution, the revolution of artificial stupidity. More and more people have begun relying upon AI without testing the authenticity or veracity of the output that AI is generating, knowing full well that AI is hallucinating and that the level of evolution has not yet been reached where hallucination is completely absent. Yes, on the 13th of May 2024, with the launch of GPT-4o, we have the newest and most advanced model. But that model is also capable of potentially being abused by different kinds of cyber actors. That's the reason why there has been a massive spike in the last two and a half weeks in AI crimes and AI-generated cybersecurity breaches. Therefore, these entire issues will have to be dealt with from a legal, policy, and regulatory standpoint if we really want AI to be inclusive for a better world. Otherwise, we will continue to see a world that is further accentuated and divided into AI haves and AI have-nots. And in this regard, while law can play its important role, the legal frameworks are yet to be established at the international and national levels. 
It’s imperative that nations must start pushing towards agreeing upon certain common legal principles, which can be used for enabling and also regulating the use of AI. And that’s, I hope, the area in which the AI for Good Summit at ITU and the WSIS Forum could potentially be looking at. So a lot of work happening, but we’ll have to still wait and watch how the legal frameworks effectively get developed and get enforced with the passage of time. Thanks.

Fadi Daou:
Thank you, Pavan. Thank you for also introducing this complexity between voluntary engagement from an ethical perspective by companies or developers of AI, and the need for regulations and legal frameworks to ensure that AI is also safe and trustworthy. And definitely this black box is, I think, somehow inherent to the technology itself. I mean, I'm personally taking a course now to understand this technology, and one of the things I learned was about neural networks: the output does not only depend on the input and the quality of the data, but also on a largely uncontrollable and unexplainable process inside the machine. So definitely, I think, we are taking the risks of this lack of explainability and transparency of this technology, hoping that it can be mitigated, as you mentioned, by regulations and also by the ethical commitment of the developers and deployers of this technology. Last year, Globethics organized a global consultation on AI, and a few months ago we launched the policy report that came out of it; it can be found on Globethics' website and downloaded for free. The title is Inclusive AI for a Better Future. It's a small policy report, because we do believe that AI can contribute to a better future for all. However, we need to take into account all the concerns that have been expressed here. And so this is now my second and last question, asking all our panelists to answer very, very quickly, in one minute or a minute and a half: what are the needed policies or instruments or resources, whatever you choose from those categories, to make AI more trustworthy, safer, and more inclusive, on the different levels and in the different fields that you all mentioned? And I would like to start with you, Michel. 
If you have the opportunity to lobby, I would say, the leaders of Latin America gathered in a summit, and you have one minute to tell them what they have to do now from a policy perspective to enhance the situation, what will you do, Michel?

Michel Roberto de Souza:
First of all, I think that they need to listen to civil society. They need to listen to, and not exclude, marginalized communities that may not know the technical terms exactly, but that suffer the consequences in their daily lives. So I think that this is the first thing. The second thing would be to have a human rights-based approach. We have human rights law, we have jurisprudence and so on, and this is something that we have to keep in mind: a human rights-based approach with the human rights that are in place. And the third thing I would say is that we can have other solutions. For instance, we have a publication that we did regarding feminist artificial intelligence development, in the sense that we can have other views, community-based solutions to the problems that we have. So we can have different kinds of artificial intelligence, and we have the capacity to develop it also in another way.

Fadi Daou:
Wonderful. I think this is so important for policymakers to consider: this multi-stakeholder approach, including those who are the most marginalized, or who suffer most, or who are at the highest risk from this type of technology. Thank you, Michel, for your input. Qian, what do you think from a policy perspective now on the governance of AI?

Qian Xiao:
OK, well, I am doing a lot of research on the international governance of AI, and from our perspective, we would like to have an international framework for governance. We need to have very flexible policy, because we know that the technology itself develops so quickly and governance always falls behind; there is always a pacing problem in this regard. So the policies will need to be very flexible, and we need to be open to amendment and to updating. Our Indian friend just mentioned the Chinese law on large language models. Actually, it is not a law, it is an interim measure. It is temporary because we know that it needs to be updated all the time; nothing is certain, so it must be flexible from a policy perspective. And second, the policy itself should be very inclusive. We should have all countries included when talking about international governance, not only the developed countries. And we also need to take all stakeholders into consideration, because AI is so different that it involves many different stakeholders: governments, non-governmental organizations, but also companies. So we should bring them all together when we design the framework. I will stop here.

Fadi Daou:
Wonderful. Thank you. So, Diana, am I too hopeful, or not realistic, if I think that AI can be an opportunity for Africa to make a leap, and that rather than deepening the divide, it could be an opportunity to accelerate the socio-economic development of the continent?

Diana Nyakundi:
Yes, I think to some extent you are a bit too hopeful, because I would say we are currently making demands on policy, but we do not have the building blocks to support that policy and its implementation. So if we talk about the creation of policy, but we do not have the infrastructure and the skills, and we have issues of compute, access to data and all of that, then what is the point of that strategy if it is just going to sit there and not be implemented? However, it is not all doom and gloom. There is still considerable progress going on across the continent, like I said earlier; there are still a lot of projects coming up.

So if we are creating strategies and policies around AI, then, like you said, multi-stakeholder governance is really important: bringing everyone into the room, not just the policymakers, but also the people who are going to be affected by these technologies, so that their voices can be heard, their perspectives can be heard, and what they feel should be included in the creation of these technologies can also be heard. Secondly, our policies have been said to really be a copy-paste from the West, which then does not reflect our realities. So the creation of policies should really be contextual to our realities. And when we speak about things like ethical principles, we should speak not just in blanket statements of transparency, accountability, responsibility and all of that, but really think about how we are going to measure these principles, accountability, fairness, again anchored in our contextual realities. And then lastly, as we create these policies, we really should understand our history, our perspectives, our geographical context, and bring intersectional expertise to these discussions and these developments. So then again, if we speak about inclusivity, we should also talk about varied platforms of dissemination.
Because, again, coming back to our context, the dissemination of AI products has really been focused on the internet, but then there is the issue of access, of access to and affordability of the internet itself. So think about varied platforms of dissemination. For example, radio and television are media that a wide range of people can access. Those are just some of the things that we should think about as we think about policies with regard to the African context.

Fadi Daou:
Thank you so much. And I think this also comes with global budget engagement when it comes to infrastructure, to make such projects available in those contexts. So thank you, Diana. Xiaowu, what would be your wish, proposal or idea to move forward in this direction?

Xiaowu Ma:
Yeah, time goes so fast. Thank you for asking. For example, my company, Qingsong Health, works with many hospitals across China. With the development of AI, and with AIGC supporting doctors, it helps doctors and medicine to serve people more efficiently. I think AI that helps more people can create more value. Lastly, I am very glad to attend this forum. I hope we can share some new ideas, and we can continue to talk about this afterwards. Thank you.

Fadi Daou:
OK, thank you. Thank you. And Larissa, in one minute, one suggestion from your side on the policy level.

Larissa Zutter:
So this is not quite as concrete as you might want, but I want to piggyback on what was said before. On an international level, we really need to do a much better job of having much, much more diversity when it comes to making the overarching legal frameworks. But I also think we need to remember that there needs to be enough space for what Diana said: enough space for cultural context, or any kind of context, to be built into these principles or whatever we develop. And we should not forget that, of course, once we have international principles, there need to be sectoral, country-level and all kinds of other talks, because AI is placed into a socio-technical environment, and that largely defines how it is used. So there will be different needs at different country levels, at different regional levels, in different industries. And in those processes, we need to be even more careful to have a lot of diversity, because that is usually where it gets lost. On an international level, we usually try a little harder to be inclusive, but as soon as you look at the industry level and the principles that are being developed there, it is very homogeneous. So we really need to make sure that even at this level, and also at a country level, we include all of the stakeholders, and not just cultural and multi-generational diversity, but also things like educational diversity and different ways of thinking. So I think we just need to be careful that we are diverse and inclusive not only at an international level, but also once we have more concrete solutions at an industry or country level.

Fadi Daou:
Wonderful, thank you. This brings up another question, about the frameworks or mechanisms to ensure this diversity at both levels, local and international, but we do not have time to discuss it. It will be for next year, Pavan. I would take advantage of the remaining time. In fact, we are already out of time, but just to mention that in 15 minutes, Pavan, you will be hosting another panel, in room K1 in the same building, on artificial intelligence regulation, its developments and trends, to which I will also have the joy of contributing. But to conclude with you: if ITU asked you to choose the title of next year’s WSIS Summit, what would you propose, in 10 seconds? Okay, wonderful. Thank you for this input, and we will meet in a few minutes in the other session I mentioned. I would like to thank all our speakers, and to thank the audience. I am sorry, the format does not really make it easy to also have a discussion with the audience, but I am sure we enjoyed it so much; we learned so much from these very diverse perspectives, which were also, I would say, very much aligned on some key outcomes. We will have an outcome report from the session that will be shared with ITU and WSIS, and it will also be shared with the speakers and online. So, thank you all. I have the pleasure of sharing a copy with the speakers; I only have printed copies for the speakers, but it is available online for everybody: the Globethics policy report on Inclusive AI for a Better Future. Thank you all.

Speaker statistics

Speaker                        Speech speed          Speech length   Speech time
Diana Nyakundi (DN)            175 words per minute  1059 words      363 secs
Fadi Daou (FD)                 156 words per minute  2496 words      961 secs
Larissa Zutter (LZ)            205 words per minute  1020 words      299 secs
Michel Roberto de Souza (MR)   162 words per minute  848 words       315 secs
Pavan Duggal (PD)              167 words per minute  944 words       339 secs
Qian Xiao (QX)                 166 words per minute  694 words       251 secs
Xiaowu Ma (XM)                 132 words per minute  367 words       166 secs