Fostering AI accessibility for building inclusive knowledge societies: a multi-stakeholder reflection on the WSIS+20 review
30 May 2024 10:00h - 10:45h
Table of contents
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Session report
Full session report
UNESCO Session Explores AI Accessibility for Inclusive Knowledge Societies
The UNESCO session titled “Fostering AI Accessibility for Building Inclusive Knowledge Societies” was a comprehensive discussion on the role of artificial intelligence (AI) in promoting digital inclusion and the integration of marginalized groups. Moderated by Xianhong Hu, a program specialist from the Information for All Programme (IFAP), the session featured a panel of experts who provided their perspectives on the impact of AI on society, particularly focusing on the concept of meaningful connectivity and the inclusion of diverse voices in the AI value chain.
Onica Makwakwa, a founding partner of the Dynamic Coalition on Measuring Digital Inclusion, emphasized the transformative potential of AI across various industries. However, she warned against the risks of excluding marginalized groups from the development of AI systems, which could lead to biases and further societal divides. She stressed the importance of inclusive digital development, advocating for meaningful connectivity that encompasses quality access, digital literacy, and affordability.
Alexandre Barbosa, head of CETIC in Brazil, discussed the shift from binary measures of connectivity to a more nuanced understanding of digital inclusion. He highlighted the Brazilian Observatory on AI, a collaborative initiative aimed at consolidating information on AI deployment in Brazil and engaging various stakeholders. Barbosa also mentioned Brazil’s G20 presidency and its focus on AI, meaningful connectivity, misinformation, and digital government.
Fabio Senne, also from CETIC, raised concerns about inequalities within AI, such as the representation of vulnerable populations in training data and the impact of AI literacy on the population’s ability to mitigate technology-related risks. He pointed out the need to address the new gap between those who merely have access to digital technologies and those who are also empowered to make informed choices.
The session also saw the announcement of the Internet Governance Forum Dynamic Coalition on Measuring Digital Inclusion, which aims to develop comprehensive metrics for digital inclusion. This coalition is a step forward in promoting policies for inclusive, equitable, and sustainable knowledge societies.
Maja Maricevic, Director of Science and Innovation at the British Library, spoke about the evolving concept of digital inclusion in the AI era. She highlighted the need for actions that address complex issues such as digital rights, democratic participation, and ethical safety. Maricevic suggested practical steps like demystifying AI by educating the public about its various technologies and implications, promoting human-centric design, and creating open environments for experimentation with data and AI tools.
The session concluded with a call for ongoing dialogue and multi-stakeholder partnerships to ensure AI’s role as an empowering and equitable tool. Xianhong Hu called for collaborative research and policy recommendations to address the new digital divide in the AI age. Participants were invited to join the dynamic coalition and contribute to advancing digital inclusion metrics.
Key observations from the session included the recognition of the innovative potential of diverse teams in AI development, the importance of understanding AI’s economic models, and the emphasis on AI literacy as a foundational element for inclusive digital development. The need for a multi-stakeholder approach to AI governance, learning from internet governance structures, and ensuring civil society participation in AI governance discussions were also highlighted.
Session transcript
Xianhong Hu:
Thank you. Colleagues, if you allow me, let’s have another minute to let our chair get his water. I think we can wait for one more minute, but I could just start to give some background of this session. Basically, welcome you all to this UNESCO session on Fostering AI Accessibility for Building Inclusive Knowledge Societies, organized by the Information for All Programme, jointly with our working groups on information accessibility and information ethics. I think we also have some working group members joining us online and perhaps also in the room. Thanks to digital technology, it’s my first time moderating a session remotely, and actually I find it quite functional. My name is Xianhong, the programme specialist in the Secretariat of the Information for All Programme. We are today having a tight schedule. We have a fantastic panel of five speakers, each of them supposed to give a five-minute intervention to share their visions and views. I would also like to have some time really reserved for questions and answers at the end. But since it’s a hybrid session, for those who already have some comments, suggestions or questions, you can just type your comments in the chat. I also take this opportunity to introduce my team: our colleague Mr. Yacoub Dutoit, who also joined us from HQ, and Victoria Gruduc, and also Ludovica. We are all joining the session and supporting the discussion from Paris. I don’t feel there’s such a distance; basically, we can see the room and hear everything clearly. So we are more or less on time. We should wait a bit more to have the chair address the welcome remarks, because that’s quite important for us to set the scene for today’s discussion. But if we still need to wait for some time, perhaps let me take the action a bit forward. Today we have two objectives. One is to trigger discussion and raise awareness on how we ensure the inclusion and the participation of marginalized groups in the development and evaluation of artificial intelligence. And we also have a second layer of objective: to catalyze partnerships with the IFAP working groups and also through our newly launched dynamic coalition on measuring digital inclusion. I happen to have our first speaker, actually, Ms. Onica Makwakwa, who is also a founding partner of the establishment of the dynamic coalition on measuring digital inclusion. While we are waiting for the chair, I would like to give the floor to Onica Makwakwa, since you are a founding partner. Maybe you can start to share some views on how you think about the impact of artificial intelligence and also other frontier technologies. How does it impact the issue of digital inclusion? And what do you think your organization, GDIP, could contribute to support further action on digital inclusion, as well as through this dynamic coalition? If you don’t mind, Onica. So we have the chair coming. So if you don’t mind, I would now like to give the floor to the chair before we start the panel. But more or less, I think you have already heard what the session’s about. Okay, so hi, Mr. Chair. Hello. Good morning.
Pablo Medina Jimenez:
Thank you very much, Ms. Xianhong. A pleasure to have you online.
Xianhong Hu:
Yeah, Mr. Chair, just to give you a very short briefing: before you arrived, I briefly introduced the objective of this workshop and also the scenario. So now I’m very honored to introduce Mr. Pablo Medina Jiménez, the chair of the Information for All Programme, to give some welcome remarks. I’d also like to thank you for taking the effort to be present in Geneva. It’s your first visit to the forum and I hope you really enjoy it. So Mr. Chair, the floor is yours.
Pablo Medina Jimenez:
Thank you very much for your kind words and for the introductory remarks. As I was saying, it’s a pleasure to be here among all these distinguished representatives, colleagues and experts. So I will start a very brief presentation, if you allow me, Ms. Xianhong. Distinguished participants, esteemed colleagues, it’s a privilege to welcome you today to this important gathering at the WSIS+20 review. We are here to engage in a pivotal thematic workshop titled Fostering AI Accessibility for Building Inclusive Knowledge Societies, organized by UNESCO’s Information for All Programme. Reflecting on the past two decades, we observe a digital transformation that has reshaped every aspect of our society. As we navigate this era, frontier technologies like artificial intelligence and big data have the potential to either bridge or widen existing societal gaps. So it is our duty to ensure that the development of these technologies benefits everyone, ensuring no one is left behind. The Information for All Programme, established in 2001, is a unique UNESCO intergovernmental programme that champions equitable societies by promoting universal access to information and knowledge for sustainable development. Our mission revolves around six key areas. The first one, information for development, recognizing the value of information in addressing development challenges. Two, information literacy, empowering individuals to seek, evaluate, use, and create information. Three, information preservation, ensuring universal access by preserving libraries, archives, and museums. Four, information ethics, addressing ethical, legal, and societal aspects of ICT applications. Then information accessibility, focusing on availability, affordability, and accessibility of information. And the last one, the sixth one, multilingualism in cyberspace, which facilitates participation in knowledge societies. IFAP is committed to promoting information accessibility for all and to promoting an ethical approach to new technologies. Our mission, as outlined in the IFAP strategic plan 2023-2029, focuses on enhancing information accessibility, information literacy, information preservation, information ethics, information for development and multilingualism in cyberspace. These pillars are essential in building sustainable and inclusive knowledge societies. Strong partnerships are also fundamental to realizing our goals. I am pleased, then, to announce today the establishment of the Internet Governance Forum Dynamic Coalition on Measuring Digital Inclusion. This new dynamic coalition, created in collaboration with the Global Digital Inclusion Partnership and key IFAP partners, including the International Federation of Library Associations and Institutions, the Regional Center for Studies on the Development of the Information Society, CETIC, the Governance and the Tech and Global Affairs Innovation Hub, will lead efforts to develop comprehensive metrics for digital inclusion. At the heart of the coalition’s mission lies a commitment to mainstream policies for inclusive, equitable and sustainable knowledge societies into international development plans and digital transformation processes. The coalition is also committed to addressing gender divides in digital access, literacy and participation, and fostering opportunities for women’s participation and leadership in the digital age.
We invite all stakeholders: experts, government representatives, private sector leaders and members of the academic and technical community, to join this coalition and contribute to its advancement. In conclusion, dear colleagues, today’s workshop offers a chance to start discussions, build partnerships and gather resources for inclusive innovations. Let’s use the collective strengths of our networks to ensure that AI and other emerging technologies serve as tools for empowerment and equity. Thank you very much for your participation and your commitment to building inclusive knowledge societies. And let’s continue this work beyond today’s session. Thank you very much for your attention.
Xianhong Hu:
Thank you. Thank you very much, Mr. Chair, for setting such a good scene for our following discussion. I also take this opportunity to inform you that we have the presence of the former IFAP chair, Madam Dorothy Gordon, online. It shows such strong support from the previous IFAP leadership. I’m pretty sure today we’re having a very inclusive discussion on this important subject. Onica, now you have the floor. As I just mentioned, I’d really like to have you share your vision and views about how AI is impacting the digital inclusion issue and how you and your organization, the Global Digital Inclusion Partnership, would be able to act on it. Thank you. Wonderful.
Onica Makwakwa:
Thank you so much for this opportunity and good morning, everyone. You know, the Global Digital Inclusion Partnership is steadfastly focused on advancing meaningful connectivity for the global majority. With over 10 years of dedicated work on digital inclusion, through our journey from affordable access to meaningful connectivity, we recognize the imperative of ensuring inclusion and participation of groups left behind throughout the AI value chain. AI holds immense promise to revolutionize industries, to improve efficiencies and to enhance our lives in countless ways. However exciting this seems, it’s important that we don’t overlook the importance of inclusive digital development as a foundation for every stage of AI development. Imagine a world where the voices and perspectives of women, rural communities, racial and ethnic minorities, persons with disabilities, low-income communities and the 1.5 billion people who are unconnected, mostly from the global majority world, are absent from the creation of AI systems. Unfortunately, this scenario is not too far-fetched for us, as marginalized groups already face barriers that limit their participation in the AI value chain. Why does this matter, we might wonder? It matters because exclusion breeds bias. When AI development teams lack diversity, they unintentionally embed their biases into algorithms, perpetuating inequalities and exacerbating societal divides. From biased hiring practices to discriminatory predictive policing, the consequences of exclusion in AI development are far-reaching and quite profound. But it doesn’t have to be this way, right? Inclusion and participation are not just moral imperatives, they are prerequisites for building AI systems that are fair, accurate, and beneficial for everyone. Research has actually shown that diverse teams lead to more innovative solutions. And this brings me to one of the things that is really critical in this foundation of AI, and that is making sure that we have inclusive digital development, where everyone is meaningfully connected and able to participate in a way that can truly be transformative for their lives. One of the big questions is, what could be the impact of AI and other technologies on digital inclusion? We would not even be able to honestly get the full benefit of AI if it is developed only by a select few and those who are connected at the moment. So I think the imperative and the opportunity now is that, because of this interest in and growth of AI, it actually provides an imperative for us to truly double down and fast-track our goals for digital inclusion. And in doing so, it’s also important that we look beyond basic connectivity. When you look at all the things that AI promises to do and can enable us to do, it becomes an imperative for us to look at how we are measuring access and how we are measuring connectivity, like what a connected person is, and really go back to some of the work we had started around meaningful connectivity and raise that work, so that we raise the bar for the international standard for what a connected person is and what it means to be connected in a way that also enables you to contribute towards and benefit from the artificial intelligence ecosystem that we have at the moment.
So we have a lot of work ahead of us to make sure that policies for digital inclusion are not only just about connecting people, but making sure that from all the lessons that we’ve learned, including through the COVID-19 lockdowns, that we are now raising the bar to make sure that we are also closing the emerging gap amongst those who are connected. And that is looking at the quality of the connectivity, looking at the utility in terms of them being able to have the requisite skills and the affordable devices to be able to fully enjoy the opportunity of being connected, and just literally the entire spectrum around meaningfully connected as opposed to just basically being connected. Thank you.
Xianhong Hu:
Thank you so much, Onica. Just to reinforce what you have said about the role of advocating measurement of digital inclusion through our dynamic coalition: we have just launched the membership submission portal. So we jointly invite all the stakeholders online, in the room and at WSIS to join us in this endeavor. You only need to submit a very short form on what you are going to do to contribute, so we will be able to collaborate together and engage you further. And now I’d like to introduce the second speaker, Mr. Yves Poullet, Emeritus Professor and Director of the University of Namur in Belgium. He used to be the chair of the IFAP working group on AI ethics. He’s truly an expert in the regulation and policy of AI. So Professor Poullet, would you be able to take the floor? Hello Yves, are you able to hear me? Hi Yves, we couldn’t hear you clearly. Could you try again? We still cannot hear you. Yves, I think there’s a connection problem. Basically your voice is broken from time to time. No, we still cannot hear you. Would you be able to maybe exit and try again? My colleague Victoria will be able to help you. If you don’t mind, I’d like to have our next speaker talk until the technical problem on your side is solved. Is that okay with you, Yves? Okay, good. So now I’d like to introduce our two speakers from our long-term partner, CETIC, a Category 2 institute of UNESCO, the Regional Center for Studies on the Development of the Information Society, based in Brazil: the head of the center, Mr. Alexandre Barbosa, and also the expert, Fabio Senne. Thank you for attending our session and also sitting on the panel in the room. So, Alexandre, would you like to take the floor first?
Alexandre Barbosa:
Yes, thank you very much, Xianhong, and thank you, Mr. Chair, Pablo, for inviting us to be here. Good morning, everyone. It is indeed a great pleasure to be here. As Xianhong has mentioned, we have had a partnership with UNESCO for a long period now. And in the context of this new dynamic coalition on measuring digital inclusion, I think, Xianhong, it is indeed a very important initiative, because it touches several WSIS action lines. And in my opinion, it is a very relevant initiative from UNESCO, aiming at providing insights on advancing beyond the simplistic binary measurement of connectivity, as we are used to discussing, having or not having access, and moving to a position that advocates for a more complete, comprehensive approach to digital inclusion. The debate and research around digital inclusion is one of our priority areas at CETIC, the Regional Center for Studies on the Development of the Information Society at the Brazilian Network Information Center. We are a data producer center in Brazil, and since 2005, we have been conducting surveys, research and capacity building, producing indicators for policymaking on the socioeconomic implications of digital technologies. So this is indeed a very important area for us. And as a UNESCO Category 2 center, we also cooperate with Latin American and Caribbean countries and Portuguese-speaking countries in Africa to develop better and internationally comparable data related to digital inclusion, and of course, the implications and the data analysis around this topic. In terms of the intersection of digital inclusion and the adoption of artificial intelligence, we have been working in the past five years or so on new indicators related to AI adoption in different areas of society, especially by enterprises and by government, how governments are adopting AI to provide better services or a new way to interact with citizens, and also the adoption of AI in the health sector and in education, how those establishments have been using AI to provide better services. And particularly in terms of AI adoption, we have been conducting not only the traditional representative sample surveys that produce reliable and representative data, we have also been conducting sectoral studies on AI. We launched, two years ago during the UNESCO World Conference on Culture, a publication on the implications of AI in the cultural sector. And I have the pleasure to announce that in the second semester of this year, maybe September, we are going to launch our publication on AI in the health sector. This is quite an important study. And in this regard of AI, I would like to mention that we are partnering with the Ministry of Science, Technology and Innovation to implement the Brazilian Observatory on AI. And we are responsible for making this happen. It’s not only ourselves, but we have four key partners, including the Brazilian government. This initiative will bring together different stakeholders and consolidate information from different sources on the deployment of AI in Brazil. Well, now I would like to give the floor to Fabio, because it is very important to understand how AI relates to potential inequalities in different areas, such as how AI can be biased in terms of the data, the models used for training, or the tools, or the lack of skills.
And last but not least, I would like to mention that recently, one month ago, we published, under the G20 Digital Economy Working Group, the publication on meaningful connectivity. And Brazil has embraced this area, because we have to go beyond the binary states of being connected or not, to include, for instance, digital literacy, digital skills. So I would like to give the floor to Fabio to explore, Fabio, maybe, these issues of inequality and AI in terms of digital inclusion.
Fabio Senne:
Thank you, Alexandre. Thank you, Mr. Chair. And thank you, Xianhong and IFAP, for the invitation. Yes, I would like to comment a little bit on the evidence we have in Brazil, from these 20 years of measuring issues related to digital inclusion, and now on the discussion and interaction with AI. I’d like to raise three points that I think are critical to understanding this discussion. The first of them, which Ms. Onica already mentioned, is how to deal with the very beginning of AI, the design and development of AI systems and models, and the quality of the training data. If you don’t have representative data from different populations and vulnerable populations, of course, you will get bias in the way that you train your AI models. So I think it’s important to raise the questions not just from the perspective of inequalities between countries, because we know that the levels of digital inclusion vary a lot between countries and regions of the world, so how can we ensure that the most vulnerable, in terms of digital inclusion, are represented in AI models, but also from the perspective of inequalities within countries, which we know are very large. This is the case in Brazil when you compare, for instance, ethnicity, traditional populations, rural versus urban areas, income, and level of education. When you go through all these dimensions, you can see that those inequalities will also be represented in the AI models that are being developed. I think there is a second point on the use dimension, because now we know that with large language models and generative AI, every one of us can be very active in using AI-based solutions. So from the use perspective, the inequalities will also have a very important impact. We know from other technologies that early adopters tend to benefit more and in a faster way, and those who do not have the experience to use these types of technologies tend to be left behind. And this is a way of generating new inequalities and increasing inequalities if we don’t take care of the use dimension. And finally, and this is something that Onica also mentioned, there is the skills part, the generation of AI literacy among the whole population. We know from the evidence we have that the populations most vulnerable in terms of digital inclusion are also more vulnerable in facing the risks related to the technology. And this can have to do, for instance, with the misinformation or disinformation processes that we face in our societies. We have a very interesting survey of children in Brazil, which follows the methodology of the Kids Online survey defined in Europe and by UNICEF. And we know from the surveys that although children are very well connected in the country and are using AI tools in very innovative ways, for instance, 40% of Brazilian children from 11 to 17 agree that the first result found online is the best result, 51% agree that every person finds the same content when searching online, and 42% are unsure about their own abilities and skills to check online information. So when we ask children about this, we know that from the very beginning, we need to develop online and digital skills that can make this type of AI implementation useful and inclusive and not create new types of inequalities.
I would like to just raise these three points that I think are critical for our discussion and also to welcome this new IGF dynamic coalition that I think this can be a very useful space for developing this debate. Thank you very much.
Xianhong Hu:
Thank you so much, Alexandre and Fabio, for sharing so much substantial and pioneering work relating to meaningful connectivity and for unpacking the complexity of digital inequalities across the AI chain and its development, which is so inspiring. Please do share all your work with us so we can further disseminate it through the IFAP network. I have already copied the link to your new report on meaningful connectivity online. It’s a really wonderful report for everyone to check. So now let me try again: Professor Yves Poullet, would you be able to take the floor? Yves, are you there? Hello, perhaps we need more time for Professor Poullet to intervene. So now I’m lucky to introduce Dr. Maja Maricevic, Director of Science and Innovation at the British Library. As you know, libraries play an instrumental role in meaningful connectivity and in fostering digital inclusion, particularly in the AI age. And the British Library has done so much wonderful work. So Maja, please take the floor.
Maja Maricevic:
Thank you so much, Xianhong, and thank you so much to all the previous speakers. It was really good to hear the emerging consensus about the importance of the issue facing us in this particular area. We’re so excited to be here, and especially about the dynamic coalition, because I think a coalition of different stakeholders is exactly what’s needed at this particular moment. We had an excellent session yesterday with colleagues from IFLA talking about digital inclusion, and I spoke at that. And in some ways, I’m going to continue from where we stopped yesterday. So we’re talking about digital inclusion and libraries, and I was trying to suggest to colleagues, and I think I heard some of these views from other speakers today, that there is a very big change in how we’re seeing digital inclusion. Before AI, we simply saw it as closing the gap between those who have access to digital technologies and those who do not. And this issue of access obviously stays with us, because there are still very large parts of communities internationally that do not have any access at all. But with AI, the situation has obviously become much more complicated, because now we have a new gap that I think everybody was referring to, and Fabio just gave us some amazing statistics from Brazil: we also now have a gap involving those who don’t have a choice in how they access digital technologies. They might have access, but it’s very narrow, and they are not empowered to make the right digital choices for a number of different reasons. And there is no doubt that AI offers us new economic opportunities and some amazing advancements and opportunities for humanity, but AI and data-driven technologies also cause adverse effects, and obviously algorithmic profiling leads to harms, bias and the misinformation that’s already been mentioned. And there are also lots of issues, I think, in the area of digital rights that I’ll come back to in a little while. So for libraries and people who work with communities in different information settings, obviously there are the actions that we have been taking, which are about lending devices, making sure there is access to Wi-Fi, providing digital literacy education and many, many other things. There is now the whole additional consideration of different actions related to very complex issues. I mentioned digital rights, including digital cultural rights, because the internet does not have equal amounts of data and information from different cultures and societies. There is the whole level of issues about democratic participation in digital decision-making. There is lots to be said about how we promote human-centric design in different systems and services, but there’s lots to be done on demystifying technology and what it actually means, and on taking greater care of the diversity of digital outputs in linguistic, cultural, geographic and economic terms. Think how we preserve digital information with AI in it, and how we empower parents, teachers, citizens, creators and employers to make the right, informed digital choices, and then the big issues around ethics and ethical safety. Having quickly gone through that very big list, and as we were talking about accessibility, maybe there are some practical steps that we could be thinking about. I’ll suggest a few and see if people agree and if our discussion maybe leads to something there. One that is big for me here is the demystification of AI.
So we talk about AI all the time as one single technology, but in effect we are dealing with many different technologies. They all have their advantages and disadvantages. We have machine automation, robotics, machine learning, large language models, computer vision, neural networks, theory of mind. There are many, many of them. They all have different implications. They’re used in different ways. They have different advantages and different limitations, and I think there is something about seriously upping education levels so that people understand what these technologies are, what they actually do and where they appear. With large language models, it should not be that difficult to start understanding that they simply use statistical models to analyse language, and that different things can happen as a result of that, and so on. So one thing I’m suggesting is that we need to get under the bonnet of AI and maybe not talk about it as this magical one thing, because I don’t think it is a magical one thing. The other element for me is the human element. We talk about technology quite a lot, but the human element and human agency in AI is another thing that I think really needs to be separated out, because we have human agency in designing and employing AI. Legal and ethical aspects are really for humans to take responsibility for, and humans have a choice whether or how to use AI. Humans also have the right to question AI-derived outcomes, because AI is often very, very wrong. And I think that’s something that we really have to have on our agenda. And also make sure that we put into our education an understanding of AI economic models, because AI is there to give us really new scientific breakthroughs, resolve health issues and make big human advancements, but it’s also there to make money. And all of these things are good things. Economic advancement is a good thing. But I think we have maybe not put enough stress on working with people to understand AI economic models, and also how all of us participate in this as data points in our own right. Our voices, our data that we leave on the internet, everything that we are becomes a part of this economic model. And I think working with people to understand that better would certainly, I think, increase understanding and inclusivity. And my last point is about participation and experimentation. Libraries do quite a lot in making our collections and our data available to people in accessible ways with different tools. Obviously, not everybody has the same resources, and we are very fortunate to have resources to do this. But I think there is something about making sure that, across the world, there are open environments where people can play with data, play with tools, and learn by doing. And this can be made very accessible through digital storytelling, through crowdsourcing exercises, and with free tools that are provided in more responsible and ethical environments. I think that’s something that would be really important to do, because then this technology will start to feel real and we will have more understanding of how it actually works, rather than it being something that just comes from somewhere else. So I’m so excited about the idea of a coalition of different partners coming together on this, because I think that combining the skills of different agencies, parts of academia, governmental organizations and international organizations would be exactly the right way to go about this.
Thank you.
Xianhong Hu:
Thank you, Maja, for sharing so many brilliant ideas and also for sharing the great work on the humanistic aspects of AI. I think no one can really talk about this better, maybe, than our next speaker, Professor Poullet. I think I just saw you. Could you try again? Hi Yves. Hello. Hi Professor. Could you try to turn on your camera and present a bit? Actually, I saw a double screen of Professor Yves Poullet. Somehow it just showed up, but now it didn’t. So Professor Yves, I hope you can hear me. So whenever you think you could talk. Ah yes. Hi Yves. I saw you. Can you hear me? Hello? Hi Yves. Oh, you cannot hear me. Okay. Anyway, I hope that you can speak whenever you can. So now we only have a few minutes left. I’d like to open the floor to the participants in the room and online. I’d also like to ask Mr. Chair, Mr. Pablo Medina, could you please help to co-moderate a bit, in case any participant in the room wants to ask a question or share some comments?
Pablo Medina Jimenez:
Yes, Xianhong, of course. Thank you very much for considering this possibility. So if you allow me, as you are the one moderating, I will ask here in the room if there is anybody who would like to take the floor for any comment or question, anything; feel free to intervene in this workshop. I see. Could you please present yourself, introduce yourself, and take the floor? Thank you.
Audience:
Thank you. I’m Amanda Leal. I work at the Future Society; we’re a nonprofit focused on AI governance. I came here because I was interested, also as a Brazilian, to hear more about the work that CGI does and to understand what we can learn in AI governance from internet governance and these structures that have been in place for many years. I came here expecting that I would make an intervention arguing that it’s not only about accessibility, and luckily I heard from all of you that it’s not only about accessibility, so this is great. So, taking a step forward, because I think we’re all on the same page about the need to address AI economic models, power concentration, digital inequalities, and media and information literacy on top of digital literacy, I wanted to ask you, as a positive agenda: what do you see as the critical things that have come out of internet governance? What should we learn and take forward into AI governance? And I will make sure to try to reflect that as well, as a civil society representative in this area, which is more and more crowded with industry lobbying and interests that are not necessarily aligned with the public interest, because the economic incentives are not there. So what would you suggest in terms of bringing this bottom-up approach and having more civil society participation in AI governance? Thank you.
Pablo Medina Jimenez:
Thank you very much, Ms. Leal, for your remarks. Mr. Alexandre, would you like to take the floor? Okay, yeah.
Alexandre Barbosa:
Thank you so very much, Amanda, for this question. Indeed, a very critical question. As you may know, Brazil now holds the G20 presidency, and it is not a coincidence that among our four priority areas within the Digital Economy Working Group we have AI, meaningful connectivity, misinformation or information integrity, and digital government. We are involved in these four areas, but particularly in providing guidelines on AI and meaningful connectivity. We have two knowledge partners: UNESCO, our knowledge partner for AI and ethics, and ITU on meaningful connectivity. And of course, you brought a very important message on how to engage different actors from society. As you probably know, the Brazilian Internet Steering Committee, CGI.br, has had, since its inception, a multi-stakeholder structure and nature. So we bring different actors from society to the table. And right now, on the topic of disinformation related to AI and information integrity, as well as the regulation of AI, this is an ongoing process in Brazil, and we are listening to different stakeholders so that we can ensure that we incorporate different voices into this debate. Yesterday we saw an excellent presentation from UNESCO on AI in the judiciary. We have also been working with UNESCO on the first MOOC that UNESCO has developed, including your own organization, the Future Society, and ourselves, and of course UNESCO. We are very much concerned and worried about the potential bias that we have in using AI-based applications in the judiciary. So this is not a completed discussion, but an ongoing process right now. But thank you for bringing up this important issue. Thank you.
Xianhong Hu:
Mr. Chair, may I also intervene on this a bit? Today we are really focusing on a very crucial issue: meaningful connectivity and inclusion with regard to artificial intelligence, which is a new cross-cutting area. The IFAP Working Group on Information Accessibility has been convening a milestone annual conference on information accessibility related to artificial intelligence. It’s called IA for AI, which means information accessibility for AI, and AI for information accessibility. It is convened every year on 28 September, in commemoration of the International Day of Universal Access to Information. I think that will be an important occasion for us to really carry forward what we’re discussing here. I think we do need to consider conducting more collaborative research to come up with policy research and brief recommendations to really sensitize our member states to tackle these emerging new issues of inclusion regarding bias, inequality, et cetera, relating to the algorithms, data, infrastructure and applications of AI development. It has become a new emerging digital divide in the age of AI, before we have even succeeded in tackling the digital divide that emerged with the internet. So it’s definitely a new area we should tackle. And I welcome you all to join our IFAP Working Group on Information Accessibility’s activity in September; it’s called IA for AI. I will also put a link in the chat. And as for the dynamic coalition, which all of our speakers have well supported, you are all our actors and driving force. We are going to launch this dynamic coalition with different stakeholders on different occasions throughout the year. The first one will be at EuroDIG; for those who will be there, we can further strategize the actions. Then we are going to have the iSCOV conference, which will take place in South Africa later this year. And then in December, we will have a dynamic coalition gathering at the IGF, to be convened in Saudi Arabia. So let’s not stop the conversation on this, but really continue to enlarge and deepen it, because it is definitely one of the most crucial issues for IFAP and our stakeholders to tackle in the future. And I think we’re almost at the end of our session. I’d also like to have a final check with any speakers or panelists in the room and online: do you have any final remarks to make? I saw Maja send a very nice smile, and I take it as a very good message. If not, I’d like to give the floor to Mr. Chair, Pablo Medina. Maybe you can say a last word to close our session. I thank you again for being present and also for moderating the session with me in hybrid form. Mr. Chair, the last word is yours.
Pablo Medina Jimenez:
Well, thank you very much, dear Xianhong. I didn’t prepare anything for closing remarks. Indeed, as you’re the one who is in control, I wanted to ask your permission to ask the room if there are any other reactions to the interventions we had here. So, if I may, thank you to everybody for convening this excellent workshop. And of course, a salute to Madam Dorothy Gordon, who, I didn’t know when I started, was online. So, dear Xianhong, I give you back the floor for your closing considerations, as you’re the moderator of this workshop. So thanks again. Thanks, everybody, for having me here with you. And Xianhong, the floor is back to you.
Xianhong Hu:
Thank you. Thank you. Thank you, Mr. Chair. Thank you, everyone in the room and also online. I just wanted to try one last time to see whether Professor Poullet could join us. It seems impossible, and I apologize, because we have the next session coming. So probably next time. Okay, and thank you. I’d also like to thank the excellent technical support of the WSIS Forum. Thank you, madam. I really enjoyed the whole hybrid session, which went so smoothly. Thanks to all. Have a good day and enjoy WSIS in the forthcoming days. Bye. Thank you. Bye-bye. Thank you. Bye.
Speakers
Alexandre Barbosa
Speech speed
128 words per minute
Speech length
1009 words
Speech time
474 secs
Arguments
Partnership with UNESCO and the importance of digital inclusion measurement
Supporting facts:
- CETIC has a partnership with UNESCO and focuses on producing indicators for policy decisions.
- The Dynamic Coalition on Measuring Digital Inclusion is seen as a relevant initiative.
Topics: Digital Inclusion, WSIS Action Lines, Policy Making
The need for comprehensive approaches to digital inclusion beyond binary connectivity measures
Supporting facts:
- CETIC advocates for advancing beyond simple measures of connectivity to more comprehensive digital inclusion.
- There is a priority on research related to the socioeconomic implications of digital technologies.
Topics: Digital Inclusion, Connectivity, Comprehensive Measurement
Affirmation of AI’s relevance in advancing sectors like healthcare, government, education, and culture
Supporting facts:
- CETIC has conducted studies and launched publications on AI implications in various sectors.
- Brazilian Observatory on AI to consolidate information and stakeholder engagement in AI deployment in Brazil.
Topics: AI Adoption, Sectoral Innovation, Policy Making
Recognizing AI’s potential for creating or exacerbating inequalities, needing careful attention to bias and skills
Supporting facts:
- AI may be biased in terms of data, training models, and tools which can affect equality.
- Lack of skills is also a concern in terms of AI and inequality.
Topics: AI Bias, Inequality, Digital Skills
Report
The Regional Center for Studies on the Development of the Information Society (CETIC) has closely aligned its objectives with the United Nations’ Sustainable Development Goals (SDGs), notably SDG 9, which focuses on industry, innovation, and infrastructure. CETIC’s initiatives, such as establishing partnerships with UNESCO, highlight their commitment to fostering digital inclusion and informed policy-making.
Their contribution to global efforts, like the WSIS Action Lines, demonstrates a commitment to tackling the challenges of building an inclusive information society on an international scale. CETIC’s involvement in the Dynamic Coalition on Measuring Digital Inclusion is heralded as an essential initiative for developing accurate indicators for digital inclusion.
This work emphasises the need for more comprehensive digital metrics beyond simplistic connectivity measures. It acknowledges the importance of understanding the socio-economic impacts of digitalisation, which is crucial for advancing toward the equitable society envisaged by SDG 10 to reduce inequalities.
The Brazilian Observatory on AI, spearheaded by CETIC, reflects the proactive engagement with the transformative wave of artificial intelligence. Gathering information and encouraging multi-stakeholder collaboration, this observatory aims to guide AI deployment and policy development. This effort aligns with targets set by SDGs 3, 4, and 9 by promising to fuel advancement in healthcare, education, administration, and culture through sectoral innovation.
CETIC’s research has also illuminated challenges like AI-induced biases and disparities in digital skills, drawing attention to how AI can potentially exacerbate social inequalities. This concern calls for careful scrutiny from policymakers, educators, and technologists, reflecting the aims of SDG 10 to foster equality within the burgeoning digital realm.
Multi-stakeholder collaboration, as CETIC advocates, is fundamental for leveraging the full potential of digital inclusion and AI deployment, resonating with SDG 17 which promotes partnerships as vital to achieving all SDGs. The Brazilian Observatory exemplifies this collaborative approach, combining insights from diverse stakeholders to shape a robust national AI strategy.
Furthermore, CETIC advocates for ‘meaningful connectivity’, a level of access that encompasses digital literacy and skills. This expanded definition of connectivity supports the objectives of SDGs 4 and 9, which stress the importance of education and infrastructure for sustainable development. CETIC’s push for digital literacy and skills is a stride towards a resilient and inclusive information society.
In summary, CETIC’s work reflects a comprehensive effort to forge partnerships, promote informed policy-making, foster sectoral innovation, and advocate for an inclusive approach to connectivity. Through initiatives, research, and international collaboration, CETIC is a key player in shaping digital inclusion policies and practices.
CETIC’s alignment with the SDGs ensures that it remains at the forefront of addressing the complex interplay between technology, society, and global development as the digital age progresses.
Audience
Speech speed
163 words per minute
Speech length
269 words
Speech time
99 secs
Report
Amanda Leal, representing the Future Society—a non-governmental organisation focused on AI governance—participated in a conference to deepen her understanding of the interplay between AI governance and internet governance, informed by her Brazilian background. Initially pleased to find a unified recognition of the crucial role of accessibility in AI, Leal took the conversation further by outlining critical issues needing attention within AI governance.
These include the economic foundations of AI, the concentration of power in a few large corporations, the exacerbating digital divide, and the importance of media and information literacy alongside digital skills. She identified the challenge of strong industry lobbying, which can often oppose public interest, and advocated for a ‘bottom-up’ governance approach that would elevate the influence of civil society and protect public interest against dominant corporate agendas.
Leal sought advice on drawing from the successes of internet governance to strengthen AI governance mechanisms. Her goal: to identify transferable best practices that could enhance the inclusivity, impartiality, and efficacy of AI governance. In her concluding remarks, Leal emphasised the urgency of establishing a governance framework for AI that meaningfully involves civil society, thereby ensuring AI’s development remains in tune with democratic principles and serves the public good.
Her address was a powerful call for a participatory governance structure that can navigate the complexities of varied interests dictating the trajectory of AI governance policy. Leal remains committed to reflecting these values in her ongoing work, indicating a robust approach towards an equitable and democratic future of AI governance.
Fabio Senne
Speech speed
151 words per minute
Speech length
701 words
Speech time
279 secs
Report
The speaker conveyed significant concern regarding the dual challenges of promoting digital inclusion and addressing the complexities within the growing AI landscape, particularly spotlighting the Brazilian situation where the digital divide is pronounced. Three notable concerns were underscored.
1. **Bias in AI Development:** The first concern the speaker mentioned pertained to entrenched biases in AI system development and design. The highlighted issue was that bias in training data can cause AI models to unintentionally perpetuate inequality. Emphasising the need for diverse and representative datasets, the speaker pointed out that these should include input from marginalised and vulnerable groups to avert ingraining biases. Moreover, such disparities are not limited to a national context; they are evident internationally with variations in digital access across different countries and regions. Within Brazil, the speaker exposed significant disparities along lines of ethnicity, urban versus rural settings, income levels, and education, which could potentially be reflected in AI systems.
2. **Impact of AI Use and Adoption:** The second concern raised revolved around the societal fragmentation that AI adoption could exacerbate. Early adopters of AI technologies, who tend to hail from more privileged backgrounds, could greatly benefit, contributing to wider inequality as late adopters lag behind. The speaker urged that this could compound inequalities and advocated for addressing not solely the design but the application of AI to cultivate a technologically equitable future.
3. **AI Literacy and Education:** The final issue centred on the necessity for the appropriate skills to use AI technologies ethically and expertly.
The speaker voiced concerns that those excluded from digital advancements are particularly susceptible to misinformation and other risks. Illustrative of this was a survey of Brazilian children, aligned with the methodologies used in the Kids Online Survey by Europe and UNICEF, which exposed a lack of critical engagement with online content.
Among children aged 11 to 17, 40% believed the first result in an online search was the most reliable, more than half assumed that everyone receives the same search outcomes, and a significant proportion doubted their ability to discern the veracity of information online.
This demonstrates the pressing need for AI and digital literacy education to mitigate emerging forms of inequality that AI utilisation could advance. In response to these layered problems, the establishment of a new IGF dynamic coalition was commended by the speaker.
The coalition is envisaged as a critical platform to foster discussion and create solutions for the discussed challenges. It is hoped that the coalition will play a key role in shaping AI inclusively, ensuring that this transformative technology acts as a catalyst for equalising opportunities rather than exacerbating social and economic disparities.
In essence, the speaker delivered a comprehensive analysis of digital inclusion in the AI era, advocating for purposeful action to ensure that technological progress does not deepen the chasm of societal inequalities but instead aids in establishing a more just international community.
Maja Maricevic
Speech speed
165 words per minute
Speech length
1301 words
Speech time
474 secs
Report
Maja Maricevic, along with fellow contributors, highlighted digital inclusion as an urgent issue, championing the newly formed dynamic coalition aimed at addressing the multi-faceted challenges presented by artificial intelligence (AI) in this field. There is a unanimous feeling of optimism that this alliance could lead to significant strides in promoting inclusion.
Initially, the IFLA session conceptualised digital inclusion as a binary issue, focusing on the disparity between those with and without digital technology access. However, discussions have since evolved to recognise a secondary digital divide where access does not guarantee empowerment for effective use.
The introduction of AI has added layers of complexity to this landscape by influencing economic opportunities and societal development. Noteworthy is the concern over AI’s detrimental effects, such as algorithmic profiling that may result in bias or harm, and the propagation of misinformation.
Additionally, the disproportionate representation of different cultures online necessitates a strengthened assertion of digital cultural rights. In response to these emerging challenges, libraries and information services are encouraged to extend their services beyond providing access and education on digital literacy to navigate the consequences of AI.
Maricevic suggests that to combat these challenges, AI should not be treated as monolithic but rather be deconstructed to educate the public about the spectrum of technologies that constitute AI, and their specific uses and limitations. There is also a push for educational initiatives to reveal the mechanics of AI systems, enhancing the public’s understanding of how these technologies operate.
Moreover, Maricevic focuses on the human aspect in AI development and governance, underscoring the role of human intervention in ethical considerations, AI outcomes, and the broader economic ecosystem. Education on the impact individuals have on AI’s economic models by contributing their data is deemed crucial.
Maricevic proposes that libraries could transform into inclusive spaces for first-hand AI experimentation, providing key opportunities for the public to engage with tools and data through digital storytelling and crowdsourced projects. By facilitating these experiences with ethically sourced tools, libraries can help demystify the technology and elucidate its wider effects.
The ultimate call to action emphasises the creation of a broad coalition that includes diverse entities such as academia, government, and international bodies. Combining skills and resources, this coalition is envisioned to tackle the intricacies of AI to foster improved global digital inclusion.
Onica Makwakwa
Speech speed
147 words per minute
Speech length
737 words
Speech time
301 secs
Report
The speaker from the Global Digital Inclusion Partnership underscores the crucial role of fostering meaningful connectivity and digital inclusion, especially in the realm of Artificial Intelligence (AI). The emphasis is on the negative consequences that could arise if underrepresented groups such as women, rural communities, minorities, disabled individuals, and those from deprived sectors, including the 1.5 billion people who remain offline globally, are left out of the AI value chain—from ideation to development.
AI systems that lack diverse input risk perpetuating bias and social injustice. AI’s full potential, the speaker argues, can only be unlocked with contributions from diverse groups, affirming digital inclusion as not just an ethical imperative but also as a cornerstone for developing fair and efficient AI technologies.
This premise is supported by evidence demonstrating how team diversity leads to greater innovation. Thus, incorporating a broad spectrum of perspectives is crucial in AI development to avoid unintentionally embedding biases in algorithms, which could lead to discriminatory outcomes, such as in hiring or predictive policing.
Highlighting the urgency, the speaker calls upon stakeholders to seize the burgeoning progress of AI technology as an opportunity to step up digital inclusion efforts. Merely providing basic internet connectivity is no longer sufficient. The speaker insists on reassessing standards of digital inclusion to embrace ‘meaningful connectivity’, which takes into account the quality of connections, ensures users have the skills and knowledge to utilise connectivity fully, and makes devices affordable so that financial obstacles to digital access are removed.
The COVID-19 pandemic’s global lockdowns provided tangible proof of the importance of bridging digital divides, drawing attention to the quality gap among those with internet access. Achieving meaningful connectivity is said to enable people to engage meaningfully with, and benefit from, AI ecosystems.
In summation, the speaker advocates for a shift in digital inclusion policies towards a holistic approach, enhancing connection quality, digital literacy, and the accessibility of necessary devices. This shift will not only ready all communities for an AI-driven future but will also promote the creation of AI systems that are more innovative and equitable, truly reflecting the diversity of the global populace.
PM
Pablo Medina Jimenez
Speech speed
130 words per minute
Speech length
920 words
Speech time
424 secs
Report
In a thematic workshop during the WSIS+20 review, Ms. Cheng Hong extended a warm invitation to attendees at a UNESCO Information for All Programme (IFAP) event. The workshop centred on “Fostering AI Accessibility for Building Inclusive Knowledge Societies” and aimed to explore the role of artificial intelligence (AI) in creating inclusive communities.
The opening address highlighted two decades of staggering digital progress, focusing on how the era of digitalisation has reshaped global societies. The presentation underscored the dual nature of frontier technologies like AI and big data, which could either bridge or widen societal divides.
The collective duty to direct the progression of these technologies towards inclusivity and avoid the exclusion of any groups was emphasised. Since its inception in 2001, IFAP has been committed to promoting equal societies by ensuring universal access to information, structured around six key areas:
1. Information for development: harnessing information to address complex developmental challenges.
2. Information literacy: equipping people with the capabilities to efficiently find, assess, and generate information.
3. Information preservation: maintaining knowledge in libraries, archives, and museums.
4. Information ethics: dealing with the moral, legal, and social consequences of ICTs.
5. Information accessibility: ensuring the availability, affordability, and accessibility of information to everyone.
6. Multilingualism in cyberspace: enhancing diverse linguistic representation online to enable inclusive participation in knowledge societies.
These pillars guide IFAP’s strategic planning for 2023-2029. A key announcement at the workshop was the establishment of the Internet Governance Forum Dynamic Coalition on Measuring Digital Inclusion, in partnership with the Global Digital Inclusion Partnership and key IFAP stakeholders.
This coalition focuses on creating comprehensive digital inclusion metrics, with special emphasis on increasing women’s digital access, literacy, and leadership. Stakeholders from various sectors are urged to collaborate within this innovative coalition to ensure a digitally inclusive future. The workshop thus provided a platform for dialogue, partnership-building, and pooling resources to foster inclusive innovation.
Unified efforts to utilise emerging technologies as tools of empowerment were stressed. In conclusion, participants were invited to be actively involved in the ongoing dialogue, and appreciation was shown for their engagement and commitment to fostering inclusive knowledge societies. Ms. Cheng Hong concluded by thanking the attendees for their dedication and signalled an ongoing commitment to the conversation around AI and inclusivity.
XH
Xianhong Hu
Speech speed
139 words per minute
Speech length
2254 words
Speech time
973 secs
Arguments
The session aims to foster AI accessibility for building inclusive knowledge societies.
Supporting facts:
- Organized by the Information for All Program
- Jointly with working groups on information accessibility and information ethics
Topics: AI Accessibility, Inclusive Knowledge Societies
There’s a need to ensure the inclusion and participation of marginalized groups in the development and evaluation of artificial intelligence.
Supporting facts:
- They are focusing on marginalized groups
- They are discussing the inclusion in AI development and evaluation
Topics: Artificial Intelligence, Marginalized Groups, Inclusion
Digital technology has enabled remote participation and functionality in moderating sessions.
Supporting facts:
- Xianhong Hu is moderating the session remotely
- Digital technology allows clear visibility of the room and participants
Topics: Digital Technology, Remote Participation
Dynamic coalition on measuring digital inclusion is a catalyst for partnership within IFAP working groups.
Supporting facts:
- Newly launched dynamic coalition
- Focus on measuring digital inclusion
Topics: Dynamic Coalition, Digital Inclusion, IFAP Working Groups
UNESCO session has a tight schedule with a panel of five speakers ready to share their insights.
Supporting facts:
- There are five speakers
- Each speaker has a five-minute intervention
Topics: UNESCO Session, Panel Discussion
Acknowledgement of the supportive environment and continuity in leadership
Supporting facts:
- Xianhong Hu acknowledged the presence of the former IFAP chair, Madam Dorothy Gordon, showing support from previous IFAP leadership.
Topics: WSIS, IFAP leadership
Setting the scene for an inclusive discussion
Supporting facts:
- Xianhong Hu tasked Pablo Medina Jimenez with the WSIS thematic workshop, highlighting IFAP’s role and expressing optimism for the inclusive discussion to follow.
Topics: Inclusive knowledge societies, AI impact on digital inclusion
Report
During a pivotal session organised by the Information for All Programme, emphasis was firmly placed on the importance of AI accessibility and the creation of inclusive knowledge societies. The discussions resonated with a positive sentiment towards establishing an inclusive framework in the realm of artificial intelligence, particularly concerning the inclusivity of marginalised communities.
This ethos aligns with the aspirations of Sustainable Development Goals (SDGs) 9 and 10, which champion industry innovation and infrastructure, along with the reduction of inequalities. Digital technology’s pivotal role was brought to the fore, as it enabled Xianhong Hu to moderate the session remotely.
This demonstration of technology’s potential in facilitating wider participation underscores its integral contribution towards a robust infrastructure for inclusive growth and industry innovation. SDG 17, which focuses on partnerships for achieving the goals, was highlighted by the launch of a dynamic coalition aimed at measuring digital inclusion.
Coalitions such as this act as catalysts for collaboration, seeking tangible ways to close the digital divide. A UNESCO session, which ran on a tight schedule, facilitated dialogue among five speakers, each limited to a concise five-minute intervention. This setting underlined the importance of efficient communication in articulating key points during time-constrained discussions.
Strong partnerships between different groups and organisations are essential, as illustrated by Onika Makwakwa’s involvement in a coalition championing digital inclusion. The significance of continuous leadership was subtly underscored when Xianhong Hu acknowledged the contribution of former IFAP chair, Madam Dorothy Gordon.
Leadership both past and present is critical in maintaining momentum towards inclusive goals. Moreover, Hu’s tasking of Pablo Medina Jimenez with leading the WSIS thematic workshop sent a message of optimism and a commitment to fostering rich, inclusive debate. In summary, the session projected a cohesive and optimistic outlook, emphasising the crucial role of past and current leadership in facilitating an inclusive debate on AI’s impact on digital inclusion.
The narrative was one of a collective call for global digital inclusion partnerships and the belief in AI’s potential to provide equitable industry innovation if appropriately harnessed. Accordingly, it set a trajectory for AI and digital infrastructure that is not only innovative, but equally accessible and inclusive, confirming a future where artificial intelligence supports the fabric of an inclusive knowledge society.
Related event
World Summit on the Information Society (WSIS)+20 Forum High-Level Event
27 May 2024 - 31 May 2024
Geneva, Switzerland and online