How to ensure cultural and linguistic diversity in the digital and AI worlds?

29 May 2024 09:00h - 09:45h


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Full session report

Experts tackle linguistic diversity in the digital realm at the World Summit on the Information Society

During a session at the World Summit on the Information Society (WSIS), co-organized by UNESCO and the International Organisation of La Francophonie (OIF), experts convened to address the critical challenge of cultural and linguistic diversity in the digital space. The session, moderated by Henri Eli Monceau, highlighted the underrepresentation of languages online: only 10 languages dominate the most-visited websites, despite the roughly 7,000 languages spoken globally.

Xianhong Hu from UNESCO provided insights into the Information for All programme (IFAP), which focuses on six priorities, including multilingualism and information accessibility. Hu stressed the importance of linguistic diversity and multilingualism in the AI world, referencing UNESCO’s global framework on AI ethics that promotes diversity and inclusiveness.

The discussion then moved to practical solutions for enhancing discoverability and diversity. Hannah Taieb from Spideo discussed the company’s approach to creating transparent, semantic algorithms for cultural recommendations, which are supervised by humans to ensure diverse cultural offerings are discoverable and understandable. Taieb explained that while Spideo is not open-source, it is open to sharing its algorithmic construction methods.

Steven Tallec from Harvester spoke about the company’s role in analyzing recommendation algorithms’ impact on cultural diversity on online services. He highlighted the asymmetry of information between rights holders and the exposure of their work online. Tallec introduced the concept of visibility scores for content, which could transform discoverability and diversity in media offerings.

Audience questions raised concerns about user agency in algorithmic content discovery and the representation of minority languages in AI. One participant questioned whether users could choose discovery modes to promote diversity, while another highlighted the challenges faced by AI in understanding African dialects and languages.

The session concluded with a call to action for stakeholders to join a coalition for measuring digital inclusion and to engage in ongoing discussions and translate them into actions. The moderator mentioned upcoming negotiations in New York on the Global Digital Compact, which will address cultural diversity and discoverability.

Key observations from the session included the need for sovereign databases to support cultural and linguistic diversity at the state or regional level. The importance of technology and metadata in ensuring that non-mainstream content can compete with dominant players like Netflix was emphasized. The session also suggested the potential for new user experiences to combat content overload, proposing alternative interfaces to traditional scrolling rails used by many platforms.

In summary, the session highlighted the challenges and potential solutions to promoting cultural and linguistic diversity in the digital age, emphasizing the need for transparency, human oversight in AI, and regulatory frameworks to govern digital platforms and algorithm development.

Session transcript

Moderator – Henri Eli Monceau:
Hello everyone. Welcome to this session co-organized by UNESCO, IFAP and the OIF. The session will be in French, so you have, of course, the possibility to get the interpretation; all the necessary devices are on your desk. Good morning to everyone. We shall start right ahead, as time is running in this extremely busy space that is the WSIS. We will get back to this essential question of the stakes of cultural and linguistic diversity, and most specifically the discoverability of francophone contents, or more generally of all types of contents. The deputy director general of UNESCO, Mr. Gelassi, reminded us during the opening session on Monday that of the 10 million most-visited websites, only 10 languages were used: 10 languages out of the more or less 7,000 languages being used on a daily basis by the populations of our member states. There is therefore a real challenge in being able to guarantee this linguistic diversity, and therefore cultural diversity, and in being certain that the expression of each is guaranteed and heard. Good morning, Xianhong, whom I see appearing on the screen. We have decided today to look at this issue in a very concrete and practical way, not just through statistics and analysis, but rather through concrete actions being led by entrepreneurs to allow this diversity to take place. Before giving them the floor, I would like to give the floor to our colleague Xianhong Hu, who is intervening from Paris today. She is also extremely busy at this very intense time of preparation for a great number of projects. Dear Xianhong, I thank you.

Xianhong Hu:
Thank you very much, Mr. Ambassador. Good morning everyone. First of all, please allow me to share my screen, as I have prepared my presentation with a PowerPoint. And Mr. Ambassador, if you'll allow me, I would like to make my presentation in English. Is this possible for you? Well, if you ask it so kindly, then certainly; my French is not quite up to par, so I will switch to English. Thank you very much. My name is Xianhong Hu, working at UNESCO, in the Secretariat of the Information for All Programme. First, I'd like to thank the OIF, l'Organisation Internationale de la Francophonie, for raising this important subject at the WSIS Forum, a subject that has not been much talked about in past years. It is such a good opportunity to tackle these important challenges of cultural and linguistic bias, and how we can ensure cultural and linguistic diversity in the digital and AI world. Before going to the subject, let me briefly introduce the programme I work on. It is called the Information for All Programme (IFAP), a unique intergovernmental programme working on six priorities: information for development, information literacy, information preservation, information ethics, information accessibility, and multilingualism, supporting member states, governments and stakeholders in building inclusive knowledge societies in their countries. Today's topic is so central, so dear to IFAP's priorities, and not only on multilingualism: it actually resonates with all the areas we are working on. Compared to other recognized biases, related to ethnic origin, for example, or to gender, cultural and linguistic biases are much less known to policymakers and to the public, but they are equally detrimental, very often leading to a democratic deficit in the digital age. As you know, UNESCO has endorsed the first global framework on the ethics of artificial intelligence.
It has well recognized the fundamental value of diversity; as you can see, it is number three: ensuring diversity and inclusiveness. This fundamental value of diversity, including cultural and linguistic diversity, is to be applied to the entire AI development cycle. IFAP has also updated our work strategy plan in consideration of all this new context, particularly on multilingualism, which is such a focus of our new programme objectives. Language is a primary means of communicating information and knowledge in today's age, mostly in the digital and AI world. The ability to access content in a language one can use is a key determinant of one's ability to participate in today's digital society. That is why we have firmly reaffirmed our commitment to mainstreaming policies and solutions that strengthen linguistic diversity and multilingualism in the digital age into national development and digital inclusion plans, fostering international debate on multilingualism and contributing to the current International Decade of Indigenous Languages. However, as Ambassador Henri just highlighted, most languages are not at all represented in this space and are absent from the large language models of AI development. This exacerbates the linguistic and digital divides, the AI divides, which have become a main barrier to universal access and meaningful connectivity. These cultural and linguistic biases further marginalize already disadvantaged voices. AI tools like ChatGPT tend to output standard answers that reflect the dominant values of the data and the mainstream languages used to train them. This limits the development of plural opinions and expressions of ideas, very often excluding marginalized communities and minimizing their presence. Reducing the diversity of opinions and marginalizing these voices pose significant risks.
AI models are primarily trained on data from internet pages and social media conversations, and so reinforce the marginalization of disadvantaged people, again, in the age of AI. Therefore, it is crucial for AI developers to address this bias from the very beginning, in their data sets and outputs, continuously. IFAP has been working to encourage member states to consider language when formulating and implementing their digital innovation policies, as strategies and solutions to build a fair knowledge society. We advocate for the preservation of all the world's languages, those 7,000 languages, and the entry of new languages into the digital and AI world. This includes the creation and dissemination of multilingual content in cyberspace, particularly in local languages. We aim to highlight this within the broad spectrum of the International Decade of Indigenous Languages. Lastly, I'd like to highlight our recent joint action with the OIF and other collaborators. We have just launched a newly established dynamic coalition on measuring digital inclusion. Ensuring cultural and linguistic diversity in the digital and AI world requires concerted efforts and a holistic approach to address the many biases and promote inclusivity. That is why we have joined forces to reinforce the global partnership. We also use this coalition to foster an evidence-based approach to digital measurement frameworks, which can measure the extent to which linguistic biases are embedded in today's AI and digital world. With this evidence, we can really design and put in place tailored policies and strategies for different countries and societies to solve these challenges. And so here, together with the OIF, we call for all stakeholders to join our coalition. We are launching it at WSIS, at the IGF, and in a number of other locations. We are going to trigger more discussion on the subject, but also join synergies on actions.
You can just scan this QR code and fill in a very short membership form so we can work together through this new platform. That concludes my presentation. I would like to thank the OIF and the Ambassador again for this initiative of raising the policy discussion on cultural and linguistic diversity at WSIS. I hope this subject will eventually be further flagged in the current WSIS+20 review process and be even more highlighted in the follow-up after that. Thank you.

Moderator – Henri Eli Monceau:
Thank you very much for this very precise and concise presentation, which gives us a framework for our continuing work. As I have indicated, the choice we have made with Xianhong for this morning is to focus on solutions, concrete solutions that exist, and we would like to exchange with you around a couple of examples. We will take about 15 minutes before starting this dialogue with you. Hannah, you are the Director of Development for Spideo, and Spideo precisely aims at providing, or rather provides, recommendations for the cultural industry. How do you take into account this issue of diversity, this issue of discoverability? It is quite interesting to look at things from the point of view of a company growing outside the large platforms that we are used to mentioning within the framework of this WSIS, whether positively or negatively.

Hannah Taieb:
Okay, so Spideo is a French company which is now 13 years old. We are truly specialists in discoverability, in the sense that we build recommendation algorithms for the cultural sector, and the cultural sector only. To speak very concretely, our algorithms are connected to platforms, so what you see today on applications such as France Télévisions, MyCanal, Claro, Globo and other platforms in the world comes from a French company working worldwide, and one that looks into many of the values mentioned earlier. It means that we truly want to create transparent algorithms, algorithms that can be understood. We give high value to discoverability by explaining, and making visible, the way in which our algorithms work, which is not the case for most platforms, where we do not understand how the algorithms work. We are aware that the user (and it is a question of user experience rather than just ethics) will have a much better experience and a much better understanding of why content is recommended to them. Therefore, a contract is drawn between the user and the cultural platform, whether podcast or streaming: a contract of discoverability. This contract of discoverability means that because I am committing to a platform, I expect to find content which I like, which I want to watch. And in the world in which we live, given the issue of hyper-choice, too many choices, too many contents, these recommendation algorithms are, so far, the most practical way to create discoverability. The question now is: how can we do that? What differentiates our approach from the approaches of Netflix and others? Our approach is semantic, in the sense that, in order to discover content, there is a question of discoverability, but also of findability.
We must be able to qualify a given content, and this has to do with the implementation of keywords. The cultural sector cannot be treated the same way as searching for socks on Amazon, for instance. So, in order to identify the multiplicity of cultural offers, we are forced to qualify them. And how do we do that? Through human-centered AI, which inserts the keywords and information that say this film is a drama, this film is a comedy. But that alone is no longer sufficient. Currently, no technology, not even AI, which is evolving every day, can replace human supervision. So AI, of course, but AI with human supervision, whether in training the model or in creating the databases that feed these models.
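The keyword-based ("semantic") qualification Taieb describes can be sketched very roughly as follows. This is an editor's illustration, not Spideo's actual system: all titles, keywords and the overlap-counting ranking are invented assumptions.

```python
# Illustrative sketch only (not Spideo's system): titles carry
# human-curated keywords, and recommendations rank by keyword overlap
# with the user's known tastes.

CATALOG = {
    "Film A": {"drama", "family", "french"},
    "Film B": {"comedy", "road-trip", "french"},
    "Film C": {"drama", "thriller", "english"},
}

def recommend(liked_keywords, catalog):
    """Rank titles by how many curated keywords they share with the user."""
    scores = {
        title: len(keywords & liked_keywords)
        for title, keywords in catalog.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

print(recommend({"drama", "french"}, CATALOG))  # Film A ranks first
```

A keyword model like this is transparent by construction: the overlapping keywords themselves are the explanation that can be shown to the user, which is the point Taieb makes about explaining recommendations.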

Moderator – Henri Eli Monceau:
OK, thank you. But we are not speaking of open-source systems.

Hannah Taieb:
That's a very good question. On open source: we are rather open to sharing the way in which our algorithms are built. But Spideo is a private corporation, and therefore we have clients. In consequence, we are not in complete autonomy and not completely open-source.

Moderator – Henri Eli Monceau:
And this doesn't keep you from being transparent. Let's come back to this rather strategic question in a minute; we truly understand that it is difficult to be alone in being open-source in a very competitive market, and perhaps we can look into this with you. Steven, you are the co-founder of Harvester, which specializes in the visibility of audiovisual content and information on online video platforms. How do you deal with this issue of discoverability?

Steven Tallec:
Well, first of all, thank you very much, and thank you to the OIF for this opportunity. I'd like to remind everyone here that the first time we spoke of discoverability was when I first raised the question, and we have advanced quite a bit since. At the time I was working at the University Paris 1, and we had developed a theoretical model to analyze the question of discoverability on SVOD services as well as other online video interfaces. Since I left the university, I co-founded Harvester in order to provide data analysis and make visible the functioning and the effects of recommendation algorithms on cultural diversity on online services, so thank you very much for this. We are speaking of transparency: at Spideo there are recommendation algorithms, and we come in on the other side, with the analysis of the algorithmic recommendations of the different services operating in France and abroad. This is important: there is a true asymmetry of information between, on one side, the international platforms and, on the other, the rights holders, with regard to the exposure of their work online. I'd like to remind you that a service like Netflix has 270 million users in the world and is available in 190 countries, so the way content, including French content, is exposed on a service such as that one has a major importance in terms of discoverability. And this is what we do with our services today. Most of the analysis today deals with catalogs, a catalog such as Netflix's, with 75,000 films and 2,500 series. But only an infinitesimal proportion of this catalog is actually proposed to the user, which is why the question of discoverability is extremely important: every user will have access only to a minimal part of the catalog, even if, across 270 million users, the whole catalog may end up being covered.
So it is important to truly understand what happens in terms of impact on cultural diversity for these titles. About 20% is the proportion of the catalog that is shown to a given user at a given moment. And of this 20%, a maximum of 15% of francophone content will be presented to a given user at a given moment; there are fluctuations, which usually lower the number. So, 15% of the 20% that are shown. On a home page such as Netflix's, that is about 1,400 images, and of those 1,400, about 200 will be dedicated to francophone content. And we need more empirical research on this, particularly in francophone territories.
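The figures Tallec cites can be checked with back-of-the-envelope arithmetic. This is only a sanity check of the numbers as quoted in the transcript (catalog size, exposure shares, homepage tiles), not data from Harvester.

```python
# Back-of-the-envelope check of the figures cited in the session:
# ~20% of the catalog is exposed to a given user, and at most ~15%
# of that exposure is francophone.

catalog_size = 75_000       # films in a large SVOD catalog (figure cited)
exposed_share = 0.20        # share of catalog shown to one user
francophone_share = 0.15    # francophone share of the exposed content
homepage_tiles = 1_400      # images on a typical home page (figure cited)

exposed_titles = int(catalog_size * exposed_share)
francophone_tiles = int(homepage_tiles * francophone_share)

print(exposed_titles)      # 15000 titles reachable per user
print(francophone_tiles)   # 210, close to the ~200 tiles cited
```

The 15%-of-20% framing means a francophone title competes for roughly 3% of the overall catalog surface a single user ever sees.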

Moderator – Henri Eli Monceau:
So what is your model, may I ask? Or two questions, rather. Is there a demand from users with regard to this diversity, on one hand? And on the other hand, what is the offer you can make in terms of discoverability and diversity?

Steven Tallec:
Well, it is true that our market nowadays is mostly an institutional market, because in France we are lucky enough to have the CNC, which truly gives access to a great deal of data on these matters. We also have a regulator which was the first, in 2018, to transpose the AVMS directive by imposing quotas of European and francophone, or French, content. So France, together with Canada and its transposition through Bill C-11, were the first to look into these issues of discoverability. Therefore, to answer your question, at this stage our main clients are institutional clients, in France as well as abroad; I have mentioned Arcom and the CRTC in Canada. We also aim to inform rights holders, such as the collective management societies and cinema societies, who need to be aware of issues of visibility. There is an analogy to be made with the world of television, the linear world. When we sell a program to television, we know the channel and the time at which it will be shown, and the price of the program will not be the same depending on the time and the service where it is shown. On a streaming service, we sell a program without knowing how exposed it will be: we do not know when it will be shown or how much it will be shown. This is about solving that asymmetry of information, not only for rights holders, and about obtaining fair revenue. We have seen, with the writers' strike in the U.S., how important these questions are, in North America but more generally in Europe as well.

Moderator – Henri Eli Monceau:
And what happens if, tomorrow, the legislature were to impose a discoverability score, similar to the Nutri-Score? Would that change everything with regard to the offer?

Steven Tallec:
It's true I haven't spoken of this, but we have implemented a score for visibility, rather than discoverability. This allows us to create categories of works, for example francophone series, and to compare a corpus of works that is francophone versus American on a given territory. We have implemented a scoring mechanism that allows us to objectify very simply the realities: which works, which corpus of works are exposed, and under which modalities.
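A visibility score of this kind could, in spirit, weight each on-screen appearance of a title by the prominence of its placement. The function below is an editor's hypothetical sketch; the placement weighting is an assumption for illustration, not Harvester's actual methodology.

```python
# Hypothetical visibility score, in the spirit of what Tallec describes:
# weight each on-screen appearance by how prominent its placement is.
# The (row, column) weighting scheme is an assumption for illustration.

def visibility_score(appearances):
    """appearances: list of (row, column) grid positions where a title
    was shown; row 0 / column 0 is the top-left, most visible slot."""
    score = 0.0
    for row, col in appearances:
        score += 1.0 / ((row + 1) * (col + 1))  # earlier slots weigh more
    return score

title_a = [(0, 0), (2, 5)]   # hero slot plus a deep carousel slot
title_b = [(4, 9)]           # single appearance far down the page
print(visibility_score(title_a) > visibility_score(title_b))  # True
```

Aggregating such scores over a corpus (say, all francophone series on one territory) is what would let a regulator or rights holder objectify exposure, as described above.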

Moderator – Henri Eli Monceau:
Thank you very much. If I've understood correctly, you work in French. And beyond francophone content, do you also include other languages, Portuguese for instance?

Hannah Taieb:
We include about 15 languages, 15 languages that go from Arabic to Chinese, as well as others.

Moderator – Henri Eli Monceau:
Thank you. Well, that is great news, because in the francophone space, among the member states of the Francophonie, we are speaking of hundreds, even thousands, of languages, and therefore we have an important margin for progression. I give you the floor: if there are any questions or comments, please take the floor and kindly introduce yourself.

Audience:
Thank you very much. It's a pleasure to ask a question in French, which is rare. I'm from the Swiss mission. My question is about more cultural diversity in the algorithms, for the users' benefit. Do users have the possibility, on their platforms, to choose a discovery mode or an algorithm that will correct and remove some of the mainstream and include something off the beaten track, for the sake of diversity? Is that possible for the user, or is it only the algorithm that decides?

Hannah Taieb:
I'm very pleased to hear this question. Obviously, this is a big challenge we have been trying to address for years now, and let me answer from several angles. The first question is: do we have many viewers who really want to interact with algorithms? There are various levels. First of all, having like and dislike icons, which we can put forward. In terms of transparency, we have to explain all this very clearly to the user: if you like this, then there will be more similar content. That is the first aspect: can we show the algorithms rather than hide them? The current discourse is that AI is everywhere and every platform wants to have AI; a couple of years ago, this was even considered shameful. So that is one aspect. The second step: do we have algorithms for de-recommending? We have many. For example, the Surprise algorithm: you click on the button and you get a random combination. Only a few of our clients have implemented that. Discoverability is not just personalization, and certainly not ultra-personalization. Recommending to the user only content that he would like is what we believe creates retention, but the risk is to create filter bubbles in which users are enclosed. I have experienced this myself and felt really enclosed. So there is awareness-building to be done for web platforms and streamers, because it is not obvious, it is not easy. Discoverability is a commitment alongside retention, and it is very complicated and complex.
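The "Surprise" de-recommendation idea Taieb mentions can be sketched as the inverse of similarity ranking: instead of maximizing keyword overlap, sample a title that shares nothing with the user's history. This is an editor's illustration under invented data, not the algorithm any platform actually runs.

```python
import random

# Sketch of a "Surprise" / de-recommendation button: instead of ranking
# by similarity, sample a title sharing no keywords with the user's tastes.
# Catalog and keywords are invented for illustration.

CATALOG = {
    "Film A": {"drama", "french"},
    "Film B": {"comedy", "road-trip"},
    "Film C": {"documentary", "nature"},
}

def surprise(liked_keywords, catalog, rng=random):
    """Pick a random title with zero keyword overlap with the user's history."""
    off_the_beaten_track = [
        title for title, kw in catalog.items() if not (kw & liked_keywords)
    ]
    return rng.choice(off_the_beaten_track) if off_the_beaten_track else None

print(surprise({"drama", "french"}, CATALOG))  # Film B or Film C, never Film A
```

The design point is that diversity features need an explicit objective of their own; left to a pure similarity objective, the "off the beaten track" set is exactly what gets filtered out.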

Steven Tallec:
In general, we underestimate the power of recommendation algorithms and their capacity to recommend all categories of content. I would like to react to what Hannah just said. It's true that we often talk about the discoverability or visibility of content within a given interface, a service, a platform or a channel, but currently there are other online interfaces, for example connected TVs, which are the places where services meet users, and there is a real power struggle there, especially in the coming years, and especially for French-speaking cultural services such as TV Saint-Germain or ARTE. In this ocean of contents and applications, I believe it is very important to recall these facts here.

Hannah Taieb:
Maybe to add to what has just been said: there is also a technological stake in linguistic diversity. It is very important that producers and broadcasters of content be at the same level as the Americans. If you do not have metadata, if you do not have good referencing, the algorithm will not make your content emerge; it will be impossible. The major platforms, HBO Max and so on, invest at a very high cost, and some are acquiring new users every day. If services such as Canal+ or France Télévisions, for the French-speaking world and Quebec, do not have the same technological tools, the gap will be huge and not easy to close, in terms of cultural and linguistic diversity. Thank you.

Audience:
My name is Hamid Hawja, from Morocco, director of Hebdo magazine. I have two questions. First, I am wondering: do we have guidelines for algorithm designers? We have seen, for example, Microsoft stop its Aliza application because some racist content was being generated by the AI. Secondly, and unfortunately, platforms such as GPT aim at financial benefit and profitability, so taking cultural and linguistic diversity into account is an illusion; it will be very difficult, because above all they are motivated by money in all their projects. So it is very important for international organizations such as the UN and UNESCO to act. AI today cannot really take on board African dialects and languages, and there is essentially zero effort on these aspects. When I use ChatGPT and ask questions in French, I get an answer; when I ask in English, the answer is more accurate. I am not even talking about Arabic.

Moderator – Henri Eli Monceau:
Thank you. This is a very important point you are raising, which was also highlighted in the presentation: the material on which AI is based. You mentioned GPT, but more generally, how can we move towards this real diversity?

Hannah Taieb:
Real diversity is very important indeed, and it all depends on the models and the business models. Algorithms are trained on some content, and then they output some content. Now, other models can be trained. It is very important to have sovereign databases, and these sovereign databases, at country or regional level, must be supported for the purpose of cultural and linguistic diversity. We are currently advising institutions, and media platforms in general, to create such databases: databases built while taking into account their specific cultural and linguistic diversity. This is the only way to go about it. So it goes through advisory work and awareness-building, and then the models will follow.

Moderator – Henri Eli Monceau:
This is almost a secondary consideration. Yes, please go ahead. We only have five minutes, so we have to be very brief and quick.

Audience:
So, if I understood correctly, you plan to feed ChatGPT, or the AI engine, with local content and local cultural models? Because the typical problem, if we refer to the classic car-crash example in AI ethics, is that the choice between saving the elderly person or the child changes between Eastern and Western countries: Chinese respondents would probably save the old man, while Western respondents would save the kid. So, of course, to have local content, we need to feed the system with local information. That is the approach, if I understood correctly.

Moderator – Henri Eli Monceau:
Yeah, thank you. And maybe we take also your question, so we will answer both at the same time.

Audience:
Hello, good morning. I'm from Malaysia, from the Malaysian Communications and Multimedia Commission, a regulatory body in Malaysia. My question is: with the issue of hyper-choice and content overload, how can regulatory bodies effectively control and guide the use of algorithms to ensure users are not overwhelmed and can access diverse content easily? I'm looking at this from the perspective of users. Thank you.

Moderator – Henri Eli Monceau:
Just one second. Thank you. Thank you very much. We'll give the floor to Steven and Hannah, and then a concluding word by Steven. Hannah, would you like to start?

Hannah Taieb:
Exactly. This is exactly what I'm saying, but I'm also saying that the model itself should be sovereign. It doesn't necessarily have to mean feeding into ChatGPT; it can also mean creating our own models. This is what we are doing at Spideo: we are creating our own models, building good data sets and then virtuous models. You don't have to put your data out in the wild in ChatGPT or in any kind of open-access (not actually open-source) generative AI model.

Audience:
This will probably not solve the typical problem of minority languages, because the amount of information will be far smaller than for Western languages. So it's very limited.

Hannah Taieb:
This is why it may have to be at the level of the state, or at the level of the region. It doesn't have to be every single little company building its own models, but it still needs to be regulated. Of course, it is an issue that we don't know how those models are really trained, because they are black boxes; this is why explainability and transparency are extremely important. I'm not from OpenAI or Microsoft, so I don't have all the answers on that, but of course regulation is key at the end of the day, for the quality of the experience. As for the hyper-choice question, I think it is also a matter of visualization. I think the age of the rails is over; Steven, you might have an opinion on this. To fight hyper-choice we need, of course, a good algorithm, but we also need good UX, and good UX and good algorithms should go together. By that I mean no more rails, but maybe new interfaces. We were talking about Surprise before: maybe it's a Tinder-like experience, rather than the 2,000 contents we see at the same time, which is impossible to choose from and overwhelming. Maybe it's a tunnel experience. Maybe it's actually a ChatGPT-like experience, an ongoing conversation: what should I watch tonight? What am I in the mood for? One content at a time. That would be my answer.

Steven Tallec:
I think it is most relevant to finish on Hannah's statements on these two matters.

Moderator – Henri Eli Monceau:
I will therefore give the floor to Xianhong, who is in the virtual space. Xianhong, can you hear me?

Xianhong Hu:
Yes, thank you, I hear you all. Although it is virtual, it feels very real: I heard so many real concerns from the distinguished delegates in the room, touching upon so many dimensions. I'd like to add a last statement from UNESCO. All of this also boils down to a huge issue: how we govern digital platforms and algorithm development in the AI age. We certainly need technical solutions, but we also need a policy and regulatory framework in place to ensure that local languages and local content become part of digital content, and are eventually used to shape more inclusive outputs from generative AI. That is why I just posted a link to the recently developed UNESCO guidelines on the governance of digital platforms. They clearly set out principles for governments to regulate digital platforms, whether they are global digital platform companies or local national companies. They also set out principles for digital companies to align with: for example, that content curation and moderation should be transparent, as our panelists also mentioned; that diverse expertise should be part of all regulatory arrangements; and that digital platform governance should promote cultural diversity and the diversity of cultural expressions. Again, every stakeholder has a huge role to play. International organizations like UNESCO, as the distinguished delegate from Morocco just mentioned, have an obligation to provide policy and technical assistance, and to monitor and report on the situation. The same goes for civil society, media and other organizations: we are all watchdogs monitoring and evaluating the development of this important issue of discoverability and the linguistic deficit in the AI age.
So again, I call upon you to join our ongoing discussion; the newly created coalition can carry this conversation forward in future forums and eventually translate it into actions. Thank you.

Moderator – Henri Eli Monceau:
Merci beaucoup. Thank you very much, Xianhong. And to conclude, as the next session is already waiting to enter the room: for the International Organisation of La Francophonie and its member states, we are of course focused on the negotiations starting next week in New York on the Global Digital Compact, in order to obtain references that are as clear as possible on this issue of cultural diversity, and then discoverability. Many thanks to our panelists. These very quick and short sessions are a bit frustrating, but I believe this gave us a very good outlook on the issue, and of course we will follow up; we will have several opportunities to discuss this further in other sessions. Have a great day at WSIS. Thank you very much.

Speaker statistics

Speaker                          Speech speed   Speech length   Speech time
Audience                         126 wpm        494 words       236 secs
Hannah Taieb                     142 wpm        1595 words      674 secs
Moderator – Henri Eli Monceau    125 wpm        1066 words      514 secs
Steven Tallec                    133 wpm        1051 words      473 secs
Xianhong Hu                      139 wpm        1526 words      659 secs