Managing Change in Media Space: Social Media, Information Disorder, and Voting Dynamics
18 Jun 2024 15:00h - 16:00h
Table of contents
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Knowledge Graph of Debate
Session report
Full session report
Experts tackle misinformation challenges amid global elections at panel discussion
During a panel discussion featuring experts from various organizations, the pervasive issue of misinformation and disinformation was addressed, particularly in the context of global elections. The panel included Giacomo Mazzone from EDMO, Irena Guidikova representing the Council of Europe, Aistė Meidutė from Delphi, Paula Gori, Secretary General of EDMO, and Afia Asantewaa Asare-Kyei from the Open Society and the Oversight Board.
The session began with an acknowledgment of the challenges posed by the simultaneous occurrence of multiple global elections and the impact of artificial intelligence in spreading misinformation. The panelists concurred on the complexity of the misinformation issue, advocating for a coordinated, multi-stakeholder, and multidisciplinary response.
Irena Guidikova detailed the Council of Europe’s strategy, which is built on three interconnected measures: fact-checking, platform design, and user empowerment. She underscored the necessity of supporting fact-checking organizations to ensure their independence, transparency, and sustainability. Guidikova also stressed the need for digital platforms to integrate human rights and safety by design principles.
Paula Gori discussed EDMO’s role within the EU framework, emphasizing a whole-of-society approach that involves fact-checkers, media literacy experts, researchers, and policy experts. She highlighted the importance of local action, as disinformation can have distinct impacts in different linguistic and cultural contexts. Gori also mentioned the EU’s Code of Practice on disinformation and the Digital Services Act, which targets risks associated with platform structures that could be exploited by malicious actors.
Afia Asantewaa Asare-Kyei provided insights into the Oversight Board’s efforts in holding META accountable for respecting human rights and freedom of expression. She detailed the board’s focus on protecting elections online, its review processes for content moderation, and the importance of implementing its recommendations. Asare-Kyei also raised concerns about the unequal attention given to elections in smaller countries and the need for sufficient resources for content moderation in non-English speaking geographies.
Aiste Meidutė shared the strategies employed by Delphi Lithuania to counter disinformation during the election period, which included increasing fact-checking content, engaging with audiences through media literacy initiatives, and producing voter-friendly TV shows and social media videos.
Audience questions highlighted concerns about the effectiveness of content moderation, especially in smaller countries, and the challenges of ensuring META’s compliance with both U.S. and EU regulations. The panelists discussed the need for sustainable financial models to support the ongoing fight against disinformation and the importance of local content moderation teams.
The panel concluded that despite the absence of a major disinformation incident during the recent European elections, the threat remains significant due to increased societal polarization and the slow-working nature of disinformation campaigns. The discussion underscored the need for continued vigilance and collaboration across borders and sectors to effectively combat misinformation and disinformation in the digital age.
Key observations included the recognition of the multifaceted nature of disinformation campaigns, which can be both political and non-political, and the acknowledgment that the fight against disinformation is a continuous process that requires long-term commitment and resources. The panel also noted the importance of restoring trust in public authorities as a means to counter disinformation effectively.
Session transcript
Giacomo Mazzone:
Okay, thank you for being here. Because we have Swiss in the audience, we are obliged to start on time. I’m Giacomo Mazzone. I’m one of the members of the Advisory Council of EDMO, that is the European Digital Media Observatory of the European Union. And we have with me here in the room Irena Guidikova from the Council of Europe and Aistė Meidutė from Delphi. And we have also online with us Paula Gori from EDMO, who is Secretary General of the organization, and Afia Asantewaa Asare-Kyei, who is Director for Justice and Accountability at Open Society. And she is here with us as a member of the Oversight Board. So I think that we can start after the presentation. You know what the topic of today is, I guess, because if not, you would not be in the room. But just to introduce it a little bit to you: this has been an exceptional year, because it’s the first year in which we have many elections all over the world taking place at the same time, but it’s also the first year in which we have the impact of artificial intelligence used for spreading misinformation and disinformation across the world. So it’s interesting, at the middle of the year, and with probably the worst still to come, to take stock and to see what happened in the first months, in order to understand where we are in the battle to tackle disinformation and to try to preserve the electoral process all over the world. I will give the floor to Irena Guidikova, because the Council of Europe is, as we all know, the place where we try to reconcile freedom of expression with the integrity of elections. Irena?
Irena Guidikova:
So hello, everyone. I think it’s not the first time that you hear from me. I’m really happy. Thank you, Giacomo, for inviting me to the session. I don’t know if my screen is being shared.
Giacomo Mazzone:
Not yet.
Irena Guidikova:
Oh, wait, sorry. That should be the case now.
Giacomo Mazzone:
Nope, not yet.
Irena Guidikova:
All right, never mind. So I do represent the Council of Europe, Europe’s oldest and largest organization. It’s an intergovernmental organization of 46 member states. And for some reason, my presentation is not showing. Is it showing? No, it’s not showing. Never mind.
Giacomo Mazzone:
I can send it to you. No, it’s not that one. I send you.
Irena Guidikova:
The Council of Europe is a standard-setting organization, among other things. And it has recently adopted a guidance note on countering the spread of online mis- and disinformation. It was adopted last December. It was prepared by an intergovernmental committee, the Committee on Media and Information Society, where all the 46 member states are represented, along with a lot of civil society organizations, including journalism organizations and others. Now, this guidance note showcases interconnected measures in three areas, and these are fact-checking, platform design, and user empowerment. So, basically, these are the three pillars of fighting disinformation that the Council of Europe is recommending to its member states, and I should underline that this should happen in a coordinated, multi-stakeholder approach, including the users and including non-governmental organizations and industry. Now, if I go one by one through each of these pillars or areas, first of all about fact-checking. Now, fact-checking is essentially a journalistic process and a profession. It’s very difficult to improvise oneself as a fact-checker, although we do also have trusted flaggers and citizens that do fact-checking. But primarily, it’s a cornerstone of responsible journalism, and it’s one of the key tools to fight disinformation. There are dedicated, as I’m sure you know, fact-checking organizations, and they need to be supported, both financially but also in regulatory terms, to become trusted actors in the information environment. And states and other stakeholders should ensure their independence, their transparency, and their sustainability. Their independence from any political or commercial interests, that’s clear. Their transparency with regard to the working methods they use, the methodologies, and the sources they use to check whether the facts shared are correct or not.
And finally, their sustainability, in particular their financial sustainability, so that we make sure that these organizations do become really professional, that they carry out their role, which is actually 24-7, and that they do their vital work without any undue interference. Finally, through their own cooperative networks, fact-checking organizations should ensure quality standards, and the states and other stakeholders should have a way to check their effectiveness. Now, digital platforms are also called on to participate in the effort of fact-checking, either by integrating fact-checking into their own internal functioning, though this can basically be done only by the bigger platforms, or by associating with independent fact-checking organizations and integrating external fact-checking into their content curation systems. The second dimension recommended by the Council of Europe’s guidance note on fighting misinformation and disinformation concerns digital platform design. Now, there is a very wide range of measures that states can undertake to ensure that there is human rights by design and safety by design, and this probably rings a bell with something that I said this morning. Human rights by design and safety by design are general principles, not just for disinformation but for many other harmful and illicit activities or content online, including hate speech. So they are always there as requirements with regard to human rights measures for managing disinformation and for putting in place mitigating strategies. Design measures for platforms should obviously not focus on content, but on the processes by which the platforms judge and decide whether content should be suppressed in the first place, which should be rather rare and done only in exceptional circumstances clearly defined by law, or whether content should be prioritized or deprioritized, promoted, demoted, monetized, demonetized.
There are also other ways, apart from these harsh measures by platforms. Okay, right, thank you so much.
Giacomo Mazzone:
You have to tell her.
Irena Guidikova:
Oh, yes, you move, I’ll tell you where to stop. More, more, more, more. Platforms, there we are. There are other measures, apart from managing the actual content, that can help alert the users to the potential risk of disinformation and misinformation. Let me just open a parenthesis, because you see that I’m using misinformation and disinformation. They are interconnected, but they are two distinct types of incorrect information provision. Disinformation is the more deliberate action, by any content provider or creator, whether it be a state or another organization, or even an individual. And misinformation is more of an unwitting sharing of wrong information. That’s why, to encompass the two, the Council of Europe uses the term information disorder. So apart from managing content, there is also the provision of supplementary information to users, such as debunking labels, age-related alerts, trigger warnings or others, that can alert the users to the potential presence of incorrect content or information disorders. In this context, what’s really important, and this is, I would say, an overarching principle of countering information disorders, is that the best way to counter disinformation is by the provision of trusted information from trusted sources, the prioritization of independent and professionally produced content and public interest content. It’s not so much by displacing, suppressing, or deprioritizing the wrong content as by the provision of trusted content that disinformation is best managed. User empowerment, and you can move to the next slide, is the last pillar. The guidance note focuses on user empowerment to make sure that users become resilient to disinformation, while warning, and this is really important, of the risk of fostering narratives that blame the victims, the users who fall prey to disinformation, or of burdening them with excessive responsibilities.
At the end of the day, it’s for states, digital platforms and media to remain primarily in charge of promoting structural conditions for a healthy media ecosystem, and to ensure that reliable and quality information on matters of public interest is abundant and easily accessible online. At the same time, citizens of all ages need to be equipped to discern fact from opinion and reliable information from fabricated myths and untruths. And the more people are able to deal and cope with disinformation, the less we need to worry about them being targets of disinformation. So digital platforms should empower the users, including through systems such as trusted flaggers, at different levels, keeping an eye on linguistic and cultural differences as well. But the empowerment of users goes through education, digital literacy, and other more general measures that are beyond the reach of digital platforms and would require cooperation between states, public authorities, media, and educational institutions. So I’ll stop here to let the other speakers take their turn. And of course, I’ll be happy to answer any questions.
Giacomo Mazzone:
Thank you. Thank you for this very interesting presentation. Can you leave the last slide up, just to situate what we are talking about? Because this is a good transition to the next speaker, that is Paula, because we are shrinking now this horizon that is the larger Europe. We go to the smaller Europe, we can say, that is the European Union. And the European Union has been worried by the spread of disinformation and has made over the years many attempts to try to prevent this kind of problem. Initially it tried the way of co-regulation, or let’s say self-regulation, through an agreement with the platforms, which the Council of Europe has also tried, and the Code of Practice was signed. But then, after the first evaluation of the Code of Practice, it was seen that this was not enough. And beside the self-regulation and the co-regulation, we are now in the regulation phase. One of the tools that the Commission put in place in order to measure the fulfilment of the obligations of the Code of Practice was exactly EDMO, the European Digital Media Observatory, which echoes in its name the European Audiovisual Observatory that is in Strasbourg. And Paula Gori is the Secretary General of this body, which is based in Florence at the European University Institute. Paula, can you explain what the European Union has done specifically for the European election that took place a few days ago?
Paula Gori:
First of all, thank you very much for inviting me. Just a technical question, shall I share the screen or will it be shared there, just to know?
Giacomo Mazzone:
Yes, let’s try. If it doesn’t work, we have a backup.
Paula Gori:
Okay, because it said that I’m not able to do that.
Giacomo Mazzone:
You will be empowered soon.
Paula Gori:
Great, I’m always happy when I’m getting empowered. But I’m still not.
Giacomo Mazzone:
But you have to have a little bit of patience.
Paula Gori:
Thank you. All right, so can you see my presentation now?
Giacomo Mazzone:
Yes, now yes.
Paula Gori:
Great, super. So yeah, as Giacomo was saying, let’s say that the EU started already some years ago to deal with disinformation. And as was previously said, there is no one single solution to disinformation. It’s rather what is the so-called whole-of-society approach. So there are different pieces of a puzzle that jointly could do the trick. And to do that, and this was already said, what is important is to have not only a multi-stakeholder approach, but also a multidisciplinary one. To give a very easy example: we definitely need, for example, in research, neuroscientists or psychologists or sociologists, to tell us how disinformation actually impacts our brain, why we share irrationally or rationally, what is, if you want, the whole health status of our society, why we have the need to stay in echo chambers and why not. So you see, it’s really multidisciplinary. It’s not the usual data analysts and lawyers. It’s way more. And as previously said, we need all the stakeholders involved jointly. And this is why EDMO exists. So EDMO brings the stakeholders together. So we have the fact-checkers, the media literacy experts, the researchers, the policy experts. And what we do is we jointly try to find common solutions or, if not solutions, at least the tools that we need to fight it. Let’s think about data access in research, which is fundamental. So we’re talking about access to the data of the online platforms and search engines, which is actually, if you want, the last mile that we are missing to completely understand disinformation. Once we get that, it’s going to be way easier also to tackle it. Now, what we have the privilege of is a collaboration with hubs that cover all member states. And as Giacomo was saying, it’s fewer member states than the Council of Europe, but still quite a good number. And they act locally. And why is this so important?
Because as we often say, disinformation has no borders, and that’s completely true, but the local element keeps being very important. Not only language, but also culture, history, broadband penetration, media diet, and so on. So it’s really important to always include the local dimension as well. We see narratives that enter some countries and that completely do not enter other countries. And these are the factors
Paula Gori:
that actually make the difference. Now, ahead of the elections, what did we do as EDMO? Just to first go very quickly back to what Giacomo was saying. You know, there is a Code of Practice on disinformation in the EU, which was strengthened a few years ago, and then the EU adopted the regulation which is called the Digital Services Act. Just a very important disclaimer here: the Digital Services Act is not on disinformation. The Digital Services Act, the DSA, is actually mainly on illegal content. This is important to highlight, but within the DSA there is also room for this. And the whole main approach of the DSA is that, given that regulating content risks being dangerous because of freedom of expression, let’s go, if you want, by design. So basically there are risk assessments that the platforms have to run regularly: to put it in very easy words, they have to assess if the way they are structured can actually be exploited by malicious actors to share specific content, including, in this case, disinformation. So under the DSA, there is the possibility of translating the Code of Practice into a code of conduct, and not only signing this code but also respecting its principles can be evaluated, in the risk assessments, as a mitigation measure to these risks. And EDMO is part of the permanent task force on this code. So basically this is a task force that helps in implementing the code, very practically speaking, and also in making sure that the code keeps being aligned with the technological developments and, in general, the developments in the field. So, having said that, what did we do before the election? We built on one side a task force that was composed of representatives of all our hubs. And the idea there was on one side to analyze what happened in the national elections, because we could learn something from that, and on the other side to monitor the trend of disinformation in the EU.
And I have to say, analyzing the national elections was already very useful, because many things, like for example disinformation on how people would vote, so on the processes, were something that we saw coming, because this happened also in the national elections. And this task force also produced, for more than a month, a daily newsletter with the disinformation narratives which were detected in the EU. So, very quickly, the recap of what we saw is that indeed we didn’t see lots of disinformation in the two or three days of the elections, which means everything and nothing, of course. But what we did see, of course, was disinformation in the weeks, days and months before. And you know that to build an idea, it actually takes time. So dealing with disinformation only ahead of elections makes no sense at all. We need a systemic approach to disinformation, because it’s a long-term process, also in how it actually impacts our brain. And there were, so far, no big disinformation campaigns trying to put into question the results, the outcome, of the elections. Now, what we saw in the last month before the elections was clearly lots of disinformation growing related to the EU, so basically trying to undermine the EU, the way it works and the things that the EU does. And as you can see in these slides, we try to monitor also some very broad topics like climate change, which is actually one of the topics that tends to be stable and growing, because it can very easily be attached to political agendas. Same for LGBTQ+. Then you see disinformation on Ukraine, which was used a lot to attack the EU institutions. We saw lots of disinformation around migration and other topics. Sorry, it’s blocked.
Okay, so as I was saying, the war in Ukraine, just examples: for example, it was said that people in the EU are forced to eat insects because of the sanctions the EU is imposing on Russia, and that basically we are suffering a lot because of that. Climate change: I don’t know how familiar you are with climate change disinformation, but there is a clear trend from old denialism, so basically denying climate change and the causes behind it, to new denialism, so basically disinformation on the solutions. And we saw that a lot, of course, considering the Green Deal that we have in the EU. Migrants, here again, especially Ukrainian migrants, but also refugees, and in general migrants, because there is, again, a policy discussion and regulation in this field in the EU at this very moment. And here it is very important to say that this is a topic in which disinformation can clearly trigger hate speech. So this is something to take into consideration: disinformation can then also, unfortunately, lead to other types of content. And election integrity: as I was saying, we saw quite a lot of disinformation on how to vote, so double votes, or saying that you can vote by post in countries in which it was not allowed, or stuff like that. And more in general, we also saw, if you want, specific attacks. I don’t know how much you have seen the attacks that were directed at Ursula von der Leyen, like saying that her family is linked to Nazis and so on. But one very important thing, and now I can skip ahead: Giacomo was mentioning artificial intelligence, which is of course something important, so I wanted to get to that slide. But the old ways of producing disinformation still hold quite strongly. I will finish now, don’t worry. And this is very important to say: AI, generative AI, is doing damage.
But the old-fashioned ways of producing disinformation still hold, like wrongly dubbing a video or miscaptioning a picture or stuff like that. So this is something very important to keep in consideration. And last but not least, unfortunately I don’t know why I forgot to put it in this slide, but we also ran a BeElectionSmart campaign that was run in all member states, in all local languages, and I have to give credit to the platforms, because the online platforms collaborated with EDMO. We got free ad credits from X, and it was promoted also by all the other platforms. So this is a way in which sometimes collaboration with platforms actually works, and it’s important also to highlight that. Open for any questions, of course, and I’m sorry if I was a little long.
Giacomo Mazzone:
Thank you very much, Paula. So the European Union, which is smaller than the Council of Europe, spoke more than the Council of Europe. I’m afraid of how long the Oversight Board of META, which speaks for the whole world, will talk. Afia, the floor is yours. Thank you.
Afia Asantewaa Asare-Kyei:
Thank you so much. Those two presentations were wonderful, thank you. Thank you to the organizers for associating the Oversight Board and giving it the opportunity to share what has been happening in other elections around the world, outside Europe, but which would be relevant and might be similar to the EU, and, I guess most importantly, what the platforms, particularly META, have learned globally and how it is impacting or could impact Europe. So just briefly, a minute for me to broadly introduce the board. The board is an independent attempt to hold META accountable and ensure that its policies and processes better respect human rights and freedom of expression. So what does this mean in practice? What it means is that when you feel META has gotten something wrong on Facebook, Instagram and now Threads, you can appeal to the board, and the board is made up of 22 global members. When we started, we had scope over Facebook and Instagram, and recently, due to some advocacy for scope expansion, Threads has been added. Now, we are truly independent, and we have overturned META’s decisions on whether content goes up or down around 80% of the time, based on the cases that we have so far handled. The board also makes what we call recommendations, very impactful recommendations, on just about everything from how to better protect LGBTIQ+ users to making sure that government requests to remove content are reported clearly. To date, if I’m not mistaken, based on the recent data from our implementation committee, we have made about 250-plus recommendations, and a good percentage of those META has implemented or is in the process of implementing. So thanks to the board, META’s policies and practices are much clearer. You know, the company is being much more transparent. You as a user are now told exactly why your content has been taken down and why you have been given a strike.
If you have been given a strike, you can also edit your post if it has been flagged as likely to be removed or demoted. So I think this is huge, because the alternative before was often a default to removal of content which might be harmful only in a small way. So we can imagine how much content fell into this category and what that means for public debate, if a lot of content was removed. So META has agreed to track and publish data on when governments request policy-violating content to be removed from its platforms. And this is big, it’s a big advance for users’ rights, and it allows people to understand what kinds of content governments and state actors are seeking to remove, especially in election contexts. Now, specifically on elections: protecting elections online has been a key focus of the board for the last couple of years, but it became one of our official priorities in 2022. So we were preparing ahead of this historic year of elections for the past two years, by taking high-profile cases from all around the world, looking at everything from, you know, disinformation to incitement to violence from politicians. The board has worked to ensure that META removes extremely harmful political content that is likely to fuel actual violence. We then, you know, make very sweeping recommendations on how the company should improve, to ensure this does not keep happening. One of our major concerns has been to ensure that, you know, global users are not being forgotten. So we’ve looked at whether META was right to suspend former U.S. President Donald Trump from its platforms for stoking violence that led to the attack on Congress; and whether, you know, manipulated content of U.S. President Joe Biden, that was made to seem like he was groping his granddaughter while he was sticking an “I voted” sticker on her chest, should be taken down.
And how manipulated content should be treated more broadly. We’ve also been looking at whether, you know, content by Brazilian generals calling on people to hit the streets and take over government institutions by force should have been allowed to spread. And as well, we looked at whether a video by then Cambodian Prime Minister Hun Sen, in which he threatened violence against political opponents, should be allowed on META’s platforms. So on the whole, I think we found that the company is trying to grow up and is learning some key lessons, but it still has a lot to do, much further to go, especially when it comes to the global majority. So here I’m talking about the global South and East. To hone in on the Brazil example that I mentioned: the board investigated META’s handling of a very hotly contested 2022 election and found that the company eased its restrictions too soon. Because, as I think Paula mentioned, the period before and during the vote is important, but sometimes you have to really pay attention to the post-election period. This was dangerous and allowed content by a very prominent Brazilian general, who called for people to besiege Brazil’s Congress and to go to the National Congress and the Supreme Court, to go viral. This spread in the weeks before January 8th, when thousands of people tried to violently overthrow the new government, almost in an imitation of what had happened in the US. And so we were wondering why the platform allowed this to spread. So the board has made sure that META took these posts down, but we’re really concerned about why this even happened in the first place. Why did they relax their protocols almost two years to the day after the storming of the US Capitol, as I mentioned? So to stop this happening again, the board has pushed META to commit to sharing election monitoring metrics that will clearly show what steps the company has taken and whether its efforts are working or not.
Well, they have made the commitment to do this, and it is going to be launched later this year, hopefully in time for the UK elections. Sadly, it was not ready for other key elections, such as the elections in India and Indonesia, you know, very critical, significant democracies, and even South Africa. So I think, looking forward, our experience has taught us how to work with social media companies to make an impact. And we believe that, you know, the lessons we have learned go well beyond META and can help set basic standards for other social media companies globally. Earlier this year, we issued a white paper. I don’t know how many of you have had the opportunity to see and read it, but we issued a white paper on protecting election integrity online, which made clear recommendations to companies, not just META, but social media companies in general. And the founding premise is that, you know, the right of voters to express and to hear a broad range of information and opinion is essential. You can only take down political content, especially around elections, if it’s totally necessary to prevent or mitigate genuine real-world harm, like violence. If this is not the case, then you have to keep it up, because it is going to provoke and ginger, you know, necessary debates and conversations among citizens. And to do this, we believe that the companies must dedicate sufficient resources, I think I saw this in the first presentation, sufficient resources to moderating content before, during and after elections, and not limit it to just, you know, immediately during the voting period. And here we are looking specifically at resources for non-English-speaking geographies, so that, you know, they can moderate content and understand the cultural context better. So how do you commit more sufficient resources to content moderation?
And then we’re also asking the companies to set basic standards for all elections everywhere, and not neglect the dozens of elections taking place in countries that might be less lucrative markets, where the threat of instability is often greatest. And then, never allow political speech that incites violence to go unchecked. With quicker escalation of content to human reviewers and tough sanctions on repeat abusers, they can prioritise this and make sure that any kind of political speech that incites violence is really checked. And lastly, I’m just ending, we are asking them to guard against the dangers of allowing governments to use disinformation, or vague and unspecified reasons, to suppress critical speech, because sometimes the spread of disinformation is not just by people, but by governments as well. Thank you.
Giacomo Mazzone:
Thank you, Afia. I see that you used your prerogative of representing 192 states in terms of timing.
Afia Asantewaa Asare-Kyei:
Thank you. I’m sorry, but I thought I should speak more broadly. Thanks.
Giacomo Mazzone:
The paper you mentioned is here, in case anyone wants it; very useful and very interesting. I will raise questions later, and I want to leave some time for questions. So, if we now respect the geography, you have 30 seconds as a Lithuanian.
Aistė Meidutė:
No, no.
Giacomo Mazzone:
No? Dutch, you have 45 seconds.
Aistė Meidutė:
I’ll try to be fast, but I’m not that fast, unfortunately.
Giacomo Mazzone:
No, but for the sake of the debate. Thank you very much. Aiste, please, the floor is yours.
Aistė Meidutė:
Yes, one moment, I will try to share my slides. Does everybody see that?
Giacomo Mazzone:
Yes. Yes. We have just a little screen.
Aistė Meidutė:
Great. So, first, let me introduce myself very briefly. I’m an editor and lead fact-checker in Delphi Lithuania’s fact-checking initiative, MeloDetektorus. Delphi Lithuania is the biggest news portal here in Lithuania, and we were established as a fact-checking initiative in 2018, and since then we have achieved quite a lot. We became signatories of the International Fact-Checking Network as well as the European Fact-Checking Standards Network, and we’re a happy member of the EDMO family too, as their fact-checking partner. Today I’m going to talk a little bit about how we try to save elections from disinformation and disinformers. Of course, in Lithuania it was pretty challenging to do so, because with just a couple of weeks’ gap we had presidential elections as well. So we had to tackle both these huge events at one time, and I have to say that the presidential election, of course, stole a bit of the spotlight from the European one. So we witnessed more disinformation and misinformation narratives related to the presidential elections than to the European election. Of course, in such a challenging time, you have to work with different approaches. And our approach was to use both prebunking and debunking. One of the things that we did was to increase the number of fact-checks that we normally produce, tackling especially European elections-related content: everything related to politicians as well as major decisions and the European agenda. One of the most important things we did during this period, I would say, was to increase fact-checking content in the Russian language. We also have a separate fact-checking initiative in Russian, so we try to produce as much quality content as possible for Russian-speaking audiences. It’s no surprise that ethnic minorities are one of the main targets of disinformers, in Lithuania especially.
So since the full-scale war in Ukraine started in 2022, a lot of Russian television channels were blocked in Lithuania. And, of course, those people that used to consume this content regularly were left with, well, I wouldn’t say no alternative content, but with less of the content they used to consume. So one of the tasks that we took on was to create more quality content in the Russian language so they would be served accordingly. And especially with the huge flow of war migrants from Ukraine, this need increased even more. The other thing that we did was to partner with the European Fact-Checking Standards Network, where together with 40 partners from different European countries we created the Elections24Check database. To this day there are more than 2,300 fact-checks in this database from 40 countries related to European Union topics. So it covers not only European policies and the European agenda, but also major crisis events like the war between Israel and Hamas, or Ukraine. And why this database and this approach are super important is that researchers have the possibility to use this content, to use the statistics, and to analyse the whole disinformation scene: what was happening before the election, what was happening during the election period and what is going to happen post-election. The project is still ongoing, and we already have quite a lot of data collected and narratives published. The other thing worth mentioning is that we try to engage our audiences in a kind of critical-thinking assignment by showing them a TV show called Politika in Lithuanian. In English, it means ‘catch a politician’, and it has a double meaning, a kind of wordplay: it means catch a politician, but also know the politician, understand the politician. And by understanding a candidate, a politician, we mean that you have the chance to understand how politics works,
how the basic thinking behind the political agenda is constructed, and also to think very critically about whether all those promises the candidates are making are real and easily achievable. In this TV show, we invited an expert in the political field to comment on what the candidates were saying. So we had a couple of shows before the presidential election and also a couple of shows before the European election. And the last effort that we made in countering disinformation and misinformation before the election period was to produce social media videos, mostly about media literacy, especially about how to recognise AI-generated content, and basic suggestions on how to consume information more efficiently and in a safe manner: checking the sources and trying to question each piece of information that you find online. So, I promised to be brief. Let’s connect. If you have any questions, for me personally or for Delphi, we as a media organization are always very eager to communicate with our audience and the public. And especially we as fact-checkers always welcome suggestions: how could we improve what we’re doing? Because it’s not a fight that you can take on alone. You need many people to do that, and much inclusivity as well. Thank you so much.
Giacomo Mazzone:
Thank you, Aiste. So now we have some time for the floor and some questions. Unfortunately, you have to come up here, because there is no mic in the room.
Irena Guidikova:
You can repeat the question.
Giacomo Mazzone:
It depends if it’s a short one or if it’s a statement.
Audience:
It won’t be a statement.
Giacomo Mazzone:
No? Then I can repeat if you’re short. We sit or? No, no, no, stand.
Audience:
No, no, I won’t stay here, don’t worry. So my name is Dan, I’m from Israel, and we’re facing a very big problem with disinformation in our country, especially in the current conflict, but also before, with many elections like you have this year. And I want to first of all thank you for this very interesting panel. My first question is, I think, for Irena and for Paula. You talked about collaborations and policies of EU states, European countries working together with databases and policies. What can you offer a country, or what can a country learn, that is a non-EU member, a small country that cannot work with other countries, that does not have a common language and news sources with other countries? What can we learn from the EU strategies and policies you implemented together? And I think your slides were very interesting for us, also Paula’s, for thinking about some systematic approach. And the other question I have is more for the Oversight Board, which was very impressive, but I wanted to ask: what does it help to discuss or take down content sometimes weeks after it has been promoted and published? If it doesn’t happen in 24 or 48 hours, it isn’t really worth a lot. I don’t think the Oversight Board’s job is to look at pieces of content; I think it’s to provide oversight and to see that there is accountability at the platform for its policy and strategies. And we see this a lot. And one last question, which also has to do with small countries. We hear a lot about resources going to elections in big countries, you know, India and so on. What about elections in small countries? When we try to ask Meta or other platforms what they do for elections in small countries, we don’t really get any responses. So that was my question. Thank you.
Giacomo Mazzone:
Other questions? You see? You can’t repeat it. So, other questions from the room? Other questions from remote? Yeah, please.
Audience:
Yes, I had a question for Ms. Gori and Ms. Meidutė as well. I got the feeling after the last European elections that there was a sense of relief that nothing extremely big, like, for example, Hillary Clinton’s emails, happened during the election that really seemed to have swayed things one way or the other. Do you think this feeling is justified? Have you been able to compare misinformation and disinformation between the last election in 2019 and 2024? Do you see a rise? Is it getting better or worse?
Giacomo Mazzone:
Thank you.
Irena Guidikova:
I also have a question.
Giacomo Mazzone:
For yourself?
Irena Guidikova:
No, for the other panel.
Giacomo Mazzone:
Please.
Irena Guidikova:
Yes, I was wondering about the Oversight Board and to what extent your recommendations are compulsory or, I mean, followed. Because obviously they’re not compulsory, but to what extent are they followed? And to Aistė, I had a question about your outreach. Because beyond the timing of the debunking and whatever alternative narratives, there’s also the reach. Are you able to reach out sufficiently widely, given that disinformation usually travels wider? And what are your outreach strategies?
Giacomo Mazzone:
Thank you. Then I also have some questions for some of the speakers. One question is for the Oversight Board again: do they see a contradiction with the regulations in Europe, given that they are based in the US as a company? In the US there is the First Amendment, so regulation is less stringent than it is in Europe. Does it create a problem for Meta to comply with the European regulation while it also has to comply with the US regulation? That’s my question. Then there are others, but I don’t know if there is time; we can raise them later. So, starting the responses: Paula, do you want to respond to the colleague from ISOC, for instance?
Paula Gori:
Yeah, so I have that one and the second question. So, I think, of course, the EU set up this whole effort as an EU effort also to avoid discrimination within the EU market, because legally some countries were already taking different paths, if you want. But lots of what we do, which can of course also be put under discussion, is something that can be applied everywhere. Like, for example, working more on demonetising content, so making sure that platforms don’t monetise disinformation. As was said earlier, make sure, or insist, or advocate for the fact that the platforms have content moderation teams in the country, in the local language. Also, whenever a country has a very particular language, disinformation is sometimes less foreign and more domestic, because it’s difficult to enter from outside; you have to master that language. But I guess in a country like Israel, disinformation in English still enters quite a lot. So again, try to understand what comes from where, and who the actors behind it are. Then fact-checking, but independent fact-checking. And there, the European Fact-Checking Standards Network was mentioned, but there is also the IFCN, the international one, and fact-checkers can apply if they adhere to certain standards. So again, this is something which is actually quite broad. Then research. On research, forces are actually joined globally, not only at EU level. Of course, there is the DSA and the access to data that is quite strongly imposed by the DSA. But, for example, one thing that is very important in research is to pool financial resources, but also technical resources. Not all universities in the world are technologically equipped to deal with all this data. And if I’m not mistaken, in Israel you have quite advanced tech universities. So maybe you can actually help; it’s like a do ut des.
So you can help other universities with your technology, and they may help you in research on other aspects of disinformation. And of course, media literacy initiatives, which I also mentioned. EDMO will soon be publishing guidelines on how to build a good media literacy campaign, because the point is not only to implement media literacy initiatives; they also have to meet pedagogical standards, otherwise they’re basically useless and without impact. And this is, again, something which is not only EU-related; it’s something broader. So I would say, actually, in the whole discourse, as I was saying earlier, the local element is very important. But as a matter of reflection, policy and activities to be implemented, I think we can have a more global approach. And on the relief, I personally am not relieved. Because, as I was saying earlier, it’s not because there was no major incident in those two or three days that we didn’t have a problem. Disinformation is not only political, but where it is political, it starts a long way before, also, for example, with issue-based advertising, for which even at EU level there is still no agreed definition. So I wouldn’t be so positive regarding comparisons with 2019. We’re still trying to understand how things went. But let’s be honest, technology has changed completely in these years, and so has policy. So you would be comparing things that are, if you want, structurally different in any case. But clearly there will be analysis to understand whether things went better or not. And I think those were the two questions addressed to me, so I will stop here.
Giacomo Mazzone:
Thank you. So I think you were asked.
Aistė Meidutė:
Yes, to comment on how everything compared with the previous election season, I would say that this time we have much more tension, and we are definitely a more polarised society, so it’s easier to target us. That’s why things are definitely more difficult than they used to be. We notice this especially in Lithuania, which is definitely a target because of its proximity to Russia, for instance. And of course, a huge part of Lithuanian society has this deep fear of going back to the Soviet Union, of experiencing war, and it’s easy to manipulate these emotions, and it’s easy to scare us. If we think about why there hasn’t been any major boom before the election: during probably every meeting, European fact-checkers were discussing this and preparing for a major boom. Maybe AI would create huge false information that we weren’t prepared to tackle and wouldn’t manage to counter in time. It didn’t happen, but it doesn’t mean that we’re safe. When we think about disinformation, it’s not really about those major explosions. It’s about sowing doubt, and it’s a slow-working process, but it’s still faster than those who search for the truth, the fact-checkers. There are many more of them than there are of us who try to debunk and explain things. You asked about the reach. Well, it’s hard to say. I’m pretty sure that the reach of disinformation is sometimes much, much higher than the reach of fact-checks, and that definitely hurts. It’s not an easy topic for us. We try our best, and of course, being a major media outlet in the country, I’d say that we manage to reach quite a good number of people. The problem is that whenever we talk about fact-checking, we realise that society imagines fact-checkers and fact-checking in a very different way.
Even though it’s pretty… I wouldn’t say it’s a novel practice in media, not anymore, definitely, but in Lithuania, for instance, not many people yet know about fact-checking and who fact-checkers are. So fact-checkers are still those kinds of mythical creatures that need to be understood. But I hope that we’re on the right track.
Giacomo Mazzone:
Okay. Let me be fast, because Afia has a lot of questions to answer.
Irena Guidikova:
Yeah, just to say that I totally agree with you. The crux of the matter about citizens spreading and believing fake information and fake rumours is that they don’t trust public authorities any longer. So, in fact, the best way to fight disinformation is to restore trust in public authorities. And that means really rethinking democracy, revising and revisiting democratic processes and institutions with citizens. And just to reply about Israel: Giacomo is always joking about the Council of Europe being a relatively small organisation, but the Council of Europe is actually becoming more and more of a global organisation. All of our recent instruments, treaties and conventions are open globally, including the one on AI. Israel is also an observer state; there are five observer states in the Council of Europe, and Israel is one of them. So you can participate in all of our intergovernmental committees, including the one that produced the disinformation guidance notes. So don’t hesitate. As for civil society organisations, that’s a little bit of a grey zone. Yes, civil society organisations can participate, and they can be from Israel, but maybe it’s better to associate with some international civil society organisation, and then, this way, yes. And we also actually have a South Programme with EU co-funding, which is active in Israel. So we have various channels.
Giacomo Mazzone:
Okay, Afia, there were many questions for you. Can you be short? Because we are already late. Luckily, the Swiss member of the room is not here anymore, so we can run over. I’m Italian, so you can go ahead, but not for too long, please.
Afia Asantewaa Asare-Kyei:
Sure, I have three questions.
Giacomo Mazzone:
No, no, you have answers to give us, not questions.
Afia Asantewaa Asare-Kyei:
So, there were three questions, and I’m going to take them all at once. For the gentleman who asked what it matters when our cases are decided: we have three types of decisions that we make. We have the standard decision, which is the in-depth review of Meta’s decision to remove or allow content, and which includes, of course, our recommendations. Then we have what we call a summary decision, which is an analysis of Meta’s original decision on a post when the company later changes its mind: when the board selects the case for review and we let them know, and they say, oh, sorry, we made a mistake here, it was an enforcement error, and we’re able to say, okay, quickly rectify it. And then there are the expedited cases. This is the rapid review of Meta’s decision on a post in exceptional situations with urgent real-world consequences, such as the two cases that we decided related to Israel and Hamas late last year, in October and November. So those are the three: standard, summary, expedited. So we do have a mechanism to really fast-track, and that expedited process is 48 hours; within 48 hours, we have to make a decision. And then, to what extent are our recommendations followed? Our decisions are binding on Meta, and Meta must implement them. On recommendations, Meta has up to 60 days to respond to us and to update us on what they are doing in terms of implementation. We have an internal implementation tracker; we have an internal implementation committee, because it would actually not make any sense if our recommendations were not implemented, and we might as well not exist. So yes, there is a seriousness on our part, and I believe on Meta’s part as well, to implement our recommendations, and we track them. We know how many have been implemented fully, how many have been implemented partially and how many are still to be implemented, and we have regular meetings with them to get updates. And then the contradictions.
So you are right that Meta is an American company, but it’s a global company as well, a company with global reach. I mean, here we are talking about the most powerful speech regulator in the history of humanity. So they do have to respond to and respect US regulations, yes, but also EU ones. Right now, I know that internally they are putting in place mechanisms to implement the DSA and its regulations on social media platforms and companies. So, just quickly to say that, yes, it is an American company, but it is global too. The US First Amendment, yes, makes a lot of things less stringent, but the EU is slightly more stringent, and Meta has to respond to and respect both sets of rules and regulations.
Giacomo Mazzone:
Thank you, Afia, for being short. Two final comments on my side, and then I will give the floor for the wrap-up. I have one piece of good news and one piece of bad news. The bad news is for Afia. Afia, I appreciate your effort, and I read with great interest the document about content moderation for elections. The problem is that, for instance, for Estonian, Meta has three native Estonian-speaking persons working on that language for all of Europe. And you have 11 for Slovak; in Slovakia, we had a lot of trouble in the last national elections. Nine for Slovenian, and I don’t see any for Lithuanian in the report that has been given to the European Commission. So I think there is a lot to work on on your side. The good news, answering the earlier question about why not so much happened in the European elections: you have to remember one thing. Social media can make the difference when elections are tight. With a proportional vote, you can only influence the tone and set the agenda; you cannot swing the vote. But when it comes to the UK elections, where some constituencies are decided on the basis of a difference of a few dozen votes, or US presidential campaigns, as we have seen, they make the difference. So I expect that in elections where the voting system is different, the attacks will be different, and the proportion will be higher than what we have seen. Now, sorry for closing so abruptly, but we have tried to summarize what has been said.
Reporter:
Yes, thank you very much. I’m just here to provide a wrap-up, because we need a broader consensus on the final messages. So I’ll try to sum up what has been said, and if there is any objection, or anything to add, please tell me. I also wanted to point out that you will have time afterwards to check on the shared platform if you want to add any other comments or anything you missed during today’s session. I’ll start from the context. The context: not just the European elections but, generally speaking, this electoral year has seen the presence of disinformation, even though there were no huge breakouts in the last few days before the elections. This doesn’t mean that there is no problem, because we actually live in a much more polarized society than just a few years ago, and this opens the door to short- and long-term manipulation techniques that can be even more pervasive than the outburst of a specific kind of disinformation. AI has an impact, but traditional channels are still really important in spreading disinformation. Generally speaking, the right of voters to hear and express political content is essential, but political content must be checked and constantly monitored. What are the possible solutions? A multi-method approach has been proposed that works on independent and transparent fact-checking; international collaboration, especially on demonetization and research; digital platforms, including the sharing of databases and data for research; and user empowerment through education, critical thinking, and media literacy. It has also been proposed to translate the Code of Practice into a Code of Conduct, to make it policy and not just a set of recommendations, and to pursue effective collaboration with platforms to make an impact via consistent recommendations and implementation monitoring.
And also a couple of proposals to produce a sort of voter-friendly communication on information and misinformation through, for example, TV shows and media content. Finally, the general approach needs to be multi-stakeholder, multi-disciplinary, with multi-level governance from international to national, regional, and local authorities, multi-linguistic, and it should pass through the creation of standards that can be globalized, especially for the global South. I hope that everything is clear. I am sorry if I have been too quick, but if anyone has objections, please let me know. Otherwise, you can comment later.
Giacomo Mazzone:
Thank you. And with this, we close.
Paula Gori:
Giacomo, sorry, if I may, just two things. One is on the Code of Conduct: it is already foreseen, on a policy level if you want, that the Code of Practice becomes a Code of Conduct, so it’s not something that we are proposing. And could we add a point on the fact that we need long-term sustainability from a financial point of view? It was mentioned for the fact-checkers, but also the civil society organizations and all the people, researchers and so on, working on disinformation cannot do it for free. And so far, honestly, we don’t see a solid and sustainable business model for everybody here. And we saw it clearly also from the presentation of Delphi: the risk is that, in the end, fact-checkers work for free, and civil society organizations as well.
Giacomo Mazzone:
Okay, perfect. For the platforms. We have to close here; there are a lot of things. Afia, you are lucky that we have to close, because the questions are piling up for you. Thank you very much, everybody, and we will continue the discussion in the next part of the session, which will start soon. Thank you.
Speakers
AA
Afia Asantewaa Asare-Kyei
Speech speed
161 words per minute
Speech length
2244 words
Speech time
836 secs
Arguments
The Oversight Board is an independent entity holding META accountable for policy and human rights respect
Supporting facts:
- The board has 22 global members
- Allows appeals against META decisions
Topics: Freedom of Expression, Human Rights
META has implemented a significant number of recommendations from the Oversight Board
Supporting facts:
- 250 plus recommendations made
- A good percentage has been implemented
Topics: Corporate Accountability, Good Governance
Election protection online is a priority for the Oversight Board
Supporting facts:
- Cases dealing with disinformation and incitement to violence are handled
- Focus since 2022
Topics: Election Integrity, Online Safety
The Oversight Board pushes for transparency and monitoring of election-related content
Supporting facts:
- Commitment to share election monitoring metrics
- Investigated Meta’s actions during Brazil’s 2022 election
Topics: Transparency, Electoral justice
The Oversight Board advocates for the moderation of content in all languages, especially during elections
Supporting facts:
- Highlight the need for moderation in non-English speaking geographies
- Called for dedicating sufficient resources
Topics: Linguistic Diversity, Inclusiveness
The Oversight Board warns against government-induced disinformation and political speech that incites violence
Supporting facts:
- Concern about unchecked political speech
- Highlight the role of governments in spreading disinformation
Topics: Misinformation, Political Violence
Report
The Oversight Board, comprising 22 international experts, acts as an essential authority ensuring META adheres to stringent policy and human rights standards. This independent entity underpins the vital connection between freedom of expression and human rights, which coincides with the aims of Sustainable Development Goal (SDG) 16 to promote peace, justice, and robust institutions.
Significantly, META has demonstrated a commendable approach in actioning the Oversight Board’s recommendations: more than 250 recommendations have been made, and a good percentage have been implemented, supporting corporate accountability and good governance. This evidences the practical influence of the Board and META’s dedication to social media governance and policy refinement.
The commitment of the Oversight Board to safeguarding election integrity and online safety, a particular focus since 2022, reflects its proactive measures in addressing disinformation and potential incitement of violence. The handling of such cases is indicative of the Board’s vigilance in protecting the integrity of digital electoral processes.
In promoting electoral justice, the Board’s promise to disclose election monitoring metrics and its scrutiny of Meta’s conduct during Brazil’s 2022 election embody its advocacy for transparency. This commitment ensures the monitoring of election-related content is transparent and just. Furthermore, the Board champions inclusivity and linguistic diversity, highlighting the necessity for comprehensive content moderation across various languages to prevent election misinformation.
Emphasising the need for adequate resources for this critical function reflects the Board’s intention to provide equal online safety measures for all communities. Despite the generally positive sentiments surrounding the Board’s efforts, it expresses concern about unchecked political speech and disinformation spread by governments, which can fuel political violence and erode democracy.
By raising awareness and advising on countermeasures, the Board aims to mitigate the risks associated with harmful political narratives. Afia Asantewaa Asare-Kyei reinforces the paramount importance of the Oversight Board in promoting uniform social media policies and maintaining platform integrity globally.
The issuing of a white paper and the insistence on universal electoral standards underscore the Board’s quest for consistency in managing online spaces. In summation, the Oversight Board exemplifies dedication to fostering an online environment that upholds democratic values. Its work encompasses advocating for fairness, transparency, and inclusivity on META’s platforms, reflecting its close allegiance to the enhancement of digital governance.
These contributions and concerns indicate an ongoing initiative to guide social media policies towards a more responsive and just digital environment, resonating with the principles of SDG 16.
AM
Aistė Meidutė
Speech speed
128 words per minute
Speech length
1382 words
Speech time
648 secs
Report
The speaker, an editor and lead fact-checker at MeloDetektorus under Delphi Lithuania, began by outlining their role within the nation’s largest news portal. MeloDetektorus, established in 2018, has earned a respected reputation as a signatory to both the International Fact-Checking Network and the European Fact-Checking Standards Network.
As an EDMO partner, they further strengthen their credibility in the fact-checking landscape. Their work is particularly focused on combating disinformation during elections, as seen during the Lithuanian presidential and European elections. Amidst these simultaneous elections, a notable increase in disinformation campaigns targeting the presidential race demanded their attention.
The team’s approach integrated preventative information collection and active debunking, leading to a heightened output of fact-checks, especially concerning the European elections. They meticulously analysed politicians’ statements, pivotal decisions, and the European Union’s agenda. A significant step in their strategy was to reinforce the availability of fact-checking services in Russian.
This was crucial for serving the ethnic minorities who relied on Russian media, a necessity that grew after Russian TV channels were blocked in Lithuania following the escalation of the war in Ukraine in 2022. The situation was further compounded by the influx of wartime migrants from Ukraine who were in need of reliable Russian-language news.
The launch of the ‘Elections24Check’ database was a noteworthy milestone. In conjunction with the European Fact-Checking Standards Network and 40 European partners, they compiled over 2,300 fact-checks on a range of EU-related subjects. This resource is instrumental in scrutinising European policies and analysing disinformation trends surrounding major crises, such as the Israel-Hamas conflict and the war in Ukraine.
It offers researchers an invaluable tool for exploring the dissemination of disinformation in relation to voting periods. To encourage public engagement and critical thinking, Melo Detektorius produced ‘Politika’, a TV show whose title plays on both catching and understanding politicians. It featured political experts who dissected candidates’ statements and pledges, prompting viewers to evaluate political promises thoughtfully.
They also harnessed social media to produce educational videos on media literacy, guiding audiences on how to distinguish AI-generated content and adopt secure information consumption habits like source verification and critical evaluation of online information. The speaker discussed how existing societal tension and historical animosity with Russia make Lithuania especially prone to emotional manipulation and disinformation.
Although the wider spread of false information compared to verified fact-checks can be discouraging, the speaker remained hopeful, noting that fact-checking is still emerging in Lithuania’s media. The intention is to further its recognition and comprehension. The presentation concluded with an appeal for feedback and collaborative efforts to fight disinformation.
The speaker emphasised the importance of inclusivity in this collective fight, committing to ongoing interaction with the audience and continual refinement of their fact-checking tactics to address misinformation effectively.
A
Audience
Speech speed
153 words per minute
Speech length
456 words
Speech time
179 secs
Arguments
Need for addressing disinformation in non-EU, small countries
Supporting facts:
- Israel facing a big problem with disinformation
- Issue present both during conflict and elections
Topics: Disinformation, Information Policy
Learning from EU strategies and policies on combating disinformation
Supporting facts:
- EU countries working together with databases and policies
- Strategies might be adaptable for non-EU members
Topics: EU Collaborations, Information Policy, Anti-Disinformation Strategies
Challenges in policy effectiveness with delayed content moderation
Supporting facts:
- Delay in taking down content reduces the effectiveness of moderation
- Need for timely review by Oversight Board
Topics: Content Moderation, Oversight Board
Lack of resources for election integrity in small countries
Supporting facts:
- Small countries perceive less attention during elections
- Meta and other platforms are not responsive to small countries’ inquiries
Topics: Election Integrity, Digital Platforms, Resource Allocation
Report
Israel is grappling with a significant disinformation challenge, an issue that persists not only during conflicts but also during election periods. There is a heightened sense of concern regarding the necessity of addressing disinformation in small, non-EU countries. Looking to the EU for insights, it is observed that member states have been effectively combatting disinformation through collaborative efforts, employing interconnected databases and policies.
The adaptability of these EU-centric strategies for non-EU countries is a point of investigation, suggesting potential benefits for improving their own information policy frameworks. Another critical point raised is the delayed response in content moderation. The effectiveness of this approach diminishes significantly with time, emphasising the importance of swift action by the Oversight Board to maintain policy effectiveness and the integrity of digital platforms.
Additionally, smaller countries face dissatisfaction with their election integrity processes, often feeling overlooked in comparison to larger states, particularly in their dealings with major tech companies like Meta, who are criticised for a lack of responsiveness. This underscores the issue of unequal resource distribution and attention during electoral periods.
Contrasting these concerns, a positive sentiment is expressed towards a recent informative panel discussion, which has been praised for its insightful contribution to the discussion on these important subjects. In summary, the analysis highlights the struggle of small, non-EU countries with the intricate problem of disinformation, advocating for international collaboration and the implementation of EU-inspired defences.
The need for more effective content moderation and fair resource allocation for election integrity is underlined, urging greater responsiveness from digital platform overseers and policymakers. The commendable educational panel is recognised for its role in enhancing public comprehension of these complex challenges within the information policy landscape.
GM
Giacomo Mazzone
Speech speed
167 words per minute
Speech length
1528 words
Speech time
550 secs
Arguments
There is a potential contradiction between U.S. regulations (First Amendment) and European regulations that META has to navigate.
Supporting facts:
- U.S. company META is governed by the First Amendment, which provides less stringent regulations on content compared to Europe.
Topics: First Amendment, European regulations, META compliance
Report
META, a U.S.-based technology firm, is navigating the complex task of complying with two fundamentally divergent legal systems: the permissive U.S. First Amendment and the stringent European content regulations. These contrasting frameworks provoke concerns about META’s ability to operate effectively across different jurisdictions.
In the United States, the First Amendment offers META considerable leeway in terms of content regulation, reflecting a strong preference for free speech and limiting government interference. Conversely, European regulations impose much stricter controls on the oversight and moderation of digital content, often prioritising individual privacy, the prevention of hate speech, and the suppression of misinformation more than American laws.
Giacomo Mazzone’s concerned stance highlights META’s regulatory tightrope, where adherence to U.S. and European laws is a balancing act of legal and cultural nuances. META’s global platform necessitates a sophisticated content moderation strategy that aligns with European demands while also respecting American constitutional protections.
The conflicting legal standards present a paradox for META, which must navigate the legal and ethical implications of free speech and content regulation in the digital landscape. The challenge is to find a middle ground that satisfies European authorities without contravening the freedoms safeguarded by U.S. law. META’s situation underscores a broader global discourse about digital rights and responsibilities, signalling the potential need for international tech companies to tailor their policies and possibly establish regional operational standards to fulfil diverse legal requirements. This scenario exemplifies the need for international dialogue and cooperation to achieve a consensus on balancing free speech with content governance, aiding the operation of platforms like META in varied regulatory settings.
It is integral to establish global norms to manage the complexities faced by such companies, ensuring they meet varying content standards while upholding the principles of free expression.
IG
Irena Guidikova
Speech speed
153 words per minute
Speech length
2110 words
Speech time
829 secs
Arguments
The oversight board’s recommendations are not compulsory
Supporting facts:
- Irena Guidikova expressed that the oversight board’s recommendations are evidently not mandatory
Topics: Oversight Board, Compliance, Recommendations
The efficiency of outreach in combating misinformation is a concern
Supporting facts:
- Irena Guidikova is concerned about the reach of debunking efforts and whether they are sufficient to counter the wider spread of misinformation
Topics: Misinformation, Outreach Strategies, Media Reach
Report
Irena Guidikova has expressed a series of concerns and inquiries focused on the operational effectiveness of oversight bodies and the strategies implemented to tackle the spread of misinformation. She highlights the nonbinding nature of the oversight board’s recommendations, hinting at a fundamental flaw in the governance system where the guidance of such bodies may not be compulsory, thereby questioning their impact and enforceability.
This aspect is vital for SDG 16, which emphasises the importance of robust institutions—where the influence of an oversight authority’s recommendations is crucial to upholding the rule of law and ensuring accountability. Additionally, Guidikova’s unease about the efficiency of misinformation debunking efforts indicates a critical level of examination.
She proposes the possibility that the propagation of misinformation could outstrip endeavours to rectify it, jeopardising the objectives of maintaining peace, justice, and strong institutions, as misinformation has the potential to cultivate public misunderstandings, erode faith in institutions, and destabilise democratic processes.
In an era where misinformation proliferates, her consternation raises serious questions about the effectiveness and scalability of the current corrective outreach strategies. Guidikova’s inquisitive stance also leads her to question how outreach strategies are specifically developed to increase their impact, suggesting a desire for more effective, preventative measures that extend beyond reactive tactics.
She appears to call for proactive, comprehensive, and potentially inventive methods in outreach that ensure the population is both educated and shielded from the detrimental effects of misinformation. This viewpoint underscores the complexities oversight bodies encounter and stimulates deeper discussion on strengthening the mechanisms that bolster peace, justice, and strong institutions.
In summary, Guidikova’s points of discussion provide a thought-provoking critique of the practical implementation of SDG 16’s ideals. Her probing questions do not just challenge but invite a more sophisticated discourse on the current frameworks and strategies’ sufficiency. Her insights are in line with the global imperative that peace, justice, and robust institutions can only be fully achieved when each element within the governance structure operates with the necessary competence, anticipation, and flexibility to address the dynamic complexities of modern society.
PG
Paula Gori
Speech speed
185 words per minute
Speech length
2207 words
Speech time
717 secs
Report
The speaker addressed the technical setup for the presentation before discussing the measures taken by the European Digital Media Observatory (EDMO) to tackle disinformation, particularly in the context of an important electoral event in the EU. The significance of countering disinformation was underscored with the establishment of a dedicated task force by the EDMO.
This task force analysed disinformation trends from previous national elections to predict the potential impact on voting procedures and decision-making. A daily newsletter was produced that reported on detected disinformation narratives throughout the EU, which frequently aimed to erode trust in EU institutions and policies.
These narratives often exploited current topics such as climate change, the rights of the LGBTQ+ community, and the situation of Ukrainian refugees, creating strife and discord. The speaker highlighted that disinformation campaigns extend beyond election periods, affecting voter perceptions over time, thereby necessitating a systematic and sustained response.
The Digital Services Act (DSA) was discussed in terms of its focus on illegal online content management, not specifically on disinformation. However, a provision within the DSA requires platforms to assess risks, including potential exploitation for spreading disinformation, indirectly pressuring the improvement of platform designs to mitigate these risks.
The “Be Election Smart” campaign, resulting from partnerships with online platforms that provided ad credits and promotional support, exemplified effective collaboration to combat disinformation. The speaker noted that the EU’s approach—promoting independent fact-checking, fostering media literacy, and encouraging research—has international relevance and could guide global responses to similar challenges.
Emphasising the need for financial sustainability, the speaker argued that without robust funding models, the capacity of fact-checkers, researchers, and civil society to counter disinformation could diminish, affecting their long-term operations. Lastly, the evolving nature of disinformation warfare was acknowledged, recognising both AI-generated content and traditional techniques, like media manipulation via false captions or edited videos, as persistent threats due to their sheer efficacy.
In summary, the EU’s concerted, cross-border collaboration and resource-sharing efforts were deemed essential to combat disinformation. Despite challenges in policy harmonisation and technological progress, there is a strong conviction for global cooperation and a systematic, financially stable approach to address the ongoing issue of disinformation.
R
Reporter
Speech speed
186 words per minute
Speech length
438 words
Speech time
141 secs
Report
Today’s session comprehensively addressed the challenge of disinformation, particularly in light of the European electoral year, highlighting its potential to intensify societal divisions. The absence of prominent disinformation campaigns leading up to the elections was noted, yet the risk remains, fuelled by our deeply polarised societies.
Subtle and continuous manipulation tactics emerge as potentially more pernicious than blatant acts of disinformation. The fundamental right of voters to access and share truthful political content was emphasised, along with the need for rigorous scrutiny and validation to maintain its integrity.
To combat disinformation, a nuanced strategy was championed, involving the establishment of impartial, transparent fact-checking entities and international efforts aimed at the demonetisation of fake news, thereby removing the economic incentives for its propagation. Partnerships with digital platforms are crucial for exchanging databases, bolstering research capabilities, and promoting initiatives that empower users.
These platforms are key in fostering education, critical thinking, and media literacy, enabling users to more effectively discern and dismiss falsehoods. An important suggestion was to enhance existing frameworks by transforming the voluntary code of practice into a binding code of conduct.
This adaptation would endow the code with enforceable obligations and standards, requiring concerted enforcement and monitoring in collaboration with major platforms to guarantee uniform application and measurable outcomes. Innovative voter engagement strategies were discussed, including utilising popular media, such as engaging television programmes, to raise public awareness and participation in matters of information veracity.
Such programmes must be designed to captivate a wide audience while imparting essential knowledge about media discernment. The session concluded with a call for a multidimensional, inclusive response to disinformation, involving stakeholders at every level—international, national, regional, and local—and spanning various disciplines.
Emphasis was placed on developing a governance framework that is both multilevel and multilingual, catering to a heterogeneous populace and potentially establishing benchmarks for the international community, particularly the global South. As the meeting came to a close, the consensus was that the antidote to disinformation is multifaceted, complex, yet collaborative and preventive.
A shared platform for continuous dialogue and improvement of the discussed points was underlined, facilitating additional contributions and explanations post-session. The overarching mood highlighted the vital urgency of the problem and a collective dedication to address it through joint effort and international alliance.