The EU rolls out guidelines to combat election disinformation
Social media platforms and search engines will be required to set up dedicated teams to tackle online disinformation during election periods.
Within weeks, the EU is set to implement strict guidelines holding major online platforms accountable for lax moderation practices, signalling a departure from years of self-regulation in the industry. These guidelines, expected to be adopted by the European Commission, will target the spread of election disinformation and threats to electoral integrity. Platforms such as TikTok, X, YouTube, Snapchat, and Meta’s Facebook could face fines of up to 6% of their global turnover if they fail to adequately address AI-powered disinformation or deepfakes.
With European elections approaching in June, concerns over potential destabilising attacks from foreign actors, particularly Russian agents, have prompted the EU to take decisive action. The legal initiative marks a significant shift in how online platforms are regulated in Europe, focusing on combating systemic risks such as mass manipulation campaigns and the dissemination of fake content.
Social media platforms and search engines will be required to establish dedicated teams to monitor disinformation risks in 23 languages across the EU’s 27 member states, collaborating closely with cybersecurity agencies.
Although broadly drafted, the guidelines are legally enforceable under the Digital Services Act (DSA), the landmark legislative framework governing Big Tech’s responsibilities for policing the internet. EU officials emphasise that platforms must either comply with the guidelines or provide transparent explanations for alternative risk-mitigation measures.
Commissioner Thierry Breton has underscored the importance of 2024 as a pivotal year for elections, stressing that major online platforms and search engines must implement measures to safeguard electoral processes, particularly with regard to generative AI content.