Breton calls on Big Tech to help protect European elections from deepfakes
Breton said guidelines will be issued by March to help platforms counter electoral disinformation, including calls for a rapid reaction mechanism and simulation exercises.
Large tech platforms such as TikTok, X, and Facebook will soon be required to identify AI-generated content to safeguard the upcoming European election from disinformation. Speaking in Strasbourg on Wednesday, Internal Market Commissioner Thierry Breton highlighted the need for comprehensive measures to tackle disinformation during the electoral period, emphasising that half-measures will not suffice.
The exact timeline for requiring the labelling of manipulated content under the Digital Services Act (DSA) remains unspecified. Breton oversees the Commission branch responsible for enforcing the DSA against major social media and video platforms operating in Europe, including Facebook, Instagram, and YouTube.
In support of this objective, Breton announced that the Commission will release guidelines for very large online platforms by March, setting out the actions they must take under the DSA to counter electoral disinformation and safeguard the integrity of elections. Breton stressed the importance of platforms establishing a ‘rapid reaction mechanism for any kind of incident’ and underscored the need for simulation exercises to ensure the effectiveness of these systems.
Why does it matter?
Political campaigns worldwide have witnessed the emergence of easy-to-create deepfakes, raising concerns about the authenticity of information. Notably, an audio deepfake impersonating US President Joe Biden caused alarm among politicians in a year packed with elections. Under mounting pressure, companies such as OpenAI and Meta have committed to marking fake images and labelling AI-generated content.