Google takes steps to regulate deepfake content on YouTube
The policy emphasises disclosure by creators when their content manipulates reality, including the labelling of AI-generated electoral ads. It also enables content creators to watermark their material using Google’s tools.
Google is formulating a policy to guide content creators on the responsible use of synthetic content, particularly deepfakes, on platforms like YouTube.
The policy focuses on creator disclosures and labelling of AI-generated content, with plans for disclaimers on YouTube both in video descriptions and within the videos themselves. Punitive measures for non-compliance were not detailed, but existing policies include suspending accounts and removing content that violates Google’s guidelines.
Saikat Mitra, VP and Head of Trust and Safety for Asia-Pacific at Google, highlighted the need for nuanced regulation, emphasising the positive potential of AI-generated synthetic content.
Why does it matter?
Attention to synthetic content increased after deepfake videos were used to target notable figures on various social media platforms. Google already requires advertisers globally to disclose the use of deepfakes in electoral content. The Indian government is also actively considering regulations to prevent the misuse of deepfakes, signalling a proactive stance rather than reliance solely on tech companies to address the issue.