Meta Platforms develops labels for AI-generated content
The move is part of Meta’s commitment to AI safety and follows a joint pledge by several tech companies to implement safeguards, including watermarks.
Meta Platforms is developing labels that allow creators to identify images produced using its AI technology. Developer Alessandro Paluzzi shared screenshots on social media of an in-app message indicating that Instagram posts created with generative AI tools will be labelled as such. The labels will specify that the content was ‘generated by Meta AI’ and that ‘content created with AI is typically labelled for easy detection.’
This development follows a commitment by various companies, including Meta, Google, Microsoft, and OpenAI, to adopt AI safety measures such as watermarks. Recently, Meta and Microsoft released the Llama 2 AI model for research and commercial use, underscoring their open-source approach to AI development and their stated aim of promoting transparency and providing resources for responsible AI use. President Biden praised these commitments but acknowledged that further collaboration and work on AI safety are needed.
Why does it matter? The international community has voiced concerns about AI chatbots’ ability to produce sophisticated text and visuals rapidly, which can fuel the spread of disinformation. Eric Schmidt, former CEO of Google, has warned of the potential for widespread misinformation around the 2024 US election. Labelling AI-generated content could play a crucial role in curbing the misuse of AI for malicious purposes, including deepfake videos and deceptive content. The move promotes responsible AI practices, supports transparency in content creation, and fosters user trust and awareness.