OpenAI uncovers misuse of AI in deceptive campaigns
The maker of ChatGPT says the operations aimed to manipulate public opinion and influence political outcomes.
OpenAI, led by CEO Sam Altman, announced it had disrupted five covert influence operations that misused its AI models for deceptive activity online. Over the past three months, actors based in Russia, China, Iran, and Israel used its AI to generate fake comments, articles, and social media profiles. The operations targeted issues including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, and politics in Europe and the US.
Despite these efforts, OpenAI said the deceptive campaigns did not achieve meaningfully increased audience engagement or reach as a result of its services. The company emphasised that the operations used a mix of AI-generated and manually created content. The announcement highlights ongoing concern about the use of AI to spread misinformation.
In response to such threats, OpenAI has formed a Safety and Security Committee, led by board members including CEO Sam Altman, to oversee the training of its next AI model. Meta Platforms, meanwhile, reported similar findings of likely AI-generated content used deceptively on Facebook and Instagram, underscoring the broader problem of AI misuse across digital platforms.