Google warns of generative AI dangers
Generative AI misuse, from deceptive content to impersonation, poses significant regulatory challenges, according to Google’s recent research.
A recent research paper from Google warns that generative AI misuse is already distorting collective understanding of socio-political reality and scientific consensus. The paper, titled ‘Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data,’ was co-authored by researchers from Google DeepMind, Jigsaw, and Google.org.
It categorises various ways generative AI tools are misused, analysing around 200 incidents reported in the media and academic papers between January 2023 and March 2024. Unlike warnings about hypothetical future risks, this research focuses on the real harm generative AI is currently causing, such as flooding the internet with generated text, audio, images, and videos.
The researchers found that most AI misuse involves exploiting system capabilities rather than attacking the models themselves. This misuse blurs the line between authentic and deceptive content, undermining public trust: AI-generated content is being used for impersonation, creating non-consensual intimate images, and amplifying harmful content. Crucially, these activities often do not violate the terms of service of AI tools, which poses a significant challenge for regulating misuse.
Google’s research also emphasises the environmental impact of generative AI. The growing integration of AI into products and services drives up energy consumption, making emissions harder to reduce. Despite efforts to improve data centre efficiency, the overall rise in AI use has outpaced those gains. The paper calls for a multi-faceted approach to mitigating AI misuse, involving collaboration between policymakers, researchers, industry leaders, and civil society.