Hate groups use AI to spread antisemitic disinformation
Hate groups are exploiting AI amid conflicts such as the Israel-Hamas war, using AI tools to create harmful content that specifically targets vulnerable communities. This trend follows a surge in antisemitic incidents, which has prompted law enforcement warnings.
AI has been used as follows:
- Content Creation: AI tools are used to generate images and audio that spread hateful messages targeting vulnerable communities, including antisemitic depictions and memes that often rely on old stereotypes and tropes.
- Voice Manipulation: AI software is utilized to mimic people’s voices, enabling the creation of fake calls or audio messages that imitate well-known figures or spread hate speech during public meetings. This technology allows hate groups to disseminate false information or promote antisemitic messages under the guise of respected individuals.
- Content Proliferation: Hate groups leverage AI-generated content to amplify their messages across various online platforms. Despite efforts by major platforms to curb such content, these groups find ways to bypass restrictions, sometimes even re-uploading removed content.
- Disruption of Public Forums: Extremist groups use AI-generated voices or manipulate technology to disrupt public meetings, city council gatherings, or online events, injecting hate speech or antisemitic rhetoric into these spaces.
- Instructions: Hate groups have also shared guides on using freely available AI image generation tools to create antisemitic depictions.
Despite online platforms’ moderation efforts, AI continues to be misused to create and disseminate harmful content, particularly during conflicts. Hate groups also combine these digital tactics with traditional methods, distributing antisemitic flyers alongside the online campaigns described above.
Why does this matter?
AI is now widely used to produce many kinds of content, from emails to news articles, and to power image and video editing tools. The ease of generating such content raises concerns about how quickly fabricated misinformation can spread. As AI becomes better at simulating reality, the problem of deepfakes and other AI-generated visual and audio content grows increasingly severe. While governments and online platforms are putting safeguards in place, preventing and investigating these abuses of AI remains difficult.