FBI charges man with creating AI-generated child abuse material
Child safety advocates warn about the increasing use of AI in creating explicit deepfakes of children.
A Wisconsin man, Steven Anderegg, has been charged by the FBI with creating more than 10,000 sexually explicit and abusive images of children using AI. The 42-year-old allegedly used the popular AI tool Stable Diffusion to generate around 13,000 hyper-realistic images depicting prepubescent children in disturbing and explicit scenarios. Authorities discovered the images on his laptop following a tip-off from the National Center for Missing & Exploited Children (NCMEC), which had flagged his Instagram activity.
Anderegg’s charges include creating, distributing, and possessing child sexual abuse material (CSAM), as well as sending explicit content to a minor. If convicted, he faces up to 70 years in prison. The case marks one of the first instances in which the FBI has charged someone over AI-generated child abuse material. The rise in such cases has prompted significant concern among child safety advocates and AI researchers, who warn that AI is making it increasingly easy to create harmful content.
Reports of online child abuse have surged, partly due to the proliferation of AI-generated material. In 2023, the NCMEC noted a 12% increase in flagged incidents, straining its resources. The Department of Justice has reaffirmed its commitment to prosecuting those who exploit AI to create CSAM, emphasising that AI-generated explicit content is equally punishable under the law.
Stable Diffusion, an open-source AI model, has been identified as a tool used to generate such material. Stability AI, the company behind its development, has said that the version used by Anderegg was an earlier release created by another startup, RunwayML. Stability AI asserts that it has since implemented stronger safeguards against misuse and prohibits the use of its tools to create illegal content.