Congress called upon to safeguard US children online amidst growing AI threats
US attorneys general urge Congress to combat AI-generated child sex abuse material (CSAM), citing prosecution challenges and gaps in existing law.
The attorneys general of all 50 US states and four US territories have signed a letter urging Congress to take action against AI-generated CSAM. The signatories warn that AI technology may complicate the prosecution of individuals who exploit children online, and they are calling for legislation targeting CSAM to be updated to explicitly cover AI-generated content. Only a few states, including New York, California, Virginia, and Georgia, have laws prohibiting sexually explicit AI deepfakes, while Texas was the first state to ban AI deepfakes intended to influence political elections.
Why does it matter?
Although major social platforms prohibit AI deepfakes, the circulation of sexually exploitative deepfakes remains a concern. In March, for example, an application claiming to let users swap faces into suggestive videos ran more than 230 advertisements on Facebook, Instagram, and Messenger. Meta removed the ads promptly after being alerted by NBC News reporter Kat Tenbarge.