Australia approves code to prevent sharing of AI-generated child sexual abuse material
Search engines will be required to ensure that such content is not returned in search results and that AI functions built into them cannot produce synthetic versions of the material (deepfakes).
Australia’s internet regulator, eSafety Commissioner Julie Inman Grant, has announced a new code that will require search engines such as Google and Bing to take steps to prevent the sharing of child sexual abuse material created by AI. The code, drafted by industry giants at the government’s request, mandates that search engines ensure such content does not appear in search results. It also requires that AI functions integrated into search engines not produce synthetic versions of this material, known as deepfakes.
Inman Grant highlighted that generative AI has proliferated rapidly, catching the world off guard. The initial code the companies drafted did not adequately cover AI-generated content, prompting the need for a new version. The Digital Industry Group Inc., which represents Google and Microsoft, expressed satisfaction with the regulator’s approval of the code, noting that it had worked to incorporate recent developments in generative AI and to codify best practices for the industry.
Earlier this year, the regulator registered safety codes for other internet services, including social media, smartphone applications, and equipment providers, which will take effect in late 2023. Safety codes for internet storage and private messaging services, however, are still being developed and have faced resistance from privacy advocates globally.
Why does it matter?
The regulator’s approval of an industry-drafted code demonstrates a collaborative effort to address this issue. The new code reflects the evolving regulatory and legal landscape surrounding internet platforms as products capable of automatically generating lifelike content proliferate. It also highlights search engines’ growing responsibility for preventing the circulation of harmful AI-generated material.