Tech companies voice concerns over proposed safety standards in Australia
New safety standards proposed in Australia may hinder the ability of generative AI systems to detect and flag online child abuse and pro-terrorism material, tech companies warn.
Tech companies, including Microsoft, Meta, and Stability AI, are raising concerns about proposed Australian safety standards, claiming that they may impede generative AI systems’ ability to detect and flag online child abuse and pro-terrorism material. The draft standards, released by eSafety Commissioner Julie Inman Grant, require providers to remove such content ‘where technically feasible.’
WeProtect Global Alliance, a non-profit consortium targeting online child sexual exploitation and abuse, supports the standards, noting the potential misuse of open-source AI for creating illegal content. However, tech companies argue that their technologies have built-in guardrails to prevent such misuse.
Microsoft warns that the proposed standards, as drafted, could limit the effectiveness of AI safety models designed to detect and flag child abuse or pro-terror material, arguing that exposure to relevant content during training is necessary to build precise and nuanced content moderation tools. Stability AI echoes these concerns, emphasising that AI will play a crucial role in online moderation and that overly broad definitions could make compliance with the proposed standards difficult.

Meta, the parent company of Facebook, points to the difficulty of enforcing safeguards once users have downloaded an AI model such as Llama 2. Google recommends excluding AI from the standards altogether and addressing it instead through the current government review of the Online Safety Act.

The companies also call for clarification that requirements to scan cloud and messaging services ‘where technically feasible’ will not compromise encryption. Inman Grant has responded that the standards will not weaken encryption or require the indiscriminate scanning of personal data. Final versions of the standards are expected to be presented to parliament later this year.