DCCOS Risk, opportunity and child safety in the age of AI | IGF 2023
Event report
Speakers and Moderators
Speakers:
- Liz Thomas, Microsoft, Private sector, WEOG (onsite)
- Sophie Pohle, Deutsches Kinderhilfswerk e.V., Civil Society, WEOG
- Katsuhiko Takeda, Childfund Japan, Civil Society, Asia Pacific
- Jenna Fung, Asia Pacific Youth IGF, Civil Society, Asia Pacific
- Patrick Burton, Centre for Justice and Crime Prevention, Civil Society, Africa
Moderators:
- Amy Crocker, ECPAT International, Civil Society, Asia Pacific
- Jim Prendergast, The Galway Strategy Group, Civil Society, WEOG
Event description
The overarching theme of IGF 2023 in Kyoto is The Internet We Want – Empowering All People. Our starting point for this discussion is clear: there can be no empowerment on the Internet without a foundation of safety, and the Internet we want, and need, is one where children’s rights are protected. With AI and the Metaverse on the agendas of governments and increasingly embedded in the lives of digital platform users worldwide, tech legislation and safety policy are at a critical moment of transition. Different types and applications of these new and frontier technologies have the power to be transformative in positive ways, as well as potentially harmful in ways we can and cannot yet predict, including for children and young people. As a result, and as seen in other areas of technology, governments often find themselves playing catch-up, struggling to define the proper guardrails for their use across diverse environments, from social media and online gaming to EdTech.

Society will only be able to harness the benefits of the ongoing technological transition based on AI when proper safeguards are in place. We need to build a shared understanding of the risks and of how we can develop the right safeguards for children and young people. Nowhere has the misalignment between what is technically possible, socially acceptable and legally permissible been exemplified more clearly than in the debate around generative AI models. Indeed, between the date of submitting this proposal and the delivery of the session in October 2023, the societal, legal and other debates around this topic are likely to undergo further rapid change. At the same time, there is a risk that conversations around AI as a ‘new’ or ‘complex’ landscape distract from the foundational safety issues that already put children in harm’s way in digital spaces that were not designed for them.
For example, virtual worlds powered by AI and characterized by anonymity and open access will expand the opportunities for people to exploit children and other vulnerable groups, as law enforcement has already seen. For children, the psychological impact of abuse experienced in virtual worlds will present new and likely more intense forms of trauma. If a core goal of the Metaverse is to blur or remove the boundaries between physical and virtual realities, the differences between physical hands-on abuse and virtual abuse will vanish, with a hugely negative impact on victims and society at large. Either way, the principles and models underpinning AI and the Metaverse are mediums in which child protection must be addressed in a holistic, rights-based and evidence-informed way. This will inform safety policy and awareness-raising among children and parents, and help ensure global alignment between regulation for safety and regulation for AI, avoiding fragmentation and inefficiencies in our collective response. This approach is also grounded in General comment No. 25 (2021) on children’s rights in relation to the digital environment,[1] which obliges state parties to protect children in digital environments from all forms of exploitation and abuse.

This session will:
1. Discuss whether and how different approaches to regulation are needed for different digital spaces such as social media, online gaming, communications platforms and EdTech.
2. Discuss how existing safety nets and messaging meet the needs, aspirations and challenges voiced by children, young people and parents around the world.
3. Address the following policy questions:
   - How do you design robust and sustainable child safety policy in a rapidly changing tech landscape?
   - How do you create meaningful dialogue around the design, implementation and accountability of AI towards children and society?

Goals / outputs of the session:
1. Identify the main impacts of AI and new technologies on children globally.
2. Understand young people’s own perceptions of risks in virtual worlds.
3. Create the basis for DC principles of AI regulation for child safety.
4. Initiate DC messaging for parents to support their children in the digital space.
5. Co-construct DC guidelines for modern, child rights-oriented child and youth protection in the digital environment.

[1] https://www.ohchr.org/en/documents/general-comments-and-recommendations…
The session will be run as a roundtable, with speakers guiding the key topics of conversation but an inclusive approach to discussion and idea-sharing. The online moderator will ensure a voice for those attending online, and the use of online polls and other techniques will ensure an effective hybrid experience. To answer the questions directly:
1. We will facilitate interaction between onsite and online speakers and attendees by inviting comments from both groups and bringing questions or comments from online attendees into the room.
2. We have designed the session as a roundtable to ensure that both online and onsite participants have the chance to have their voices heard.
3. We aim to use online surveys/polls to ensure an interactive session.