Developing policy guidelines for AI and child rights
28 Nov 2019 12:00h - 13:00h
Event report
The need to incorporate children’s rights into artificial intelligence (AI) policies was the underlying message of this session.
Most research on AI does not take into account issues of child safety or children’s rights, noted Ms Sandra Cortesi (Director of Youth and Media, Berkman Klein Center for Internet & Society). This was seconded by Mr Steven Vosloo (Policy Specialist, Digital Connectivity Policy Lab, UNICEF).
Vosloo emphasised the need to focus more on children, as they are both vulnerable and full of potential.
Mr Sabelo Mhlambi (Fellow, Berkman Klein Center) expressed concern that, with 70% of the African continent’s population under 30 and almost 50% under 18, there is not enough focus on elements such as how to protect children’s rights; the future of work on the continent; and whether the use of AI helps reduce existing disparities in gender, the economy, literacy, opportunities, and other areas.
Cortesi raised concerns about privacy and safety in existing educational contexts, citing examples of schools in Asia that incorporate sensors and collect children’s data en masse. Cortesi shared an overview of a project on ethics and the governance of AI. A recent report from the Berkman Klein Center, ‘Youth and artificial intelligence: Where we stand’, highlights AI uses in education, health and well-being, the future of work, privacy and safety, and creativity and entertainment. Cortesi questioned whether AI is bringing benefits across the world or is, in fact, widening the digital divide. To realise the potential and benefits of AI, Mhlambi mentioned the need to keep in mind the potential harms of algorithms, ensure meaningful dialogue with children, and empower children to become digitally literate members of the community.
Ms Jasmina Byrne (Chief, Policy Lab, UNICEF) pointed out that the next 30 years will bring significant changes to children’s lives, particularly with the development of AI technologies and AI systems.
On best practices, Mr Armando Guio (Advisor, Colombian Ministry of ICT) highlighted the AI strategy designed in Colombia, which, among other elements, tackled issues related to the future of work and implications for youth. Vosloo shared thoughts on UNICEF’s plan for 2019–2021, noting that UNICEF is working on draft principles on AI and child rights.
A participant pointed to the role of parents in protecting children from abuses associated with the use of AI technologies. Cortesi opined that rather than overemphasising the role of parents, all adults around youth should be involved.
On the role of intermediaries, Ms Karuna Nain (Global Safety Policy Lead, Facebook) highlighted how AI and machine learning are used to moderate content based on community standards. She spoke about the three signals used in newsfeeds (post owner, post content, interaction with similar posts) that have helped extensively to protect children online. A participant expressed concern that the multiple tools developed by different companies could lead to a fragmented approach to protecting children. Creating a pool of open source tools that could be used by all was proposed as an alternative.
By Amrita Choudhury