Youth talk about IoT security and AI misuse
7 Dec 2021 16:00h - 17:30h
Event report
This session presented a youth perspective on the question: Are we paying enough attention to the cybersecurity of products powering the internet of things (IoT) and the development of artificial intelligence (AI) for general purposes? The aim of the session was to encourage young people to contribute to policy creation and future developments in these fields. It looked at possible global standards and best practices for the security of such devices and the future connected world, and at the challenges currently facing us.
Younger generations are often under-represented in such forums, even though they are one of the main driving forces behind the rapid adoption and use of technology. They are often the biggest user group for online platforms and IoT devices.
Mr Oarabile Mudongo (Research ICT Africa) reminded us of the serious risks of bias in particular technologies and in AI development. He pointed to AI-based facial recognition technology (FRT) and surveillance systems being developed across the African continent.
A generational policy dialogue should be the starting point for putting the enormous power of AI to work for the benefit of all. One way to initiate this shift is to invest more in national AI initiatives (as opposed to private ones), which can produce more universal benefits. Until that happens, we need to address the gaps in AI's capacity to augment human skills.
The digitalisation of our society can also be observed in the rapid development and uptake of COVID-19 tracking apps. Yet we do not see comparable effort in creating policy solutions that take users' privacy into account, particularly in less developed countries. The first challenge is the centralisation of power in companies that use AI and big data. The second challenge is to establish proper AI policies worldwide. This would include the creation of industry norms for privacy and security.
Knowing that 70% of internet users are young people, it is unusual not to have them involved in the development of future solutions, as pointed out by Nicolas Fiumarelli (Software and Web Engineer at LACNIC, the regional internet registry for Latin America and the Caribbean). AI models that learn to mimic user behaviour and make decisions are often black boxes; we do not know much about how they work. It is sometimes hard to explain how they work even to a computer scientist, let alone to a non-technical person. We need a deeper understanding of the processes happening inside these systems so we can intervene if, or when, needed. This is particularly dangerous when we consider the development of Lethal Autonomous Weapons Systems (LAWS).
AI systems are often trained on false or biased data, which brings another challenge to this issue. Fiumarelli mentioned in particular the 30% error rate of FRT when processing the faces of people with darker skin.
Ihita Gangavarapu (Youth IGF India, board member of ITU Generation Connect) drew a parallel between IoT and AI systems and the human body: sensors measure and monitor data from their surroundings and send it to one centre, which decides what to do next. This data needs to be uncontaminated for the system to make the right decision, which is why it is so important to secure IoT devices at all levels. Tampered-with IoT devices can also be used to launch attacks on other parts of the system. The challenge of securing such devices lies in cost: they are usually small and cheap, leaving no room for sophisticated encryption or other defences. Gangavarapu also pointed out the importance of balanced policies around AI development, especially as we need to be prepared for a future in which we are connected via a myriad of IoT devices.
Sávyo Vinícius de Morai (Systems Administrator at Instituto Federal do Rio Grande do Norte) talked about the specific use case of IoT in the home environment. Most devices operate via the cloud: to use them, we send a large amount of private data to the manufacturers or cloud operators. In addition, the availability of these devices relies on internet connections, which can lead to various issues, for example, door locks not working because the internet is down. End users are not security experts, so standards and norms should be developed to secure such devices at a basic level – for all users, not only the tech-savvy.
The session was moderated by Juliana Novaes (Institute for Internet & the Just Society, Big Data and Antitrust researcher) who, in the ensuing dialogue with the audience, concluded that younger generations should be included at the negotiation table when security and ethical standards are established and put in place.
By Arvin Kamberi
Automated summary
Diplo’s AI Lab experiments with automated summaries generated from the IGF sessions. They will complement our traditional reporting. Please let us know if you would like to learn more about this experiment at ai@diplomacy.edu. The automated summary of this session can be found at this link.
Related event
Internet Governance Forum (IGF) 2021
6 Dec 2021 10:00h - 10 Dec 2021 18:00h
Katowice, Poland and Online