Implementations of AI to advance the SDGs (Panel 4): Safe and secure AI
19 May 2018 02:00h
Event report
The panel discussion was co-organised by the ITU and the UN Institute for Disarmament Research (UNIDIR) and moderated by Mr Thomas Wiegand (Professor, TU Berlin, Executive Director, Fraunhofer Heinrich Hertz Institute).
In his opening remarks, Wiegand mentioned two challenging aspects of the development of artificial intelligence (AI): the engineering perspective as well as the ethics of it. AI should reflect what society expects from it, but it must also come equipped with important safety measures.
Mr Robert Kirkpatrick (Director, UN Global Pulse) opened the discussion by introducing five tools related to refugees. These tools range from recognition software able to identify xenophobic content about refugees on social media, to early-warning systems for vessels in the Mediterranean, to satellite imagery support.
UN Global Pulse has been working on guidelines for the use of AI which have been adopted by a variety of UN agencies. For Kirkpatrick, the widely accepted principle of ‘do no harm’ has two aspects that need to be taken into account in the development of AI tools. The first implication is that no direct harm should come from the use of a particular technology. More importantly, the principle also requires that every reasonable step be taken to prevent harm from happening. So far, privacy regulations fall short of establishing a satisfying level of protection. Indeed, nuclear technology regulation could serve as an example of how to regulate the use of AI.
Mr Rob McCargow (Programme Leader Artificial Intelligence and Technology, PwC UK) foresaw that the greatest impact of AI on society will come when the private sector widely adopts AI technology. Its application will then range from the medical sector to the financial sector and truly change society.
He cited some figures from PwC’s CEO survey, which is conducted at Davos every year, showing that:
- 72% of CEOs believe that AI will be a business advantage in the future; and
- 67% of CEOs believe that AI will have a negative impact on stakeholder trust.
Thus, according to the speaker, the use of AI for good would be severely damaged if the disruptive aspects of AI gain traction without being addressed early on. He further noted that AI will fail if it is viewed solely as a standalone technology development. AI, alongside other technologies, will have severe workforce implications, and companies therefore need to prepare for it in a multidisciplinary and multistakeholder fashion.
Once it can be guaranteed that AI is safe, its potential for good will be unlocked. McCargow said that so far there is not enough appropriate governance in businesses to ask the right questions about the use and implementation of new technologies.
Mr Wojciech Samek (Head of Machine Learning Group, Fraunhofer Heinrich Hertz Institute) noted that one of the challenges of embracing AI for good stems from the fact that we do not understand how and why AI arrives at certain conclusions. To some extent, it can be viewed as a black box, where we fail to understand why certain methods work or fail. In order to build trust, it is therefore important to know and understand how these processes work and to provide researchers with tools to interpret AI-generated results.
The interpretability of outcomes is also very important in terms of legal accountability and scientific progress. Results obtained through the use of AI need to be explainable and reproducible in order to unfold their full potential.
Tentative steps in that direction have been taken by Samek and his team, who developed an application that visualises how AI image recognition operates. The AI algorithms were fed images of animals to be recognised and classified automatically by the software. Through the application, the researchers were able to identify the areas of the image that the algorithm had analysed to recognise the animal. They discovered that the software did not analyse the shape and features of the depicted animal, but rather scanned the small copyright signs at the bottom of the image: the AI had learned to associate the copyright sign with the animal rather than recognising the animal itself. Samek pointed to the importance of being able to verify the predictions made by AI and to know how it comes to its conclusions.
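To illustrate the general idea (not the specific tool described above), the following is a minimal sketch of a gradient-based saliency map. It assumes PyTorch and torchvision are available; the pretrained ResNet-18 and the image file are hypothetical placeholders. If the pixels with the largest gradient magnitude cluster on a corner watermark rather than on the animal, the classifier is relying on exactly the kind of spurious cue the researchers found.

```python
# Minimal sketch of a gradient-based saliency map for an image classifier.
# Assumptions: PyTorch and torchvision are installed; the pretrained ResNet-18
# and the image file "animal.jpg" are illustrative placeholders, not the
# application built by Samek's team.
import torch
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image

model = models.resnet18(pretrained=True)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

img = preprocess(Image.open("animal.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

scores = model(img)                       # class scores for the image
top_class = scores.argmax(dim=1).item()   # predicted class index
scores[0, top_class].backward()           # gradient of that score w.r.t. pixels

# Per-pixel gradient magnitude shows which regions drove the prediction;
# a heatmap concentrated on a watermark instead of the animal exposes a shortcut.
saliency = img.grad.abs().max(dim=1)[0].squeeze()
print(saliency.shape)  # torch.Size([224, 224])
```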
During his speech, Mr Toufi Saliba (AI Decentralized) indicated that the way in which we judge the data will always be subjective. Our expectations of the outcome will always be biased in a certain way, and we therefore have to look at feeding the learning patterns more precise data to teach the software how to come to our expected conclusions.
The criteria for AI’s operability should thus not be solely result-oriented, but should instead focus on the input we provide it with.
Saliba further questioned our understanding of AI by asking what the audience would consider to be AI before stating that Bitcoin could be considered a form of AI because of its modus operandi: a machine that is incentivised to compete for resources and is not owned or directly controlled by humans.
According to Saliba, the question of regulating AI is central because it will define whether AI can liberate humanity or become one of its greatest challenges. Ethical considerations therefore need to be built in from its inception.
Mr Andy Chen (VP of Professional and Educational Activities, Institute of Electrical and Electronics Engineers (IEEE) Computer Society) spoke about the necessity to incentivise young professionals to build ethics into their AI developments.
He then introduced the Mind AI project, a linear qualitative research process whose results can easily be traced back by the researchers, and which works on the basis of natural language. Through this open-source project, accessible to everyone, AI will help to democratise progress.
He informed the audience about some ethics projects surrounding AI from Stanford University in the US and the IEEE’s global initiative on ethical design for autonomous and intelligent systems. The IEEE’s initiative has launched a call for papers for its second edition.
Ms Susan Oh (Chair of AI, Blockchain for Impact UN GA, Founder & CEO, MKR AI) briefly introduced MKR AI, which she developed as a fact-checking system that tracks patterns of deception. The platform operates through input from users, who validate or invalidate information that has been analysed on the website. If certain facts or methodologies are proven to be less accurate than those of the platform, users are rewarded with tokens.
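As a purely illustrative sketch of such a reward mechanism (the class, reward value, and simple-majority rule below are hypothetical and not taken from MKR AI), one could imagine a ledger that credits users whose validation matches the eventual consensus:

```python
# Purely illustrative sketch of a token-reward scheme for crowd fact-checking.
# The ValidationLedger class, reward value, and simple-majority rule are
# hypothetical assumptions, not the mechanism used by MKR AI.
from collections import defaultdict

class ValidationLedger:
    def __init__(self, reward_per_match=1):
        self.reward_per_match = reward_per_match
        self.balances = defaultdict(int)   # user -> token balance
        self.votes = defaultdict(dict)     # claim_id -> {user: vote}

    def submit_vote(self, claim_id, user, is_valid):
        """Record a user's judgement on whether a claim is valid."""
        self.votes[claim_id][user] = is_valid

    def settle(self, claim_id):
        """Reward users whose vote matches the simple-majority outcome."""
        votes = self.votes[claim_id]
        if not votes:
            return None
        consensus = sum(votes.values()) > len(votes) / 2
        for user, vote in votes.items():
            if vote == consensus:
                self.balances[user] += self.reward_per_match
        return consensus

ledger = ValidationLedger()
ledger.submit_vote("claim-42", "alice", True)
ledger.submit_vote("claim-42", "bob", True)
ledger.submit_vote("claim-42", "carol", False)
print(ledger.settle("claim-42"), dict(ledger.balances))
# True {'alice': 1, 'bob': 1}
```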
The speaker noted that machine learning and AI will rely heavily on blockchain as they progress. On the other hand, blockchain also needs AI in order to validate or signal anomalies in the ledger.
Furthermore, if people have sovereignty over their data, they can volunteer to share it and be rewarded in the form of tokens that could be used for their personal benefit. In this way, the evolution of AI would be easier to regulate than through existing methods such as hard laws, because regulations tend to be unable to determine what to track and are difficult to enforce.
According to Oh, tokenising societal processes benefits the development of AI because it helps AI better understand human interactions all the while benefiting all the parties involved. AI systems in combination with cryptocurrencies and other types of blockchains will provide a more transparent way of operating within society and incentivise collaboration among users.