Building trust for beneficial AI: Developer communities
17 May 2018 02:00h
Event report
The session was opened by Ms Claire Craig (Director of Science Policy, the Royal Society), who explained that trust is an issue which crosses national boundaries; different cultures may have different understandings of the notion of trust, and it is important to appreciate these differences in order to develop trusted applications.
Mr Liu Zhe (Professor, Peking University) spoke about cultural differences in how people trust artificial intelligence (AI). Trust in AI, he argued, must be considered in the context of existing technology and the progress that is plausible in the foreseeable future; basing the discussion on science fiction is dangerous. Liu then went on to discuss how to scope the problem of trust in AI and robots.
He mentioned that in China and other Asian cultures, people seem enthusiastic about AI and other emerging technologies. This enthusiasm may lead to over-trust in technology, which involves a degree of deception in the interaction between humans and technology. The risks of over-trust and misplaced trust are very high, and we need to address them when we think about the relationship between humans, AI, and robots.
He emphasised the importance of distinguishing between mistrust and misplaced trust or over-trust. He then explained that, when we think about the notion of trust, we consider it largely from the perspective of personal relations. But is it appropriate to look at the relationship between humans and technology as a type of interpersonal relation? Should we insist on using trust as an appropriate framework to conceptualise our relation to beneficial AI? If not, what is the alternative?
Answering a question from the audience about how we can measure trust, Liu noted that, before measuring, we should understand the relationship between humans and AI and what it entails. In his view, it is not clear whether we should use ‘trust’ as a framework to assess this relationship. In a follow-up comment, a participant asked whether trust in AI is not a question of trust in other human beings (i.e. the programmers or engineers building the application, the company, the government, etc.) rather than a question of trust in the technology itself. The same applies when we talk about ethics in AI: the discussion is about the ethics of how the engineer designs the system.
Ms Kanta Dihal (Research Project Coordinator, Leverhulme Centre for the Future of Intelligence, University of Cambridge) presented the AI Narratives project, which examines the stories we tell about AI and the impact they have on the technology and its use. The goal of the project is to understand the hopes and fears that shape how we perceive AI, and the relationship between how we imagine the technology and the technology itself.
Dihal noted that the impact of AI will be global and that, because of this, managing AI for the benefit of all requires international and multidisciplinary co-operation. But different cultures see AI differently. To build trust across cultures, we must understand the different ways in which AI, and what it could do, is perceived.
She also pointed out that there might be limitations in the way we talk about AI; for example, we might be distracted from the real problems by science fiction, fantasies, and the fear of ‘killer robots’. Narratives of rebellion seem to significantly shape our fears about intelligent machines. And this reveals a paradox: we want clever, ‘superhuman’ machines that can do things better than us (and to this end we entrust machines with human attributes such as agency and intellectual autonomy), but at the same time we want to keep them ‘sub-human’ in status. The perception of AI is influenced by both fiction and non-fiction, and this creates a goal-alignment problem: whose values and goals are actually represented in the development of AI?
Mr David Danks (Department Head and Professor of Philosophy and Psychology, Carnegie Mellon University) and Ms Aimee van Wynsberghe (Co-Founder and Co-Director, Foundation for Responsible Robotics) presented their project on ‘Cross-national comparisons of AI development and regulation strategies – the case of autonomous vehicles’. Danks observed that sometimes, when we think about trust, there is a feeling that we are not sure what we are talking about. However, he noted that trust is a well-understood notion and there is no need to reinvent the wheel. When we speak about trust and technologies, there are several important questions to consider: What do we expect from technologies? How do we make ourselves vulnerable through the use of technology? And how do we find a middle ground?
We can think of trust in two ways. On the one hand, there is behavioural trust, based on reliability, predictability, and expectations grounded in history. This kind of trust is useful, but it can be fragile. On the other hand, there is trust grounded in an understanding of how the system works. This is the kind of trust we have in one another, and it is based on our knowledge of people’s values, interests, and the like. This trust is helpful because it can be applied to novel situations. Danks gave the example of how pedestrians in Pittsburgh, USA (where Uber used to test self-driving cars heavily) interact with self-driving cars. There are many cases of people jaywalking in front of self-driving cars. When asked why they do this, they often say that they trust the car will stop, because they have seen cars stop when other pedestrians jaywalked. This is behavioural trust: the pedestrians trust the technology because they have seen it function a number of times.
Giving a brief overview of the project, van Wynsberghe explained that its aim is to explore the ways in which different states regulate AI technologies and how these regulations affect the notion of trust. The project also looks at differences between regulations and cultural norms across various countries. The hope is that its results can serve as a starting point for a more systematic understanding of best practices in technology, regulation, and social norms.
The session concluded with an emphasis on the need to facilitate a better understanding of human interactions with AI and robots. In the case of self-driving cars, for example, mechanisms that indicate to pedestrians when a car is in autonomous mode could improve this understanding.