Public views on fully autonomous weapons
29 Mar 2019 01:00h
Event report
This side event, organised by the Campaign to Stop Killer Robots and moderated by Ms Mary Wareham (Human Rights Watch), focused on public views on fully autonomous weapons. The event featured contributions by Dr Thompson Chengeta (International Committee for Robot Arms Control), Ms Alena Popova (Founder, Ethics and Technology), and Ms Liz O’Sullivan (Activist and Operational Leader).
Chengeta talked about the notion of human control and the importance of establishing its degree. He structured his presentation around two main questions: What makes human control a necessity? And what determines the degree of human control that should be exercised? With regard to the first question, he explained that human control is seen as necessary from military, philosophical, ethical, and legal viewpoints. With regard to the second question, he argued that the required degree of human control is the sum of several factors. First, there are state obligations determined by jus ad bellum (‘right to war’) and the laws on the use of force. Second, there is the production stage, in which control by design is implemented during development; he stressed that it is in the production stage that human control is first enabled. Third, human control must be present while using weapons in order to comply with jus in bello (international humanitarian law (IHL)). In the last stage, termed ‘after use’, the result of an action has to reflect the intention of its initiator. Following these concepts, he outlined four stages of human decision-making, from the production to the use of weapons:
- Human decision-making during the production of the weapon: civil/business liability
- Human decision-making when state authority uses the weapon: state responsibility
- Human decision-making during the initial command to deploy: command responsibility
- Human decision-making during targeting: individual responsibility
He concluded that states should support a legally binding instrument banning the development and use of autonomous weapons systems.
O’Sullivan stressed the dangers of delegating critical functions to algorithms, and argued that lethal autonomous weapons (LAWs) are weapons of mass destruction rather than conventional weapons. She supported her argument with several points. First, algorithms perpetuate the biases embedded in the data used to train them; moreover, training data can be incomplete, for instance failing to adequately represent persons with disabilities. As a result, the deployment of such technologies leads to higher error rates when the systems encounter civilians and the wounded. Second, any targeting and attacking function is inherently vulnerable to accidents and hacking. Third, artificial intelligence (AI) poses the black box dilemma, i.e., engineers are not always able to understand why a machine has made a particular decision, which makes establishing accountability in a military context even more difficult. Fourth, she explained that machines based on AI and machine learning mechanisms will never be able to make decisions based on a moral framework sufficient for passing judgement on human life. A machine will therefore never be able to understand when a civilian becomes a combatant by taking part in hostilities, when a soldier is surrendering, or similar situations that require subtle human control and judgement.
Related event
Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE LAWS)
25 Mar 2019 14:30h - 29 Mar 2019 14:30h
Geneva, Switzerland