Meaningful human control of weapons systems
12 Apr 2018 02:00h
Event report
The event was organised by the Campaign to Stop Killer Robots, a coalition of non-governmental organisations that advocates for a ban on weapons systems that would select targets and use force without ‘meaningful human control’. The discussion focused on human-machine interaction and on how to retain human control over weapons systems and individual attacks in all circumstances. The event was moderated by Ms Rasha Abdul Rahim, from Amnesty International.
Mr Paul Scharre, from the Center for a New American Security, delivered a presentation on ‘The role of human judgment in war’. He noted that this is the fifth year of discussion on a topic marked by extremely rapid technological development. In 2014, technical experts did not think that machines would be good at recognising and targeting objects; today, this is not only possible but one of the main topics of discussion. To guide the discussion on the notion of human control over lethal autonomous weapons systems (LAWS), he proposed the following scenario: ‘If we had all the technology we could imagine, what role would we want humans to play in war? And why?’ Scharre also discussed the role of the principles of International Humanitarian Law (IHL) in answering the question of which decisions in war require human judgment. IHL treats humans, not machines, as the legal agents obligated to respect the law: it applies to people, and machines are not legal agents. Autonomous weapons therefore need to be regulated, and the discussion should be about the bounds and limits of such weapons.
Ms Bonnie Docherty, campaign representative from Human Rights Watch, spoke next. She highlighted the different approaches and proposals put forward by states in the working papers submitted in advance of the meeting of the CCW Group of Governmental Experts on Lethal Autonomous Weapons Systems. She underlined, however, that a basic consensus exists: it is essential to maintain human control over the use of force. Although different terms are used (human control, judgment, or intervention), they all frame the need for meaningful and appropriate decisions, especially in cases such as distinguishing between combatants and civilians in line with IHL principles. Retaining human control over weapons is a moral imperative, and meaningful human control ensures that legal accountability remains possible regardless of the technology involved.
The second campaign representative was Prof. Noel Sharkey, from the International Committee for Robot Arms Control. He discussed the notion of ‘human supervisory control’ of weapons, proposing the following levels of targeting supervision:
- Human engagement in the selection of targets
- The program suggests alternatives and the human selects one of them
- The program selects the target that the human has to approve before the attack
- The program selects the target, and the human has a short time to veto it
Drawing on a psychological approach, he argued that human decision-making involves two types of processes: automatic and deliberative. The deliberative process consumes time and cognitive resources, and deliberation is easily disrupted by distraction; automatic reasoning, by contrast, requires few resources. Applying this to LAWS, Sharkey analysed the targeting-supervision scenarios listed above. Human engagement in the selection of targets is the ideal scenario. When a program suggests alternative targets from which the human must choose, ‘automation bias’ comes into play: humans tend to assume that one of the suggested alternatives must be right, so they will almost certainly choose one of the options. When the human has to approve an attack already selected by the program, a further issue arises: the human does not search for contradictory information. And when the human has only a short time to veto the targeting, the risk is that they focus on, and accept, only the evidence presented.
After this presentation on human psychology and decision-making bias, Sharkey ended his speech by outlining the ‘most drastic scenario of autonomy in the critical function of selecting a target’, in which there is no human control at all.
Lastly, it was noted that reframing autonomy in terms of human control can clarify the role of humans in war decisions, making the process more transparent and accountability clearer.
Related event
CCW Group of Governmental Experts on Lethal Autonomous Weapons Systems – First 2018 Meeting
9 Apr 2018 11:00h - 14 Apr 2018 01:30h
Geneva, Switzerland