Artificial intelligence and the law
Event report
The conference, Artificial intelligence (AI) and the law, organised by the Law Faculty of the University of Geneva in collaboration with the Geneva-Harvard-Renmin-Sydney Platform, tackled issues raised by the evolution and development of AI. The discussion focused specifically on the latest developments in robotics. So-called ‘smart robots’ are now able to interact with humans and their environment, as well as to learn, adapt, and evolve, and therefore to make autonomous decisions. The conference featured discussions on the global impact of AI on the law, the concept of choice and optimisation, state responsibility, and finally, the humanitarian implications of the use of AI.
In his introductory remarks, Prof. Yves Flückiger (Rector, University of Geneva) addressed the importance of research in digital policy-related topics. He highlighted that the evolution of new technologies raises crucial legal concerns and implications that need to be explored. Discussions on these topics are already taking place in international organisations and non-governmental institutions in Geneva and around the world. Moreover, Switzerland, as shown by the adoption of the Swiss digital plan, is working to prepare society for the challenges posed by the digital revolution.
Prof. Yanebi Meng (Renmin Law School) talked about big data and algorithms from the perspective of China’s Anti-Monopoly Law. She explained that the current situation represents a new modus operandi for which standardised laws do not yet exist. Meng said that China is trying to comply with the standards applied in Western countries. She also explained that the existing legislation neither provides a clear definition of data ownership nor grants users the right to own their data. The Anti-Monopoly Law prohibits ‘monopoly agreements’, defined as agreements between competing businesses or trading partners containing specific restrictions on competition. In this context, it is important to understand that data can be a source of market power in the digital economy; with regard to big data and algorithms, however, the reach of the Anti-Monopoly Law in the digital economy remains limited.
Prof. Jovan Kurbalija (Executive Director and Co-Lead, Secretariat of the High-level Panel on Digital Cooperation) talked about the concept of choice and the responsibilities of governments, businesses, and users in digital policy. He explained that legal responsibilities need to be placed within a broader framework of the overall responsibilities of three actors: governments, businesses, and users. Kurbalija said that responsibilities stem from the concept of choice. For the first time in history, the underlying principle of freedom of choice is being challenged by machines. Machines are now trained to offer us better, more optimised choices based on the data we provide. A conflict is therefore emerging between the freedom of choice and the optimisation of choice. The dynamic between choice, represented by individuals in their freedom, and optimisation, represented by new technologies, is affecting – and will increasingly trigger – fundamental discussions about society.
Focusing on state responsibility, Kurbalija explained that attributing state responsibility means identifying the circumstances in which states should be held responsible for cyber-attacks launched from their territory, as well as states’ right to self-defence when such attacks are directed at them. He further stated that it is difficult to hold a state responsible for a cyber-attack launched from its territory, and this is precisely why the United Nations Group of Governmental Experts (UN GGE) failed to reach consensus at its last meeting. One hard-line school of thought considers states responsible for almost all attacks coming from within their territories. However, such an approach would entail massive monitoring and surveillance mechanisms, with serious consequences for human rights and privacy.
Another group of experts identifies state authorities as responsible for the actions of non-state actors if there is an interaction or link between the two. A third approach relies on the concept of due diligence, namely, that states have the responsibility to do their best to prevent transboundary attacks.
Finally, Kurbalija concluded by underlining the link between virtuality and territoriality. Despite NATO recognising cyberspace as a fifth domain of operations, no cyber-activity happens outside the physical world: although cyber-operations seem to take place online only, they rely on physical critical infrastructure such as cables. Legally speaking, it is therefore difficult to find anything happening in cyberspace that is, at the same time, outside a country’s control. The concept of cyberspace should therefore be revisited and better defined.
Dr Amandeep Singh Gill (Chair, the CCW Group of Governmental Experts on Lethal Autonomous Weapons Systems, and Executive Director and Co-Lead, Secretariat of the High-level Panel on Digital Cooperation) talked about nudging accountability and the guiding principles for the application of technologies in the area of Lethal Autonomous Weapons Systems (LAWS). He highlighted the challenge of applying the law to LAWS and of attaching an additional protocol to the Convention on Certain Conventional Weapons (CCW) to cover the application of AI to military systems. During the negotiations, the difficulty of defining AI and of merging the different perspectives of the tech community, legal experts, and the military was clear. For instance, while legal experts are used to viewing every object through a military-civilian duality, it is hard to classify an AI object as having a military or a civilian purpose. In addition, he argued that it is difficult to monitor cyber activities because doing so involves extensive analysis and coding of the large amounts of data gathered. Furthermore, cyber activities tend to be dispersed, as they do not take place in a limited, localised space. Existing monitoring procedures (such as those put in place for nuclear facilities with the aim of reducing nuclear proliferation) would thus be difficult to implement for cyber activities.
In this context, a new approach was needed, and this is how the notion of human responsibility came about. How is it possible to ensure accountability in the different phases of building new technologies? To this end, the GGE agreed on ten guiding principles, including the application of the International Humanitarian Law principles of proportionality and distinction in cyberspace, non-anthropomorphising, and risk assessment related to weapons reduction. However, there is still a need to construct practical modalities, not just guiding principles.