Cybersecurity and artificial intelligence: How to allocate liability between the stakeholders?
8 Apr 2019 16:30h - 18:15h
Event report
The session, organised by the Faculty of Law of the University of Geneva (UNIGE), mapped the technical specificities and challenges of the technologies involved, addressed the allocation of liability among the different stakeholders and the insurance options in place, and discussed the related regulatory challenges.
The event was moderated by Dr Yaniv Benhamou (PhD, Attorney-at-law, Lecturer, IP & Privacy, University of Geneva), who explained that the topic of discussion reflects a University of Geneva research project on cybersecurity and civil liability which seeks to address the liability of all stakeholders. To contextualise the specific focus of the panel, the link between artificial intelligence (AI) and cybersecurity, Benhamou recalled that AI is deeply multistakeholder in character: a variety of different stakeholders are involved in the creation and implementation of AI. In addition, with the rapid transformation of AI-based methods, machine learning systems have a dual use: they develop new capacities for cybersecurity, but they also create new avenues for cyberattacks. This multidisciplinary setting requires a common language among all stakeholders in order to face the challenges posed by AI. When it comes to AI algorithms, the main question concerns the black box and how to allocate liability across the legal fields involved, such as intellectual property, privacy, databases and trade secrets, unfair practices, competition, contract, and tort, and across the actors involved, such as the manufacturer, programmer, trainer, and users.
The technical part of the session highlighted four types of vulnerabilities: the use of AI to penetrate and force a system; data poisoning in order to break the system; challenges connected to flaws in the design of the data; and the targeting of human vulnerabilities with the goal of understanding their behaviour. The current state of understanding of these technologies shows that when a failure occurs and there is neither interpretability nor predictability, the result is a black box of liability.
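Data poisoning, one of the vulnerability types listed above, can be made concrete with a minimal sketch: an attacker who tampers with part of the training data quietly degrades the model that is later deployed. The example below is purely illustrative and was not presented at the session; it assumes a scikit-learn logistic regression on synthetic data and a simple label-flipping attack.

```python
# Illustrative sketch only: label-flipping data poisoning against a simple
# classifier, using scikit-learn's synthetic data (hypothetical example).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean model: trained on untampered labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoned model: an attacker flips 30% of the training labels.
rng = np.random.default_rng(0)
flip = rng.random(len(y_train)) < 0.3
y_poisoned = np.where(flip, 1 - y_train, y_train)
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", accuracy_score(y_test, clean.predict(X_test)))
print("poisoned accuracy:", accuracy_score(y_test, poisoned.predict(X_test)))
```

The point of the sketch is that the failure appears only downstream, in degraded predictions, which is precisely why a lack of interpretability turns such incidents into a black box of liability.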
Dr Olivier Crochat (Executive Director, Centre for Digital Trust (C4DT), EPFL) explained the technical features of AI, arguing that it has recently come to be largely identified with machine learning. There is a difference between AI, understood as a trained algorithm, and machine learning, in which the knowledge does not lie in the rules but is contained in the data in various forms. A more recent development in this context is deep learning, in which the black box becomes even darker and more opaque. The main challenge posed by AI technologies is interpretability, understood as tracing back the chain of decisions in order to work on accountability: in other words, understanding why an action was taken and from which inputs. Indeed, it should be stressed that when a system's decisions can be interpreted, its vulnerabilities can also be understood. The second challenge stressed by the speaker is not to take AI for granted, especially with regard to potential biases and gaps in the datasets. Furthermore, he underlined that in order to make a system as robust as possible, improving both its performance and its security, the focus should not be exclusively on potential external attacks, but also on unpredicted behaviour resulting from scarce data and rare events that deviate from normality. Finally, using the example of self-driving cars, he explained that different actors should be taken into account when discussing liability: the developer of the car, the driver, and connected variables such as governments and the authorities responsible for separating cars from pedestrians, whose behaviour is often too erratic for the car's systems to understand and respond to accordingly.
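One way to make the "which inputs led to the decision" point concrete is post-hoc attribution: asking which features a trained model actually relied on. The sketch below is a hypothetical illustration, not a technique presented by the speaker; it assumes scikit-learn's permutation importance applied to a black-box classifier.

```python
# Hypothetical sketch: post-hoc interpretability via permutation importance,
# i.e. measuring how much the test score drops when each input feature is shuffled.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Features whose shuffling hurts accuracy most are the ones the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```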
Prof. Stephane Marchand Maillet (Department of Computer Science, University of Geneva) complemented the technical framing of the issue, explaining that while in traditional artificial intelligence an algorithm was typically used only once it had been thoroughly studied and developed, in machine learning the algorithms learn by themselves. As a result, the main challenge is to understand how the data is processed by the learning process. With regard to cybersecurity, he argued that it falls within adversarial machine learning, a field that studies how to protect any type of machine learning from such threats. This is a brand new field that raises challenges as well as additional ethical questions, especially when these systems make decisions affecting the physical safety of individuals.
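As a minimal, hypothetical illustration of what adversarial machine learning studies (not code from the session), a fast-gradient-sign-style perturbation against a simple logistic regression shows how small, deliberate changes to the inputs can flip a model's predictions:

```python
# Hypothetical FGSM-style attack on a linear classifier: a small perturbation
# of each input in the direction that increases the loss flips many predictions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

# For logistic regression, the gradient of the loss w.r.t. the input x is
# (p - y) * w, where p is the predicted probability and w the weight vector.
p = model.predict_proba(X)[:, 1]
grad = (p - y)[:, None] * model.coef_[0]

# Move each input a small step in the direction that increases the loss.
epsilon = 0.5
X_adv = X + epsilon * np.sign(grad)

print("accuracy on clean inputs:    ", accuracy_score(y, model.predict(X)))
print("accuracy on perturbed inputs:", accuracy_score(y, model.predict(X_adv)))
```

Defending models against this kind of manipulation, alongside poisoning and evasion more broadly, is the protective side of the field the speaker referred to.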
Mr Steven Meyer (CEO and Co-Founder of ZENData, Cybersecurity Services, ZENData Geneva) explained that the cyber industry is putting a lot of energy into scaling up and compensating for the existing research gap. While AI has largely been used to improve the lives of individuals and to foster innovation ’for good’, the malicious side of AI features attackers using these techniques to their own advantage, for instance through malware that uses them to penetrate and adapt to a system, or through the use of open source tools and cloud computing to attack the AI engine itself. As a result, the threats posed by the malicious use of AI will be different and more challenging, and solutions might have to be sought outside the AI landscape.
Ms Ria Kechagia (Legal Counsel, Scientific Collaborator, Department of Commercial Law, UNIGE/ C4DT, EPFL) talked about the legal regimes currently in place for technology-related problems. The first is product liability, which characterises the European approach to the topic. In practice, this proves complicated: it is necessary to understand what the problems are and whether the manufacturer will be liable for them, for instance in the production of an automated car; additionally, users would need to prove that they took all the necessary measures, including the duty of care to keep the product in good shape, in line with the information received from the manufacturer. Moreover, addressing the balance between innovation and security costs for businesses, she underlined that high security costs for the manufacturer might hamper innovation. One solution would be for the manufacturer to invest in security and ask consumers to pay more for it, with insurance companies lowering their premiums in return. Overall, this shows an urgent need for a legal regime that better accommodates the challenges under discussion.
Ms Limor Shmerling Magazanik (Managing Director, Israel Tech Policy Institute) explained that while the ambition is to establish the truth, cybersecurity has created challenges that make it impossible to fully tackle the problem. With regard to the black box discourse, she underlined that policymakers need to acknowledge that a certain amount of information will always be lacking. In this complex context, current practice sees the use of outsourced processors, which bear a lower level of liability, while the controller remains accountable for the end-to-end process. It is therefore difficult to differentiate and regulate AI, especially keeping in mind the need to balance regulation against innovation. A few regulatory tools, such as the General Data Protection Regulation (GDPR) and strict liability regimes for self-driving cars, are already in place, and these could be complemented by review boards as a way to develop answers to the current challenges.
Ms Stephanie Borg Psaila (Interim Director of DiploFoundation and the Geneva Internet Platform) talked about AI-related digital policy issues, arguing that policymakers and diplomats, not just the judiciary, should be aware of and properly understand these challenges. The digital policy issues related to AI reflect technical challenges that are already known, such as the use of AI by cybercriminals (AI vs AI); AI is indeed already being used by cybercriminals to avoid detection. Additional policy issues concern the protection of data, as the most valuable resource for criminals; the need to improve current knowledge of AI; the need to ensure transparency and trust in AI decision-making processes; and the liability of all the actors involved.
The panel then focused on the insurance options available, building on the Mondelez vs. Zurich Insurance case, which has raised the debate over whether the incident should be considered physical loss of or damage to electronic data, programs, or software, including physical loss or damage caused by the malicious introduction of machine code or instructions, or whether cyberwar cases are excluded from insurance provisions.
Dr Asaf Lubin (Lecturer, Yale University & Cybersecurity Postdoctoral Policy Fellow, Fletcher School of Law and Diplomacy at Tufts) argued that the predominant approach taken by regulators has been to avoid the issue. Nonetheless, in the case of self-driving cars, an example comes from the UK, which has implemented a model that makes the driver liable for possible accidents. The driver would then rely on their insurance company, which would in turn work with the manufacturer and assess its liability. Additionally, he remarked on the importance of cybersecurity policies and how these are becoming recognised as binding contracts in the United States. With regard to the specific focus on insurance, he recalled that there is a fundamental difference between silent and affirmative cyber insurance, and that insurance companies are still not sufficiently aware of cybersecurity.
Ms Līga Raita Rozentāle (Director of Governmental Affairs for Cybersecurity Policy, Microsoft) gave an industry perspective, arguing that the two main aspects of the topic are what the actual threat is and what solutions the industry can undertake. She stressed the importance of understanding how AI can pose challenges, such as in cases of information manipulation and fake data injection. In this context, the focus should also be on developing proactive measures for hunting the misuse of AI, able to identify threats and attacks more easily. In this view, insurance is one part of the holistic solution that needs to be put in place, within which regulation should be careful not to limit innovation.
Ms Katarzyna Gorgol (Adviser Digital Affairs and Telecommunication, Delegation of the European Union and other international organisations in Geneva) talked about the new EU framework for the certification of cybersecurity programmes, services, and processes: an open, voluntary system that would be recognised throughout the EU. The framework is designed to be transparent and open to all stakeholders. Focusing on product liability, she explained that there is currently a discussion at EU level, supported by two expert groups, assessing how the product liability directive, constructed around movable and tangible products, can be applied to behaviours and to intangible products or services. Finally, tackling the broader topic of AI, she recalled the EU's approach of fostering innovation in AI through a human-centric approach, based on the European Commission's seven principles ’for achieving trustworthy AI’, which call for the accountability, explainability, and transparency of algorithms.
By Stefania Grottola
Related event
World Summit on the Information Society (WSIS) 2019
8 Apr 2019 09:30h - 12 Apr 2019 19:30h
Geneva, Switzerland; and online