Fighting Covid-19 with AI: How to build and deploy solutions we trust?
12 Jun 2020 14:30h - 16:00h
Event report
The workshop discussed the dilemmas surrounding the application of artificial intelligence (AI) and data. In particular, the speakers addressed how to mitigate risks and, at the same time, ensure the trustworthy use of AI and data, while reaping the benefits and opportunities stemming from new technologies.
The panellists agreed that trustworthiness should be considered both a prerequisite and a driver for innovation. Mr Sebastian Hallensleben (Head, Digitalisation and Artificial Intelligence, VDE e.V.) further explained that there are two sides to trustworthiness: one concerning the product itself (i.e., its ethically relevant characteristics), and the other concerning how trustworthiness is communicated to people. He referred to the report of VDE’s AI Impact Group, which analyses how abstract principles such as transparency or fairness can be operationalised and measured. He added that if we want to communicate trustworthiness to users, citizens, and customers, we need a concise data sheet that presents the relevant ethical characteristics in a standardised and clear way.
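As a purely illustrative sketch of what such a machine-readable data sheet might look like (the field names and the A–E rating scale below are assumptions for illustration, not the VDE AI Impact Group’s actual format), one could imagine a small structured record per AI product:

```python
from dataclasses import dataclass

# Hypothetical "ethics data sheet" for an AI product. The fields and the
# A-E rating scale are illustrative assumptions, not the VDE AI Impact
# Group's actual specification.
@dataclass
class EthicsDataSheet:
    product: str
    transparency: str     # e.g. "A" (fully documented) to "E" (opaque)
    fairness: str         # e.g. "A" (audited for bias) to "E" (untested)
    data_protection: str
    human_oversight: str

    def summary(self) -> str:
        """Render the ratings in a standardised, consumer-readable form."""
        return (f"{self.product}: transparency={self.transparency}, "
                f"fairness={self.fairness}, "
                f"data protection={self.data_protection}, "
                f"human oversight={self.human_oversight}")

sheet = EthicsDataSheet("Contact-tracing app", "B", "A", "A", "C")
print(sheet.summary())
```

The point of such a format is that the same few standardised ratings would appear on every product, so that non-expert users can compare AI systems at a glance.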
Building on the need for such a data sheet, Mr Mikael Jensen (CEO, Denmark’s new labelling scheme and seal for IT security and responsible use of data, Confederation of Danish Industries (DI)) presented the labelling scheme for IT security and responsible data use proposed by four Danish organisations: the Confederation of Danish Industries (DI), the Danish Chamber of Commerce, SMEdenmark, and the Danish Consumer Council. The aim of the initiative is not to label specific services, but rather to label companies themselves on their fair and ethical use of data. Jensen explained that, to operationalise the labelling scheme, the underlying study defined nine overall criteria that companies need to follow and that are meant to deliver digital trust.
These ethical criteria are anchored in Danish, European, and international legal frameworks. For the AI-related criteria, for example, the references are the European Union’s General Data Protection Regulation (GDPR) and the report of the High-Level Expert Group on Artificial Intelligence (AI HLEG), Ethics Guidelines for Trustworthy Artificial Intelligence.
He explained that the labelling system does not apply a one-size-fits-all approach, but instead offers a well-defined, risk-based approach in which the criteria are graded according to each company’s initial risk profile. Initial control questions place companies into one of four segments, from low risk to high risk, based on factors such as their data complexity or organisational complexity. The criteria are then applied according to the risk type.
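A minimal sketch of how such risk-based segmentation could work in practice (the control questions, scores, thresholds, and criteria below are hypothetical illustrations, not the actual Danish scheme):

```python
# Hypothetical sketch of risk-based segmentation: control questions yield
# complexity scores, companies fall into one of four segments, and the set
# of applicable criteria grows with the risk level. All scores, thresholds,
# and criteria are illustrative assumptions, not the Danish scheme itself.

# Criteria applied cumulatively as the risk segment rises (illustrative).
CRITERIA_BY_SEGMENT = {
    "low":         ["basic IT security", "privacy policy"],
    "medium-low":  ["basic IT security", "privacy policy", "GDPR audit"],
    "medium-high": ["basic IT security", "privacy policy", "GDPR audit",
                    "algorithmic transparency"],
    "high":        ["basic IT security", "privacy policy", "GDPR audit",
                    "algorithmic transparency", "external ethics review"],
}

def risk_segment(data_complexity: int, org_complexity: int) -> str:
    """Map control-question scores (0-5 each) to one of four segments."""
    score = data_complexity + org_complexity  # combined score, 0..10
    if score <= 2:
        return "low"
    elif score <= 5:
        return "medium-low"
    elif score <= 8:
        return "medium-high"
    return "high"

segment = risk_segment(data_complexity=4, org_complexity=3)
print(segment, "->", CRITERIA_BY_SEGMENT[segment])
```

The design choice this illustrates is that a company with simple data and a simple organisation faces only baseline requirements, while higher-complexity companies must satisfy progressively stricter criteria before earning the label.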
Mr Martin Ulbrich (Policy Officer, Directorate-General for Communications Networks, Content and Technology (DG CONNECT), European Commission (EC)) presented the EC’s ‘White Paper on Artificial Intelligence: a European approach to excellence and trust’, which supports a regulatory and investment-oriented approach with the twin objective of promoting the uptake of AI and addressing the risks associated with certain uses of this new technology. Ulbrich considered that striking the right balance between trustworthiness and innovation represents an important regulatory challenge for AI applications, especially in high-risk and emergency scenarios such as the COVID-19 pandemic, when regulation could hinder rapid and innovative responses.
The discussion on regulation also touched on the link between AI and data. Speakers considered that it is difficult to make sense of large data sets without AI, and that AI applications are useless if fed poor-quality data or no data at all. Therefore, discussions on AI need to be linked to data governance schemes and address the sharing, protection, and standardisation of data. However, Hallensleben noted that, compared to other technologies, AI presents important new characteristics, such as its ‘black box’ nature and self-learning elements, that make it necessary to update existing regulatory frameworks.
The session concluded with recommendations from each panellist. On the need to update existing regulatory frameworks, Hallensleben stressed the importance of developing a standardised way of describing the ethically relevant characteristics of AI systems, from which every stakeholder can benefit. Ulbrich reiterated the importance for regulators of keeping the right balance between regulating AI systems and fostering innovation. Finally, Jensen said that it is crucial that we as citizens and consumers have trust in the digital services we use every day, and that we need to consider algorithmic systems and AI as part of a broader agenda regarding digital trust.
Marco Lotti