Projects in action: Towards AI and data commons (Part 2)
18 May 2018 02:00h
Event report
This session explored the need for a common framework for data and artificial intelligence (AI) that would allow stakeholders to work together to make AI for good a reality. The moderator, Mr Amir Banifatemi (AI Lead at XPRIZE Foundation), reminded the participants of the twofold aims of the summit: identifying practical applications of AI to accelerate progress towards the sustainable development goals (SDGs), and formulating strategies to ensure the trusted, safe, and inclusive development and dissemination of AI technology and equitable access to its benefits.
Connecting remotely, Mr Wendell Wallach (Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics) recommended focusing on agile and comprehensive governance for AI, to ensure that its adoption benefits humanity and to minimise its potential harms. Comprehensive governance, ranging from technological solutions and standards to corporate oversight and soft law, can provide an agile way of managing this challenge. In this context, Wallach presented the Building Global Infrastructure for the comprehensive governance of AI (BGI for AI) initiative, which aims to address not only the technological, but also the political and practical challenges raised by AI.
After providing an overview of UN initiatives that touch upon AI-related issues (the High Level Committee on Programmes, the Internet Governance Forum, and UN-DESA’s Forum on Science, Technology and Innovation), Mr Vincenzo Aquaro (Chief of Digital Governance, Public Institutions and Digital Government Division, United Nations) explained that this global summit is already one of the most important international forums on AI, owing to its multistakeholder and multidisciplinary nature and especially its aim to develop – rather than merely report on – concrete initiatives. Aquaro reminded the participants of the SDGs’ mission to leave no one behind, which should also guide the work of the AI community in supporting the creation and promotion of AI solutions for the common good. AI should be a ‘universal resource for all of humanity, to be equally distributed, available to everyone, no matter the level of development and capacity’. Yet he noted that one of the biggest challenges is to create a common framework that regulates the proper use of AI without stifling innovation, and that addressing this challenge requires the involvement of all stakeholders.
Banifatemi then presented a common platform for AI for good, which would facilitate collaboration between AI practitioners and ‘problem owners’ (governments, civil society, domain experts, etc.) and provide solutions in a systematic manner, moving beyond pilots and individual projects. Mr Stuart Russell (Professor of Electrical Engineering and Computer Sciences, UC Berkeley) added that achieving this collaboration between problem owners and engineers, and converging pilot projects into global services, was the main stumbling block identified by the AI + Satellites track. Projects often result in publications that are filed away, while real problems on the ground persist. As this is a challenge common to almost all AI projects, standardised ways of collaborating are needed, along with experienced ‘shepherds’ to help avoid roadblocks that AI researchers are not equipped to anticipate. After all, AI for good is not just a technical issue, but also has governance and sociological dimensions requiring different kinds of expertise.
Mr Trent McConaghy (Founder, Ocean Protocol; Founder & CTO, BigchainDB) presented a framework for AI Commons: a scalable, decentralised platform that brings together problem owners, AI practitioners, and suppliers of data and infrastructure. The platform draws on a variety of data sources, provides incentives to share data, includes privacy provisions, and has built-in mechanisms for data governance (e.g. permissions, labels, and ontologies) and interoperability. McConaghy concluded that the SDGs are a useful way of summarising global problems, that approaching them with AI would benefit from a common platform, and that such a platform is not merely hypothetical but already under construction.
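As a purely illustrative sketch, not drawn from Ocean Protocol’s or the AI Commons’ actual specifications, the kind of dataset record such a platform might maintain could combine access permissions, descriptive labels, and ontology references along these lines (all field names below are assumptions):

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical illustration only: the structure and field names are assumptions,
# not the actual AI Commons or Ocean Protocol data model.
@dataclass
class DatasetRecord:
    dataset_id: str                                            # unique identifier on the platform
    owner: str                                                 # the 'problem owner' or data supplier
    labels: List[str] = field(default_factory=list)            # descriptive tags, e.g. SDG targets
    ontology_terms: List[str] = field(default_factory=list)    # references to shared vocabularies
    permissions: List[str] = field(default_factory=list)       # parties allowed to use the data
    privacy_level: str = "restricted"                          # e.g. 'open' or 'restricted'

    def grants_access(self, requester: str) -> bool:
        """Check whether a requester may use this dataset."""
        return self.privacy_level == "open" or requester in self.permissions

# Example: a (hypothetical) health dataset shared for an SDG 3 project
record = DatasetRecord(
    dataset_id="malaria-incidence-2017",
    owner="ministry-of-health",
    labels=["SDG3", "health", "epidemiology"],
    ontology_terms=["schema:Dataset"],
    permissions=["who-research-team"],
)
print(record.grants_access("who-research-team"))  # True
```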
Ms Francesca Rossi (Research Scientist at IBM Research and Professor at the University of Padova) highlighted the need for public involvement in creating AI, as AI will affect everybody. Besides practitioners and problem owners, it is important to include researchers, social scientists, data subjects, and policymakers, and these contributors need to be representative of different cultures, genders, disciplines, and stakeholder groups. Rossi emphasised the need for trustworthy AI, which should take into account fairness, values, explainability, and ethics, and called for collaboration with existing initiatives on AI ethics and trust.
Mr Chaesub Lee (Director of the Telecommunication Standardization Bureau, ITU) closed the session by highlighting the urgency of working towards AI for good, as AI technologies risk being hijacked by those using them with bad intentions. In addition, he reiterated the need for smoother transitions from pilot projects to global services.