Ideas for impact: AI breakthrough team project pitches
18 May 2018 02:00h
Event report
Mr Kenny Chen (Innovation Director, Ascender) moderated the debate, which focused on sharing key lessons from the four tracks of the conference.
Dr Stuart Russell (Professor of Computer Science, University of California, Berkeley) summarised the ‘AI + satellites’ track by highlighting four broad areas of projects: a) predicting deforestation before it occurs, b) tracking livestock to reduce cattle raiding, c) implementing capabilities to enable micro-insurance, and d) providing an infrastructure platform to deliver continuous, permanent global services based on the autonomous analysis of satellite data. He also stressed that while there are many laudable pilot projects, there is a gap between these projects and the availability of such services to the majority of people on a global scale. To ease the transition from pilot projects to global services, he suggested building a single platform to facilitate it.
Dr Ramesh Krishnamurthi (Senior Advisor at the World Health Organization) summarised the findings from the ‘AI + health’ track. He outlined four work streams: a) AI for primary care and service delivery, b) outbreaks, emergency response, and risk reduction, c) health promotion, prevention, and education, and d) AI health policy. He then described the 15 projects that the group discussed throughout the conference: AI to detect vision loss, detection of osteoarthritis, AI and digital identity, an AI-based health portal, AI-powered health infrastructure, AI-powered public health messaging, AI-powered epidemic modelling, malnutrition detection based on images, child growth monitoring based on AI, strengthening the coordination of AI-powered resources, AI to improve predictive abilities based on EMR data, AI for public health in India, pre-primary care with AI, AI-powered snake bite identification for first responders, and AI-based social media mining to track health trends.
Dr Renato de Castro (SmartCity Expert) summarised the ‘AI + smart cities and communities’ track, highlighting three key areas. First, AI used for urban solutions should give citizens a voice in co-creating their cities; it should also counter harassment and abuse. Second, AI should be used to foster smart governments. Examples came from Amsterdam and Brazil, and de Castro stressed that Amsterdam’s experience shows that being allowed to fail, and learning from failure, is very important. Third, AI can be used to empower smart citizens. Many examples came from Barcelona, which focuses on using AI to empower people, not to replace them. Overall, de Castro stressed the importance of focusing not only on cities but also on the regions surrounding them. This was an important lesson from the African context, where it is crucial that benefits are shared across the region so that citizens can benefit without moving to the city.
Also speaking about the findings of the ‘AI + smart cities and communities’ track, Mr Alexandre Cadain (CEO at Anima, Ambassador, AI XPRIZE) identified some of the key questions and challenges ahead. First, he argued that it is important to counter the fear, and the risk, that all smart cities will eventually look alike: tailored solutions that recognise history, cultural heritage, and linguistic diversity are important. Second, it is also important to move away from a top-down approach and to view citizens as the problem owners who can identify areas of need and possible solutions. Third, connections and knowledge sharing between emerging smart cities are needed, which may call for an ‘Internet of cities’.
Dr Stephen Cave (Executive Director of the Leverhulme Centre for the Future of Intelligence, University of Cambridge) summarised some of the findings of the ‘Trust in AI’ track. He outlined four crucial tasks for the future: addressing gender imbalances, reaching marginalised communities, addressing structural inequalities, and decolonising AI. He then identified three important themes of the track. First, developers must earn the trust of the stakeholder communities that are affected. Second, there is a need to build trust across borders. Third, AI systems must be demonstrably trustworthy. In addition, he highlighted broader outcomes of the track: the realisation that the ideas of trust and trustworthiness need to be interrogated in order to find a common frame of reference, the importance of recognising cultural differences, and the importance of recognising and fostering diversity.
Dr Huw Price (Professor of Philosophy at the University of Cambridge and Academic Director of the Leverhulme Centre for the Future of Intelligence, University of Cambridge) and Dr Francesca Rossi (Research Scientist at the IBM T.J. Watson Research Centre and Deputy Academic Director of the Leverhulme Centre for the Future of Intelligence, University of Cambridge) emphasised that it is important to create and use synergies and to enable everyone to be aware of, and learn from, existing projects. To this end, they introduced Trustfactory.ai, which they envision will address some of the concerns discussed in the track.
During the Q&A, Mr David Jensen (Head of the Environmental Cooperation for Peacebuilding Programme at UNEP) mentioned the ‘planetary dashboard for global water monitoring’, a new partnership between UN Environment, Google, JRC, ESA, and NASA. The Q&A also raised the important question of how to meaningfully engage with GAFA (Google, Apple, Facebook, Amazon), which was addressed with a reference to creating diversity and implementing multistakeholder approaches.