Researchers propose social and environmental certification framework for AI
Researchers at the Montreal AI Ethics Institute, Microsoft, McGill University, and Carnegie Mellon University have proposed a framework to quantify the environmental and social impact of artificial intelligence (AI). Called SECure, the framework is built on four pillars.

(a) Under the compute-efficient machine learning (ML) pillar, the researchers propose a standardised metric that would allow quantified comparisons across hardware and software configurations, so that practitioners can make informed decisions when choosing a system (a sketch of such a metric appears below). The metric is also intended to help lower the computational burden that makes access inequitable for researchers who are not affiliated with organisations that own large computing and data-processing infrastructure.

(b) The second pillar proposes the use of federated learning to perform on-device training and interfacing of ML models (see the federated-averaging sketch below). This could decrease the carbon impact when computations are performed where electricity comes from clean sources; it would also mitigate the risks and harms of data centralisation, such as data breaches and privacy intrusions.

(c) The third pillar proposes embedding data sovereignty in the system design, as a way to ‘combat socially extractive practices in machine learning today’ and to allow users to withdraw consent for the use of their data (illustrated in the consent-registry sketch below).

(d) The fourth pillar, LEED-esque certification, envisions a certification process whose metrics would let users assess the state of an AI system relative to others. Built on the three preceding pillars, it is designed to certify products and services on their environmental and social impacts in a standardised manner (see the scorecard sketch below).

Overall, the researchers believe that the SECure framework would create ‘the impetus for consumers and investors to demand more transparency on the social and environmental impacts of these technologies’.
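The paper does not fix the exact form of the compute-efficiency metric. As a minimal sketch of the idea, the Python below (all names and numbers are illustrative assumptions, not from the paper) normalises a run’s estimated CO2-equivalent emissions by the useful work performed, producing one number that can be compared across hardware and software configurations.

```python
"""Hypothetical compute-efficiency metric (illustrative, not from the SECure paper).

Normalises estimated CO2-equivalent emissions by useful work done, so two
hardware/software configurations can be compared on a single number.
"""

from dataclasses import dataclass


@dataclass
class RunProfile:
    """Measured profile of one training or inference run."""
    energy_kwh: float          # total energy drawn by the run
    carbon_intensity: float    # grid carbon intensity, kg CO2e per kWh
    examples_processed: int    # units of useful work performed


def emissions_per_1k_examples(profile: RunProfile) -> float:
    """kg CO2e emitted per 1,000 examples processed (lower is better)."""
    total_co2e = profile.energy_kwh * profile.carbon_intensity
    return 1000.0 * total_co2e / profile.examples_processed


# Comparing two made-up configurations running the same workload:
gpu_cluster = RunProfile(energy_kwh=120.0, carbon_intensity=0.45,
                         examples_processed=2_000_000)
edge_device = RunProfile(energy_kwh=3.0, carbon_intensity=0.05,
                         examples_processed=40_000)

print(f"cluster: {emissions_per_1k_examples(gpu_cluster):.4f} kg CO2e / 1k examples")
print(f"edge:    {emissions_per_1k_examples(edge_device):.4f} kg CO2e / 1k examples")
```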
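Federated averaging (FedAvg) is the canonical algorithm behind the on-device training the second pillar describes. The NumPy sketch below simulates it on a toy linear-regression task; the three-device setup and all constants are illustrative assumptions, but it shows the property the pillar relies on: only model weights leave each device, never the raw data.

```python
"""Minimal federated-averaging (FedAvg) simulation with NumPy."""

import numpy as np

rng = np.random.default_rng(0)


def local_update(w, X, y, lr=0.1, epochs=5):
    """A few gradient-descent steps on one device's private data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w


# Three devices hold private datasets drawn from the same true model.
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((X, y))

w_global = np.zeros(2)
for _ in range(20):
    # Each device trains locally, then uploads only its weights.
    local_weights = [local_update(w_global.copy(), X, y) for X, y in devices]
    # The server averages weights, weighted by local dataset size.
    sizes = np.array([len(y) for _, y in devices])
    w_global = np.average(local_weights, axis=0, weights=sizes)

print("recovered weights:", np.round(w_global, 3))  # close to [2.0, -1.0]
```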
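The paper presents data sovereignty as a design principle rather than an API. As one hedged sketch of how consent withdrawal might be enforced at training time, the hypothetical ConsentRegistry below (invented for illustration) filters records so that withdrawal takes effect before the next training pass.

```python
"""Hypothetical consent registry (illustrative; not from the SECure paper).

A record is usable only while its owner's consent flag is active."""


class ConsentRegistry:
    def __init__(self):
        self._consent: dict[str, bool] = {}

    def grant(self, user_id: str) -> None:
        self._consent[user_id] = True

    def withdraw(self, user_id: str) -> None:
        """Users can revoke consent for their data at any time."""
        self._consent[user_id] = False

    def usable(self, user_id: str) -> bool:
        return self._consent.get(user_id, False)


def training_view(dataset, registry):
    """Yield only records whose owners currently consent."""
    return [rec for rec in dataset if registry.usable(rec["user_id"])]


registry = ConsentRegistry()
registry.grant("alice")
registry.grant("bob")

dataset = [{"user_id": "alice", "x": 1}, {"user_id": "bob", "x": 2}]
print(len(training_view(dataset, registry)))  # 2

registry.withdraw("bob")  # bob exercises the right to withdraw
print(len(training_view(dataset, registry)))  # 1
```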
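The paper likewise leaves the certification’s scoring scheme open. Purely as a sketch, a LEED-style scorecard could weight per-pillar scores and map the total onto tiers; the weights and thresholds below are invented for demonstration and are not part of SECure.

```python
"""Hypothetical LEED-style scorecard (weights and tiers are invented)."""

PILLAR_WEIGHTS = {
    "compute_efficiency": 0.4,
    "federated_learning": 0.3,
    "data_sovereignty": 0.3,
}

# Rating tiers, mirroring LEED's certified/silver/gold/platinum bands.
TIERS = [(90, "platinum"), (75, "gold"), (60, "silver"), (40, "certified")]


def certify(pillar_scores: dict[str, float]) -> str:
    """Map per-pillar scores (0-100) to a certification tier."""
    total = sum(PILLAR_WEIGHTS[p] * pillar_scores[p] for p in PILLAR_WEIGHTS)
    for threshold, tier in TIERS:
        if total >= threshold:
            return tier
    return "not certified"


print(certify({"compute_efficiency": 85,
               "federated_learning": 70,
               "data_sovereignty": 90}))
# -> "gold" (weighted total = 82.0)
```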