Designing an AI ethical framework in the Global South

1 Dec 2022 09:00h - 10:00h


Event report

The session explored how regulatory frameworks for AI have been shaped in the Global South and discussed the issues of multistakeholderism, inclusion, oversight, and enforceability. The first part of the session presented an overview of the regulatory steps taken across Global South countries. The overarching conclusion is that some countries, such as Brazil and China, endorse hard legislation, while others take a soft-law approach.

In India, the approach rests on self-regulatory, non-binding guidance with some focus on risk-based governance. Promoting innovation, education, and the establishment of centres of excellence is central. The recommendations of the UNESCO AI principles and initiatives like ‘AI for All’ complement ministerial committees that look at specific aspects of governance. These recommendations, however, lack binding obligations for industry and operators, as well as enforcement mechanisms.

On the other hand, the Beijing model of AI governance is a hybrid approach that embeds science and technology in national laws to propel the growth of the national economy. The idea is to aggregate data for AI by encouraging the population to engage in the digital transformation. Besides initiatives like Made in China 2025, the Internet Plus Initiative, and the New Generation AI Development Plan, which set AI commercialisation and marketplace goals, China has established a centralised but multistakeholder-oriented body. Like other countries, China also regulates AI through data protection legislation.

Brazil has been observing the OECD working group on AI while developing national initiatives around its AI strategy and an AI Act. In 2020, the AI strategy was framed around legislation, regulation, responsibility, and ethical use. Notably, Congress initiated public hearings in which several models of governance were proposed, emphasising the importance of multiple authorities coordinated by a multistakeholder committee including academia, the public sector, the private sector, and civil society.

Progress is slow in Africa, where legislation is in its initial stages. The main focus is still on personal data protection; the Malabo Convention at the African Union level has not yet entered into force, as only thirteen of the fifteen required countries have ratified it. Currently, only six countries have a national AI strategy, and only Mauritius has an AI Bill.

The Chilean experience echoed that the purpose of policy is not to set the boundaries of AI, but to set the stage and build the capacities at the national level to implement AI. At the same time, as is being attempted with the National AI Strategy 2021-2030, ethical and responsibility challenges should be recognised without taking strong stances on what the regulatory frameworks for AI system liability should be.

The second part of the session discussed what is needed for an ethical AI framework, comparing frameworks and approaches to inclusion, multistakeholderism, public consultations, and clarity in enforcing existing regulation. Participation is still dominated by organisations already part of the digital rights agenda. External dialogue with laypeople, indigenous groups, gender and racial diversity groups, and youth requires targeted and deliberate outreach to be genuinely inclusive.

Another point was how to include different audiences to ensure multistakeholderism. In Africa, the African Commission on Human and Peoples’ Rights is engaging communities across the continent. In Chile, experience shows that government-instigated focus groups and civil-society-led dialogue are a good combination. The 2015 Net Neutrality Campaign in India exemplifies how a technical discussion can become a mass movement if brought closer to the general population using the technology itself. Grassroots campaigns can open up the debate, while remaining mindful of the unilateral transmission of values traditionally carried out by the private sector and civil society.

What needs to be done in the future? Dialogue has to be more open, because the most affected groups, namely vulnerable and exploited groups, are not included. To ensure the enforceability of AI frameworks, the focus could turn to public-private partnerships and programmes like innovation hubs or hackathons, but also to more policy experimentation through tentative regulatory frameworks. More opportunities could be created for regulators and public administrations to get closer to the technical field. The paradox between transparency and the trade secrets of algorithms could also be addressed by considering regulation of algorithmic ranking and other recommendation systems.

By Jana Misic


The session in keywords

[Word cloud for session WS497: Designing an AI ethical framework in the Global South, IGF 2022]