The digital economy in the age of AI: Implications for developing countries (UNCTAD)

6 Dec 2023 15:00h - 16:30h UTC

Disclaimer: This is not an official record of the UNCTAD eWeek session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the UNCTAD website.

Full session report

Teki Akuetteh

The discussion centered on the impact of artificial intelligence (AI) on developing nations, particularly African countries. It highlighted the transformative influence of AI technologies on African economies, addressing issues related to poverty, social impact, and sustainability. However, challenges were also identified, such as the high costs associated with AI development and the influence of big tech companies. The need for global cooperation, equitable access to resources, and supportive infrastructure was emphasized. Recognizing and compensating data contributors, careful regulation, and inclusivity were also deemed crucial for responsible and equitable AI development in developing nations.

Uma Rani

Uma Rani, an employee at the International Labour Organization (ILO), strongly advocates for digital worker rights. She has conducted extensive research on the impact of AI in developing countries, shaping her belief in the importance of protecting workers in the digital era. However, some perceive her as biased due to her background as a development economist and her work in the platform economy.

The adoption of AI in workplaces has not yet reached a large scale, but there has been significant investment in AI-related tools. This has led to the automation or outsourcing of tasks in various industries, raising concerns about the replacement of specific tasks rather than entire jobs. This argument suggests that AI adoption may result in increased productivity and efficiency, but there is a risk of certain tasks becoming obsolete.

One of the worries associated with AI adoption is the potential de-skilling of highly educated workers. In some cases, individuals with advanced degrees are assigned mundane tasks related to AI development, such as cleaning and feeding data to AI systems. Additionally, examples exist of IT graduates working on removing objectionable material from the web, which indicates a potential waste of their expertise and skills.

Content moderators, who are responsible for moderating online content, often experience psychosocial impacts and are unable to discuss their work due to non-disclosure agreements. This lack of communication and support can lead to the internalization of stress and negative mental health consequences. There have been calls for authorities to intervene and address these issues surrounding content moderation, as exemplified by a case involving Meta in Kenya.

Understanding the data value chain is crucial, and Uma Rani argues that worker empowerment can be achieved by fighting for data rights. She emphasizes the need for complete transparency in the collection, cleaning, and analysis of data, and in the outcomes derived from it. This would ensure that workers have control and ownership over the data they contribute. Furthermore, discussions on data fiduciaries and data trusts have been initiated, asserting everyone’s right to the data they contribute and to a say in how it is used.

Transparency in algorithms is also a key concern. It is argued that algorithm transparency is necessary to address fair practices and equity issues. By providing transparency, biases and potential discrimination embedded in algorithms can be identified and mitigated. This contributes to a more just and equitable use of AI and data-driven technologies.

The development of AI requires ethical regulation, as insufficient attention is currently being given to the process. The argument asserts the importance of considering ethical implications in the development and use of AI to ensure it aligns with societal values and upholds ethical standards. This includes addressing issues such as privacy, bias, and accountability.

Contrary to the widespread fear of job displacement, it is highlighted that artificial intelligence and emerging technologies do not necessarily result in job losses for developers and computer programmers. While AI has streamlined and facilitated certain programming tasks, human developers are still required for further development and innovative thinking. The argument suggests that rather than replacing jobs, AI can enhance and support the work of developers and programmers.

In light of technological advancements, there is a need to revisit and reframe industrial and employment policies. The advent of AI and platforms like GitHub has led to a shift from formal jobs to more informal arrangements. This raises concerns about de-skilling and the need to develop products that benefit our own societies and economies. Policies should be updated to provide support and address the challenges posed by technological revolutions.

Finally, the ethical development and use of AI is crucial. There is a risk that AI could be used for monitoring, surveillance, and work intensification, leading to worse working conditions. It is argued that clear regulation of AI is necessary to protect workers throughout its development and use. This includes safeguarding against unethical practices, ensuring privacy, and promoting fairness and respect for worker rights.

Overall, the discussion highlighted various arguments and perspectives concerning digital worker rights, AI adoption, the data value chain, algorithm transparency, ethical regulation, and the impact of AI on job displacement. Noteworthy observations include the need for worker empowerment, the importance of revisiting industrial and employment policies, and the risks and challenges associated with AI’s development and use.

Jovan Kurbalija

The analysis of perspectives on Artificial Intelligence (AI) and its impact reveals several interesting and important insights.

One perspective emphasizes the significance of accessible and widely distributed knowledge, holding that knowledge is still fairly distributed worldwide. It is argued that AI should be found not just in big centers but also in niches, flea markets, and favelas, so that it is accessible to all regardless of location or resources. This perspective aligns with the principles of reducing inequalities and promoting industry, innovation, and infrastructure.

Another viewpoint raises concerns about the risks associated with AI. One such risk is “knowledge slavery”, in which a centralized system could codify and control access to historical and current knowledge. While this perspective acknowledges that AI has a higher chance of producing positive impacts, it stresses that the immediate risks, notably the exacerbation of the digital divide and the drift towards knowledge slavery, need to be addressed urgently.

The analysis also reveals concerns about monopolies and misinformation in the AI space. It is argued that these issues pose a bigger immediate risk than extinction. Monopolies can curtail fair market competition, while misinformation can generate false identities. This perspective emphasizes the importance of peace, justice, and strong institutions in combating these challenges.

Another intriguing observation is the mention of fear-mongering surrounding AI. The argument is made that fear-mongering can create unnecessary confusion and diversion from addressing the real issues related to AI’s understanding and usage. It is suggested that a more measured and informed approach is required.

The analysis also brings up the need for open, honest, and frank discussions about AI. Currently, discussions around AI and its implications are often steeped in secrecy and confusion. Transparency is highlighted as crucial for gaining a better understanding of the technology and its processes.

Additionally, it is noted that AI systems have evolved over time: early attempts to codify human logic gave way to approaches based on probability, because human logic proved too complex to encode. It is mentioned that such systems are not capable of simple mathematics, as they are based not on logic but on probability.
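To make this distinction concrete, the minimal sketch below (an illustration added for clarity, not something presented in the session) contrasts a toy probability table standing in for a language model with a deterministic calculation; the prompt, the probabilities, and the function names are all invented for the example.

```python
import random

# Toy stand-in for a language model: a hand-built table of next-token
# probabilities. Faced with the prompt "2 + 2 =", it samples a likely
# continuation rather than computing an answer. (Probabilities are invented.)
next_token_probs = {
    "2 + 2 =": {"4": 0.85, "5": 0.10, "22": 0.05},
}

def generate(prompt: str) -> str:
    """Sample the next token according to the stored probabilities."""
    distribution = next_token_probs[prompt]
    tokens = list(distribution.keys())
    weights = list(distribution.values())
    return random.choices(tokens, weights=weights)[0]

def calculate(a: int, b: int) -> int:
    """Deterministic, rule-based arithmetic: always returns the same result."""
    return a + b

if __name__ == "__main__":
    print("probabilistic model says:", generate("2 + 2 ="))  # usually "4", occasionally not
    print("rule-based logic says:", calculate(2, 2))          # always 4
```

A real language model learns its probabilities from vast amounts of text rather than a hand-written table, but the underlying distinction is the same: sampling a plausible continuation is not the same as executing a rule of arithmetic, which is why such systems can produce confident but incorrect answers to simple sums.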

The analysis also touches upon the need for clarity in regulations and law regarding the digital space. It is argued that there should be more clarity in regulations and laws governing the digital space, particularly in addressing cybercrimes and establishing punishment systems.

In conclusion, the analysis of perspectives on AI and its impact reveals a complex landscape. While some highlight the importance of accessible and distributed knowledge, others raise concerns about the risks associated with AI, such as knowledge slavery and the digital divide. The need for open discussions, addressing monopolies and misinformation, and clarity in regulations is also emphasized.

Isabelle Kumar

During the discussion, the speakers delved into various aspects of AI and its impact on our lives. They highlighted that AI is already transforming our lives and that its landscape is continuously changing, suggesting that we are only at the beginning of this technological advancement and that significant progress is expected in the future.

The crucial nature of the current stage of AI development was emphasized. The decisions being made now will deeply affect our collective futures. This insight underscores the need for careful consideration and strategic decision-making to ensure that AI is harnessed for the benefit of all.

Furthermore, the speakers stressed the need for equity in AI development, particularly in relation to developing nations. They discussed how developing nations can position themselves to participate fully in the AI revolution rather than being left marginalized. This calls for efforts to bridge the gap and provide equal opportunities for all countries to leverage AI for their progress and development.

Another important point of discussion was the potential impact of AI systems on existing inequalities and biases. The speakers stressed that it is essential to harness the potential of AI systems while ensuring that they do not exacerbate existing inequalities or create new forms of inequality. This highlights the need for responsible development and implementation practices that consider the social impact of AI technologies.

The speakers also explored the essential and controversial topic of AI governance. The discussion focused on questions and debates surrounding AI governance, such as who should be involved in decision-making processes and how to strike a balance between regulation and innovation. Establishing a multi-stakeholder and international framework for AI governance was identified as crucial to fostering responsible and ethical AI practices on a global scale.

The importance of upholding worker rights in the context of AI, specifically in the content moderation industry, was also highlighted. The speakers pointed out that workers in this industry often sign non-disclosure agreements (NDAs), which can lead to psychosocial impacts due to their inability to discuss their work with family and friends. The need to prioritize worker rights and ensure decent working conditions in AI-related industries emerged as an important ethical concern.

Lastly, the speakers discussed the unique challenges and varying stages of digital infrastructure in Africa in relation to AI regulation. As Africa consists of 55 countries, each with its own level of digital advancement, it was emphasized that there cannot be a one-size-fits-all approach to AI rules for the continent. This observation underscores the importance of tailoring AI regulations to the specific context and needs of each African country.

Overall, the discussion shed light on the transformative power of AI and the need for responsible, equitable, and inclusive approaches to its development and implementation. It highlighted the importance of considering the potential social, economic, and ethical implications of AI technologies, as well as the necessity of multi-stakeholder collaboration and international cooperation in governing AI.

Gabriela Ramos

Gabriela Ramos, Assistant Director General for Social and Human Sciences at UNESCO, has highlighted the unique mandate of UNESCO in relation to emerging technologies. She emphasises the need to view these technologies through ethical lenses, considering their potential impact on society. UNESCO has developed a recommendation on the ethics of artificial intelligence, which was adopted by 193 countries in 2021. The recommendation aims to ensure that AI technologies are aligned with human rights, human dignity, fairness, and inclusiveness.

Ramos expresses concern about the misuse and abuse of AI technologies, which could pose existential threats. Therefore, she emphasises the importance of governance and regulations for the development, usage, and potential pitfalls of AI technologies. The recommendation serves as a guideline for policymakers and stakeholders to navigate the ethical dimensions of AI.

In addition to ethical considerations, the recommendation also addresses the underrepresentation of women in the development of AI technologies. It specifically calls for affirmative action and investment in businesses that are led by women. Currently, women make up only 22% of professionals in the AI tech sector, and there are issues of recognition and discrimination against women in this field.

The recommendation acknowledges the transformative potential of technology in education and highlights the need for teachers to be trained to maximise its advantages. It recognises that technology is omnipresent and advancing rapidly, which poses challenges that need to be addressed in order to adapt effectively.

Countries around the world are concerned about staying competitive in terms of technology. During the initial phase of launching national AI strategies, the focus was mainly on technological competition. However, there is now a growing concern about the downsides of AI and the need to regulate its use. Governments are considering who should be in charge when something goes wrong with AI, what kind of liability regimes are needed, and what institutions should regulate AI. There is an ongoing debate on whether AI should be regulated by a regulatory institution, an institute, or a specific government body.

The recommendation also highlights the importance of introducing ethical guardrails in national AI strategies. Increasing emphasis is being placed on the ethical considerations surrounding the development and use of AI technologies. Effective institutions are seen as crucial for framing these technologies and ensuring they protect human rights and human dignity.

Importantly, the recommendation acknowledges the need for government investment and incentivisation in AI technologies. Governments are encouraged to invest in the competencies of their officials to better understand how these technologies work. The United States, for instance, has adopted various bills to organise and regulate the use of AI.

In conclusion, UNESCO’s recommendation on the ethics of artificial intelligence is a significant step towards promoting the responsible development and use of AI technologies. It emphasises the alignment of AI with human rights, dignity, fairness, and inclusiveness. The recommendation calls for governance and regulation, for investment in AI technologies, and for addressing the underrepresentation of women in the field. It highlights the importance of training teachers to maximise the advantages of technology in education and addresses the challenges posed by the rapid advancement of technology. It also recognises the need to regulate AI and to protect human rights and dignity through effective institutions. Overall, it provides a comprehensive framework for navigating ethical considerations in the development, implementation, and regulation of AI technologies.

Audience

In the discussions surrounding the evolution of Artificial Intelligence (AI), the participants have delved into various aspects. One area of interest is the historical development of AI and how it has progressed over time. Understanding the origins and transformation of AI into the technology we witness today is a subject of curiosity.

Another focal point is the impact of AI development on data requirements. A 2019 article from the Harvard Business Review predicts that future advancements in AI will rely less on extensive datasets and place more emphasis on top-down reasoning. This implies that AI algorithms could become more proficient in making complex decisions based on high-level knowledge and reasoning, thereby reducing the need for vast amounts of data.

However, alongside the potential benefits of AI, concerns have been raised about the potential de-skilling of workers, particularly in developing countries. Many nations are actively promoting job opportunities and skills development in tech-related fields to keep pace with the demands of the digital economy. Meanwhile, developed countries have implemented policies to re-skill their workforce, acknowledging the need to adapt to the changing landscape of AI and technology.

The discussions underscore the importance of striking a balance between AI advancements and the need for quality education and infrastructure development. While AI has the potential to reshape industries and enhance efficiency, it is crucial to ensure that the workforce is equipped with the necessary skills to thrive in an AI-driven world. This aligns with the objectives of Sustainable Development Goal 4: Quality Education and Sustainable Development Goal 9: Industry, Innovation, and Infrastructure.

In conclusion, the discussions have examined the evolution of AI by exploring its historical development, impact on data requirements, and concerns about worker de-skilling. They highlight the significance of fostering quality education and infrastructure development to harness the benefits of AI while preparing the workforce for the accompanying changes.

Pedro Manuel Moreno

Artificial Intelligence (AI) has the potential to revolutionise industries, enhance efficiency, and support innovation across various sectors. It offers innovative pathways to address global challenges such as poverty, inequality, climate change, and resource management. The applications of AI range from advanced data analytics and automation to augmenting human capabilities in healthcare, agriculture, and education. This highlights the positive impact AI can have on society and its potential to drive progress towards the Sustainable Development Goals (SDGs), particularly SDG 9: Industry, Innovation, and Infrastructure.

However, the pervasive integration of AI into our lives also raises critical questions and concerns. Privacy, data security, and the ethical use of technology become paramount as AI becomes more widespread. The unchecked expansion of AI technologies can potentially compromise personal privacy, leading to breaches in data security. There is a growing need to address these ethical considerations to ensure that AI is used responsibly and for the benefit of all.

Moreover, there are concerns that AI may disrupt labour markets, displacing traditional jobs and creating new forms of inequality. The potential impact on employment raises questions about how society will adapt to these changes and ensure decent work and economic growth, as stated in SDG 8: Decent Work and Economic Growth. Additionally, the dominance of countries such as the United States, China, and the United Kingdom in AI research, patent ownership, and data control exacerbates global inequalities and deepens digital divides. This further highlights the need to address these disparities and foster inclusivity to mitigate the potential adverse effects of AI development.

Furthermore, the unchecked expansion of AI technologies also has potential environmental implications. The environmental impacts of AI are yet to be fully understood, and there is a need for careful consideration of the potential consequences. It is crucial to ensure that the development of AI is aligned with SDG 13: Climate Action and does not contribute to further environmental harm.

In conclusion, while AI holds tremendous potential to revolutionise industries, enhance efficiency, and tackle global challenges, its integration must be accompanied by careful considerations. The ethical use of technology, privacy, data security, and environmental impacts need to be addressed to ensure the responsible and inclusive development of AI. Involving developing countries in discussions about AI is vital to foster inclusivity and avoid excluding them from shaping the future. By addressing the challenges and considering the potential risks, we can harness the full potential of AI while minimising its negative impacts.

Paul-Olivier Dehaye

The analysis of the arguments from the speakers reveals several key points about artificial intelligence (AI). Overall, AI is seen as a powerful tool that can be applied in both positive and negative ways. It has the potential to revolutionise reasoning and knowledge and can be compared to a cognitive architecture for the world: just as the internet allows us to fetch and disseminate information, AI can pull and push fragments of reasoning. This capacity to manipulate cognitive elements such as reasoning suggests that AI can be a transformative force in various aspects of society.

One of the key concerns raised by the speakers is the need for inclusive entry into the new age of AI. There is a risk that many people may be de-skilled by new technologies, highlighting the urgency to act in order to ensure that the benefits of AI are accessible to all. This suggests the importance of adopting a collective perspective and taking proactive measures to protect and empower individuals in the face of AI advancements.

The speakers emphasise the need to protect data about social relations and the way people trust each other. They propose adopting a collective approach to safeguarding this information and creating systems that can be controlled by individuals themselves. By doing so, the speakers argue that it is possible to foster a culture of trust and ensure the responsible use of data in AI systems.

The analysis also highlights the significance of data exclusivity in AI. The speakers argue that small groups and populations can collect and curate data to build intelligence. This suggests that the inclusivity of AI can be enhanced by allowing broader access to curated data, rather than relying solely on large entities and corporations.

In addition, the speakers propose the concept of a circular economy of intelligence. This entails leveraging expertly curated data from smaller groups and populations to drive local and focused intelligence. By encouraging and supporting the development of such localised intelligence, AI can contribute to the goal of decent work and economic growth.

The analysis also emphasises the need for computing power and data processing to be made accessible to developing countries. The speakers argue for the establishment of a transparent and inclusive platform where individuals from around the world can process data according to rules and traceability. This would enable greater participation from developing countries and promote a fair distribution of AI capabilities.

Technical knowledge in data protection and management is highlighted as essential in order to avoid being locked into approaches presented as the only technically feasible way of doing things. Emphasising the importance of technical expertise, the speakers suggest that a comprehensive understanding of data protection and management is crucial for responsible and effective AI implementation.

Legal expertise is also emphasised as necessary for protecting datasets and preventing the capture of scientific outputs by larger entities. The speakers mention the French initiative called ‘Usage Rights’, which focuses on preserving scientific outputs and preventing their exploitation by big corporations. This highlights the need for legal frameworks that can safeguard the interests of individuals and promote peace, justice, and strong institutions in the context of AI development.

Transparency is a recurring theme throughout the analysis. The speakers argue that AI systems should be more transparent, with user participation and engagement in the design process. This can be facilitated by allowing users to export their data and observe how they interact with AI systems. Transparency in sourcing and methodologies used in AI systems is also underscored as a critical factor in building trust and accountability.

The analysis also draws attention to the potential of AI to advance fields beyond its own domain. The blend of human and machine intelligence observed in advanced mathematics is seen as the new core of intellectual endeavours. This suggests that the integration of AI and human intelligence can have far-reaching implications, particularly in the fields of education, industry, innovation, and infrastructure.

In conclusion, the analysis of the speakers’ arguments sheds light on various aspects of AI and its impact on society. The key takeaways include the transformative power of AI, the need for inclusive entry into the AI age, the importance of protecting data and adopting a collective perspective, the significant role of data exclusivity and circular economy of intelligence, the importance of accessible computing power and data processing, the need for technical and legal expertise, the value of transparency in AI systems, and the potential of AI to influence multiple fields.

Speakers’ statistics

Speaker                Speech speed (words per minute)   Speech length (words)   Speech time (secs)
Audience               130                               379                     174
Gabriela Ramos         185                               1571                    509
Isabelle Kumar         167                               2805                    1008
Jovan Kurbalija        164                               2058                    755
Paul-Olivier Dehaye    174                               2046                    704
Pedro Manuel Moreno    136                               641                     283
Teki Akuetteh          133                               1636                    740
Uma Rani               172                               2537                    883