Policy Network on Artificial Intelligence | IGF 2023
Event report
Speakers and Moderators
Speakers:
- Speaker 1, Affiliation 1
- Speaker 2, Affiliation 2
Moderators:
- Moderator 1, Affiliation 1
- Moderator 2, Affiliation 2
Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.
Session report
Sarayu Natarajan
Generative AI, by making content generation easy and cheap, has enabled the widespread production and dissemination of misinformation and disinformation through the internet and digital platforms, with damaging effects on society. The rule of law plays a crucial role in curbing the spread of false information, and concrete legal protections are necessary to address the issue effectively.
Sarayu Natarajan advocates for a context-specific and rule of law approach in dealing with the issue of misinformation and disinformation. This suggests that addressing the problem requires understanding the specific context in which false information is generated and disseminated and implementing legal measures accordingly. This approach acknowledges the importance of tailored solutions based on a solid legal framework.
The labour-intensive task of AI labelling, crucial for the functioning of generative AI, is often outsourced to workers in the global south. These workers primarily label data based on categories defined by Western companies, which can introduce bias and reinforce existing power imbalances. This highlights the need for greater inclusivity and diversity in AI development processes to ensure fair representation and avoid perpetuating inequalities.
Efforts are being made to develop large language models in non-mainstream languages, allowing a wider range of communities to benefit from generative AI. Smaller organizations that work within specific communities are actively involved in creating these language models. This represents a positive step towards inclusivity and accessibility in the field of AI, particularly in underrepresented communities and non-mainstream languages.
Mutual understanding and engagement between AI technology and policy domains are crucial for effective governance. It is essential for these two disciplines to communicate with each other in a meaningful way. Creating forums that facilitate non-judgmental discussions and acknowledge the diverse empirical starting points is critical. This allows for a more integrated and collaborative approach towards harnessing the benefits of AI technology while addressing its ethical and societal implications.
While AI developments may lead to job losses, particularly in the global north, they also have the potential to generate new types of jobs. Careful observation of the impact of AI on employment is necessary to ensure just working conditions for workers worldwide. It is important to consider the potential benefits and challenges associated with AI technology and strive for humane conditions for workers in different parts of the world.
In conclusion, the advent of generative AI has made it easier and cheaper to produce and disseminate misinformation and disinformation, with negative effects on society. The rule of law, through proper legal protections, plays a significant role in curbing the spread of false information, and the context-specific, rule-of-law approach advocated by Sarayu Natarajan is key to addressing the issue effectively. Inclusivity, diversity, and mutual understanding between the AI technology and policy domains are crucial considerations in the development and governance of AI. It is also essential to monitor the impact of AI on employment closely and to ensure fair working conditions for all.
Shamira Ahmed
The analysis focuses on several key themes related to AI and its impact on various aspects, including the environment, data governance, geopolitical power dynamics, and historical injustices. It begins by highlighting the importance of data governance in the intersection of AI and the environment. This aspect is considered to be quite broad and requires attention and effective management.
Moving on, the analysis advocates for a decolonial-informed approach to address power imbalances and historical injustices in AI. It emphasizes the need to acknowledge and rectify historical injustices that have shaped the global power dynamics related to AI. By adopting a decolonial approach, it is believed that these injustices can be addressed and a more equitable and just AI landscape can be achieved.
Furthermore, the analysis highlights the concept of a just green digital transition, which is essential for achieving a sustainable and equitable future. This transition leverages the power of AI to drive responsible practices for the environment while also promoting economic growth and social inclusion. It emphasizes the need for a balanced approach that takes into account the needs of the environment and all stakeholders involved.
In addition, the analysis underscores the importance of addressing historical injustices and promoting interoperable AI governance innovations. It emphasizes the significance of a representative multi-stakeholder process to ensure that the materiality of AI is properly addressed and that all voices are heard. By doing so, it aims to create an AI governance framework that is inclusive, fair, and capable of addressing the challenges associated with historical injustices.
Overall, the analysis provides important insights into the complex relationship between AI and various domains. It highlights the need to consider historical injustices, power imbalances, and environmental concerns in the development and deployment of AI technologies. The conclusions drawn from this analysis serve as a call to action for policymakers, stakeholders, and researchers to work towards a more responsible, equitable, and sustainable AI landscape.
Audience
The panel discussion explored several crucial aspects of AI technology and its societal impact. One notable challenge highlighted was the difficulty in capacity building due to the rapidly changing nature of AI. It was observed that AI is more of an empirical science than an engineering product, meaning that researchers and designers often don’t know what to expect due to continual testing and experimentation. Misinformation and the abundance of information sources further exacerbate the challenges in capacity building.
The importance of providing education to a diverse range of demographics, from school children to the elderly, was also emphasised. It was recognised that ensuring high-quality education in the field of AI is vital in equipping individuals with the knowledge and skills required to navigate the rapidly evolving technological landscape. This education should be accessible to all, regardless of their age or background.
Additionally, the panel discussion shed light on the blurring boundaries between regulatory development and technical development in AI and other digital technologies. It was noted that the political domain of regulatory development and the technical domain of standards development are increasingly overlapping in the field of AI. This convergence presents unique challenges that necessitate a thoughtful approach to ensure both regulatory compliance and technical excellence.
Furthermore, the role of standards in executing regulations in the context of AI was discussed. The panel emphasised that standards are becoming an essential tool for implementing and enforcing regulations. Developing and adhering to standards can help address challenges such as interoperability, transparency, and accountability in AI systems.
The need for capacity building was also emphasised, allowing a broader stakeholder community to engage in the technical aspects of AI, which have become integral to major policy tools. The panel acknowledged that empowering a diverse and inclusive group of stakeholders, including policymakers, experts, civil society representatives, academics, and industry professionals, is crucial for the development and governance of AI technology.
The process of contributing to AI training and education through UNESCO was discussed, highlighting the involvement of a UNESCO member who distributes AI research materials and textbooks to universities, particularly in developing countries. This partnership and knowledge-sharing initiative aim to bridge the global education gap and ensure that AI education is accessible to all.
The assessment of AI systems was deemed crucial, with recognition that assessing non-technical aspects is as important as evaluating technical performance. This includes considering the wider societal impact, such as potential consequences on workers and the categorisation of people. The panel emphasised the need for assessment processes to go beyond technical measures and include potential unintended consequences and ethical considerations.
Furthermore, it was acknowledged that the assessment of AI systems should extend beyond their current context and consider performance in future or “unbuilt” scenarios. This reflects the need to anticipate and mitigate potential negative outcomes resulting from the deployment of AI technology and to ensure its responsible development and use.
In conclusion, the panel discussion provided valuable insights into the challenges and opportunities associated with AI technology. The rapidly changing nature of AI necessitates continuous capacity building, particularly in the education sector, to equip individuals with the necessary skills and knowledge. Moreover, the convergence of regulatory and technical development in AI requires a thoughtful and inclusive approach, with standards playing a critical role in regulatory compliance. The assessment of AI systems was identified as a key area, underscoring the importance of considering non-technical aspects and potential societal impacts. Overall, the discussion emphasised the need for responsible development, governance, and stakeholder engagement to harness the potential of AI technology while mitigating its risks.
Nobuo Nishigata
The analysis reveals several key points regarding AI governance. Firstly, it emphasizes the importance of striking a balance between regulation and innovation in AI initiatives. This suggests that while regulations are necessary to address concerns and ensure ethical practices, there should also be room for innovation and advancement in the field.
Furthermore, the report highlights the need for AI policy development to take into consideration perspectives and experiences from the Global South. This acknowledges the diverse challenges and opportunities that different regions face in relation to AI adoption and governance.
The analysis also discusses the dual nature of AI technology, presenting both risks and opportunities. It underscores the significance of discussing uncertainties and potential risks associated with AI, alongside the numerous opportunities it presents. Additionally, it highlights the potential of AI to significantly contribute to addressing economic and labor issues, as evidenced by Japan considering AI as a solution to its declining labour force and sustaining its economy.
Another noteworthy point raised in the analysis is the recommendation to view AI governance through the Global South lens. This suggests that the perspectives and experiences of developing nations should be taken into account to ensure a more inclusive and equitable approach to AI governance.
The analysis also provides insights into the ongoing Hiroshima process focused on generative AI. It highlights that discussions within the G7 delegation task force are centred around a code of conduct from the private sector. Notably, the report suggests support for this approach, emphasising the importance of a code of conduct in addressing concerns such as misinformation and disinformation.
Flexibility and adaptability in global AI governance are advocated for in the analysis. It argues that AI is a rapidly evolving field, necessitating governance approaches that can accommodate changing circumstances and allow governments to tailor their strategies according to their specific needs.
Collaboration and coordination between organisations and governments are seen as crucial in AI policy-making, skills development, and creating AI ecosystems. The analysis suggests that international collaborations are needed to foster beneficial AI ecosystems and capacity building.
The importance of respecting human rights, ensuring safety, and fostering accountability and explainability in AI systems are also highlighted. These aspects are considered fundamental in mitigating potential harms and ensuring that AI technologies are used responsibly and ethically.
In addition to these main points, the analysis touches upon the significance of education and harmonisation. It suggests that education plays a key role in the AI governance discourse, and harmonisation is seen as important for the future.
Overall, the analysis brings attention to the multifaceted nature of AI governance, advocating for a balanced approach that takes into account various perspectives, fosters innovation, and ensures ethical and responsible practices. It underscores the need for inclusive and collaborative efforts to create effective AI policies and systems that can address the challenges and harness the opportunities presented by AI technology.
Jose
The analysis of the speakers’ points highlights several important issues. Representatives from the Global South stress the importance of gaining a deeper understanding of movements and policies within their regions. This is crucial for fostering an inclusive approach to technology development and governance.
A significant concern raised in the analysis is the intricate link between labour issues and advancements in the tech industry. In Brazil, for instance, there has been a rise in deaths among drivers on delivery platforms, attributed to the pressure exerted by new platforms imposing new delivery-time demands. This highlights the need to address the adverse effects of tech advancements on workers' well-being.
The impact of the tech industry on sustainability is another topic of debate in the analysis. There are concerns about the interest shown by tech leaders in Bolivia’s minerals, particularly lithium, following political instability. This raises questions about responsible consumption and production practices within the tech industry and the environmental consequences of resource extraction.
The use of biometric systems for surveillance purposes comes under scrutiny as well. In Brazil, the analysis reveals that the criminal system’s structural racism is being automated and accelerated by these technologies. This raises concerns about the potential for discriminatory practices and human rights violations resulting from the use of biometric surveillance.
There is a notable push for banning certain systems in Brazil, as civil society advocates for regulations to protect individuals’ rights and privacy in the face of advancing technology. This highlights the need for robust governance and regulation measures in the tech industry to prevent harmful impacts.
The global governance of AI is also a point of concern. The analysis highlights the potential risk of a race to the bottom due to geopolitical competition and various countries pushing their narratives. This emphasizes the importance of global collaboration and cooperation to ensure ethical and responsible use of AI technologies.
Countries from the global south argue for the need to actively participate and push forward their interests in the governance of AI technologies. Forums like BRICS and G20 are suggested as platforms to voice these concerns and advocate for more inclusive decision-making processes.
The analysis also sheds light on the issue of inequality in the global governance of technology. It is observed that certain groups seem to matter more than others, indicating the presence of power imbalances in decision-making processes. This highlights the need for addressing these inequalities and ensuring that all voices are heard and considered in the governance of technology.
Furthermore, the extraction of resources for technology development is shown to have significant negative impacts on indigenous groups. The example of the Yanomami people in the Brazilian Amazon, who have suffered from the activities of illegal gold miners, underscores the need for responsible and sustainable practices in resource extraction to protect the rights and well-being of indigenous populations.
Lastly, tech workers from the global south advocate for better working conditions and a greater say in the algorithms and decisions made by tech companies. This emphasizes the need for empowering workers and ensuring their rights are protected in the rapidly evolving tech industry.
In conclusion, the analysis of the speakers’ points highlights a range of issues in the intersection of technology, governance, and the impacts on various stakeholders. It underscores the need for deeper understanding, robust regulation, and inclusive decision-making processes to tackle challenges and ensure that technology benefits all.
Moderator – Prateek
The Policy Network on Artificial Intelligence (P&AI) is a newly established policy network that focuses on matters related to AI and data governance. It originated from discussions held at IGF 2022 in Addis Ababa. The P&AI aims to tackle policy issues relating to AI and data governance, and has recently released its first report.
The first report produced by the P&AI is a collaborative effort and sets out to examine various aspects of AI governance. It specifically focuses on the AI lifecycle for gender and race inclusion and outlines strategies for governing AI to ensure a just twin transition. The report takes into account different regulatory initiatives on artificial intelligence from various regions, including those from the Global South.
One of the noteworthy aspects of the P&AI is its working spirit and commitment to a multi-stakeholder approach. The working group of P&AI was formed in the true spirit of multi-stakeholderism at the IGF, and they collaborated closely to draft this first report. This approach ensures diverse perspectives and expertise are considered in shaping the policies and governance frameworks related to AI.
Prateek, interested in the connection between AI governance and internet governance, asked Professor Xing Li to compare the two domains in terms of interoperability and to explain the implications of internet governance for AI governance.
During discussions, Jose highlighted the need for a deeper understanding of local challenges faced in the Global South in relation to AI. This includes issues concerning labor, such as the impacts of the tech industry on workers, as well as concerns surrounding biometric surveillance and race-related issues. Jose called for more extensive debates on sustainability and the potential risks associated with over-reliance on technological solutions. Additionally, Jose stressed the underrepresentation of the Global South in AI discussions and emphasized the importance of addressing their specific challenges.
In the realm of AI training and education, Prateek mentioned UNESCO’s interest in expanding its initiatives in this area. This focus on AI education aligns with SDG 4: Quality Education, and UNESCO aims to contribute to this goal by providing enhanced training programmes in AI.
In a positive gesture of collaboration and sharing information, Prateek offered to connect with an audience member and provide relevant information about UNESCO’s education work. This willingness to offer support and share knowledge highlights the importance of partnerships and collaboration in achieving the goals set forth by the SDGs.
In conclusion, the Policy Network on Artificial Intelligence (P&AI) aims to address AI and data governance matters. Its first report covers various aspects of AI governance, including gender and race inclusion and a just twin transition, and its multi-stakeholder approach ensures diverse perspectives are considered. Discussions highlighted the need to understand local challenges in the Global South, the significance of AI education, and the connection between AI governance and internet governance. Collaboration and information sharing were also observed, reflecting the importance of partnerships in achieving the SDGs.
Maikki Sipinen
The Policy Network on Artificial Intelligence (P&AI) is a relatively new initiative that focuses on addressing policy matters related to AI and data governance. It emerged from discussions held at the IGF 2022 meeting in Addis Ababa, where the importance of these topics was emphasised. The P&AI report, created through the dedication of numerous individuals, including the drafting team leaders, shows how IGF meetings can act as catalysts for new initiatives like P&AI.
One of the key arguments put forward in the report is the need to introduce AI and data governance topics in educational institutions. The reasoning behind this is to establish the knowledge and skills required to navigate the intricacies of AI among both citizens and the labor force. The report points to the success of the Finnish AI strategy, highlighting how it managed to train over 2% of the Finnish population in the basics of AI within a year. This serves as strong evidence for the feasibility and impact of introducing AI education in schools and universities.
Another argument highlighted in the report involves the importance of capacity building for civil servants and policymakers in the context of AI governance. The report suggests that this aspect deserves greater focus and attention within the broader AI governance discussions. By enhancing the knowledge and understanding of those responsible for making policy decisions, there is an opportunity to shape effective and responsible AI governance frameworks.
Diversity and inclusion also feature prominently in the report’s arguments. The emphasis is on the need for different types of AI expertise to work collaboratively to ensure inclusive and fair global AI governance. By bringing together individuals from diverse backgrounds, experiences, and perspectives, the report suggests that more comprehensive and equitable approaches to AI governance can be established.
Additionally, the report consistently underscores the significance of capacity building throughout all aspects of AI and data governance. It is viewed as intrinsically linked and indispensable for the successful development and implementation of responsible AI policies and practices. The integration of capacity building recommendations in various sections of the report further reinforces the vital role it plays in shaping AI governance.
In conclusion, the P&AI report serves as a valuable resource in highlighting the importance of policy discussions on AI and data governance. It emphasises the need for AI education in educational institutions, capacity building for civil servants and policymakers, and the inclusion of diverse perspectives in AI governance discussions. These recommendations contribute to the broader goal of establishing responsible and fair global AI governance frameworks.
Owen Larter
The analysis highlights several noteworthy points about responsible AI development. Microsoft is committed to developing AI in a sustainable, inclusive, and globally governed manner. This approach is aligned with SDG 9 (Industry, Innovation and Infrastructure), SDG 10 (Reduced Inequalities), and SDG 17 (Partnerships for the Goals). Microsoft has established a Responsible AI Standard to guide their AI initiatives, demonstrating their commitment to ethical practices.
Owen, another speaker in the analysis, emphasises the importance of transparency, fairness, and inclusivity in AI development. He advocates for involving diverse representation in technology design and implementation. To this end, Microsoft has established Responsible AI Fellowships, which aim to promote diversity in tech teams and foster collaboration with individuals from various backgrounds. The focus on inclusivity and diversity helps to ensure that AI systems are fair and considerate of different perspectives and needs.
Additionally, open-source AI development is highlighted as essential for understanding and safely using AI technology. Open-source platforms enable the broad distribution of AI benefits, fostering innovation and making the technology accessible to a wider audience. Microsoft, through its subsidiary GitHub, is a significant contributor to the open-source community. By embodying an open-source ethos, they promote collaboration and knowledge sharing, contributing to the responsible development and use of AI.
However, it is crucial to strike a balance between openness and safety/security in AI development. Concerns exist about the trade-off between making advanced AI models available through open-source platforms versus ensuring the safety and security of these models. The analysis suggests a middle-path approach, promoting accessibility to AI technology without releasing sensitive model weights, thereby safeguarding against potential misuse.
Furthermore, the need for a globally coherent framework for AI governance is emphasised. The advancement of AI technology necessitates establishing robust regulations to ensure its responsible and ethical use. The conversation around global governance has made considerable progress, and the G7 code of conduct, under Japanese leadership, plays a crucial role in shaping the future of AI governance.
Standards setting is proposed as an integral part of the future governance framework. Establishing standards is essential for creating a cohesive global framework that promotes responsible AI development. The International Civil Aviation Organization (ICAO) is highlighted as a potential model, demonstrating the effective implementation of standards in a complex and globally interconnected sector.
Understanding and reaching consensus on the risks associated with AI is also deemed critical. The analysis draws attention to the successful efforts of the Intergovernmental Panel on Climate Change in advancing understanding of risks related to climate change. Similarly, efforts should be made to comprehensively evaluate and address the risks associated with AI, facilitating informed decision-making and effective risk mitigation strategies.
Investment in AI infrastructure is identified as crucial for promoting the growth and development of AI capabilities. Proposals exist for the creation of public AI resources, such as the National AI Research Resource, to foster innovation and ensure equitable access to AI technology.
Evaluation is recognised as an important aspect of AI development. Currently, there is a lack of clarity in evaluating AI technologies. Developing robust evaluation frameworks is crucial for assessing the effectiveness, reliability, and ethical implications of AI systems, enabling informed decision-making and responsible deployment.
Furthermore, the analysis highlights the importance of social infrastructure development for AI. This entails the establishment of globally representative discussions to track AI technology progress and ensure that the benefits of AI are shared equitably among different regions and communities.
The analysis also underscores the significance of capacity building and actions in driving AI development forward. Concrete measures should be taken to bridge the gap between technical and non-technical stakeholders, enabling a comprehensive understanding of the socio-technical challenges associated with AI.
In conclusion, responsible AI development requires a multi-faceted approach. It involves developing AI in a sustainable, inclusive, and globally governed manner, promoting transparency and fairness, and striking a balance between openness and safety/security. It also necessitates the establishment of a globally coherent framework for AI governance, understanding and addressing the risks associated with AI, investing in AI infrastructure, conducting comprehensive evaluations, and developing social infrastructure. Capacity building and bridging the gap between technical and non-technical stakeholders are crucial for addressing the socio-technical challenges posed by AI. By embracing these principles, stakeholders can ensure the responsible and ethical development, deployment, and use of AI technology.
Xing Li
The analysis explores various aspects of AI governance, regulations for generative AI, the impact of generative AI on the global south, and the need for new educational systems in the AI age. In terms of AI governance, the study suggests that the field can learn from internet governance, with organisations such as the IETF for technical interoperability and ICANN for names and number assignments. The shift from a US-centric model to a global model in internet governance is viewed positively and can serve as an example for AI governance.
The discussion on generative AI regulations focuses on concerns that early regulations may hinder innovation. It is believed that allowing academics and technical groups the space to explore and experiment is crucial for advancing generative AI. Striking a balance between regulation and fostering innovation is of utmost importance.
The analysis also highlights the opportunities and challenges that generative AI presents for the global south. Generative AI, built on algorithms, computing power, and data, has the potential to create new opportunities for development, but it also poses challenges that must be addressed to fully leverage its benefits.
Regarding education, the study emphasises the need for new educational systems that can adapt to the AI age. Outdated educational systems must be revamped to meet the demands of the digital era. Four key educational factors are identified as important in the AI age: critical thinking, fact-based reasoning, logical thinking, and global collaboration. These skills are essential for individuals to thrive in an AI-driven world.
Finally, the analysis supports the establishment of a global AI-related education system. This proposal, advocated by Stanford University Professor Fei-Fei Li, is seen as a significant step akin to the creation of modern universities hundreds of years ago. It aims to equip individuals with the necessary knowledge and skills to navigate the complexities and opportunities presented by AI.
In conclusion, the analysis highlights the importance of drawing lessons from internet governance, balancing regulation with innovation in generative AI, addressing the opportunities and challenges of generative AI in the global south, and reimagining education systems for the AI age. These insights offer valuable considerations for policymakers and stakeholders shaping the future of AI governance and its impact on society.
Jean Francois ODJEBA BONBHEL
The analysis provides different perspectives on the development and implementation of artificial intelligence (AI). One viewpoint emphasizes the need to balance the benefits and risks of AI. It argues for the importance of considering and mitigating potential risks while maximizing the advantages offered by AI.
Another perspective highlights the significance of accountability in AI control. It stresses the need to have mechanisms in place that hold AI systems accountable for their actions, thereby preventing misuse and unethical behavior.
Education is also emphasized as a key aspect of AI development and understanding. The establishment of a specialized AI school in Congo, operating at all educational levels, is cited as evidence of the importance placed on educating individuals about AI. This educational focus aims to provide people with a deeper understanding of AI and equip them with the necessary skills to navigate the rapidly evolving technological landscape.
The analysis suggests that AI development should be approached with careful consideration of risks and benefits, control mechanisms, and education. By adopting a comprehensive approach that addresses these elements, AI can be developed and implemented responsibly and sustainably.
A notable observation from the analysis is the emphasis on AI education for children. A program specifically designed for children aged 6 to 17 has been implemented to develop their cognitive skills with technology and AI. The program’s focus extends beyond making children technology experts; it aims to equip them with the necessary understanding and skills to thrive in a future dominated by technology.
Furthermore, one speaker raises the question of whether the world being created aligns with the aspirations for future generations. The proposed solution involves providing options, solutions, and education on technology to empower young people and prepare them for the technologically advanced world they will inhabit.
In conclusion, the analysis underscores the importance of striking a balance between the benefits and risks of AI, ensuring accountability in AI control, and promoting education for a better understanding and access to AI innovations. By considering these facets, the responsible and empowering development and implementation of AI can be achieved to navigate the evolving technological landscape effectively.
Speakers
Speech speed
0 words per minute
Speech length
words
Speech time
0 secs
Report
The panel discussion explored several crucial aspects of AI technology and its societal impact. One notable challenge highlighted was the difficulty in capacity building due to the rapidly changing nature of AI. It was observed that AI is more of an empirical science than an engineering product, meaning that researchers and designers often don’t know what to expect due to continual testing and experimentation.
Misinformation and the abundance of information sources further exacerbate the challenges in capacity building.
The importance of providing education to a diverse range of demographics, from school children to the elderly, was also emphasised. It was recognised that ensuring high-quality education in the field of AI is vital in equipping individuals with the knowledge and skills required to navigate the rapidly evolving technological landscape.
This education should be accessible to all, regardless of their age or background.
Additionally, the panel discussion shed light on the blurring boundaries between regulatory development and technical development in AI and other digital technologies. It was noted that the political domain of regulatory development and the technical domain of standards development are increasingly overlapping in the field of AI.
This convergence presents unique challenges that necessitate a thoughtful approach to ensure both regulatory compliance and technical excellence.
Furthermore, the role of standards in executing regulations in the context of AI was discussed. The panel emphasised that standards are becoming an essential tool for implementing and enforcing regulations.
Developing and adhering to standards can help address challenges such as interoperability, transparency, and accountability in AI systems.
The need for capacity building was also emphasised, allowing a broader stakeholder community to engage in the technical aspects of AI, which have become integral to major policy tools.
The panel acknowledged that empowering a diverse and inclusive group of stakeholders, including policymakers, experts, civil society representatives, academics, and industry professionals, is crucial for the development and governance of AI technology.
The process of contributing to AI training and education through UNESCO was discussed, highlighting the involvement of a UNESCO member who distributes AI research materials and textbooks to universities, particularly in developing countries.
This partnership and knowledge-sharing initiative aim to bridge the global education gap and ensure that AI education is accessible to all.
The assessment of AI systems was deemed crucial, with recognition that assessing non-technical aspects is as important as evaluating technical performance.
This includes considering the wider societal impact, such as potential consequences on workers and the categorisation of people. The panel emphasised the need for assessment processes to go beyond technical measures and include potential unintended consequences and ethical considerations.
Furthermore, it was acknowledged that the assessment of AI systems should extend beyond their current context and consider performance in future or “unbuilt” scenarios.
This reflects the need to anticipate and mitigate potential negative outcomes resulting from the deployment of AI technology and to ensure its responsible development and use.
In conclusion, the panel discussion provided valuable insights into the challenges and opportunities associated with AI technology.
The rapidly changing nature of AI necessitates continuous capacity building, particularly in the education sector, to equip individuals with the necessary skills and knowledge. Moreover, the convergence of regulatory and technical development in AI requires a thoughtful and inclusive approach, with standards playing a critical role in regulatory compliance.
The assessment of AI systems was identified as a key area, underscoring the importance of considering non-technical aspects and potential societal impacts. Overall, the discussion emphasised the need for responsible development, governance, and stakeholder engagement to harness the potential of AI technology while mitigating its risks.
Report
The analysis of the speakers’ points highlights several important issues. Representatives from the Global South stress the importance of gaining a deeper understanding of movements and policies within their regions. This is crucial for fostering an inclusive approach to technology development and governance.
A significant concern raised in the analysis is the intricate link between labour issues and advancements in the tech industry.
In Brazil, for instance, there has been a rise in deaths among drivers on delivery platforms, which is attributed to the pressure exerted by new platforms demanding different delivery times. This highlights the need to address the adverse effects of tech advancements on workers’ well-being.
The impact of the tech industry on sustainability is another topic of debate in the analysis.
There are concerns about the interest shown by tech leaders in Bolivia’s minerals, particularly lithium, following political instability. This raises questions about responsible consumption and production practices within the tech industry and the environmental consequences of resource extraction.
The use of biometric systems for surveillance purposes comes under scrutiny as well.
In Brazil, the analysis reveals that the criminal system’s structural racism is being automated and accelerated by these technologies. This raises concerns about the potential for discriminatory practices and human rights violations resulting from the use of biometric surveillance.
There is a notable push for banning certain systems in Brazil, as civil society advocates for regulations to protect individuals’ rights and privacy in the face of advancing technology.
This highlights the need for robust governance and regulation measures in the tech industry to prevent harmful impacts.
The global governance of AI is also a point of concern. The analysis highlights the potential risk of a race to the bottom due to geopolitical competition and various countries pushing their narratives.
This emphasizes the importance of global collaboration and cooperation to ensure ethical and responsible use of AI technologies.
Countries from the global south argue for the need to actively participate and push forward their interests in the governance of AI technologies.
Forums like BRICS and G20 are suggested as platforms to voice these concerns and advocate for more inclusive decision-making processes.
The analysis also sheds light on the issue of inequality in the global governance of technology. It is observed that certain groups seem to matter more than others, indicating the presence of power imbalances in decision-making processes.
This highlights the need for addressing these inequalities and ensuring that all voices are heard and considered in the governance of technology.
Furthermore, the extraction of resources for technology development is shown to have significant negative impacts on indigenous groups.
The example of the Yanomami people in the Brazilian Amazon suffering due to the activities of illegal gold miners underscores the need for responsible and sustainable practices in resource extraction to protect the rights and well-being of indigenous populations.
Lastly, tech workers from the global south advocate for better working conditions and a greater say in the algorithms and decisions made by tech companies.
This emphasizes the need for empowering workers and ensuring their rights are protected in the rapidly evolving tech industry.
In conclusion, the analysis of the speakers’ points highlights a range of issues in the intersection of technology, governance, and the impacts on various stakeholders.
It underscores the need for deeper understanding, robust regulation, and inclusive decision-making processes to tackle challenges and ensure that technology benefits all.
Report
The Policy Network on Artificial Intelligence (PNAI) is a relatively new initiative that focuses on policy matters related to AI and data governance. It emerged from discussions held at the IGF 2022 meeting in Addis Ababa last year, where the importance of these topics was emphasised.
The PNAI report, produced through the dedication of numerous individuals, including the drafting team leaders, underscores the significance of the IGF meetings as catalysts for new initiatives like PNAI.
One of the key arguments put forward in the report is the need to introduce AI and data governance topics in educational institutions.
The reasoning behind this is to establish the knowledge and skills required to navigate the intricacies of AI among both citizens and the labor force. The report points to the success of the Finnish AI strategy, highlighting how it managed to train over 2% of the Finnish population in the basics of AI within a year.
This serves as strong evidence for the feasibility and impact of introducing AI education in schools and universities.
Another argument highlighted in the report involves the importance of capacity building for civil servants and policymakers in the context of AI governance.
The report suggests that this aspect deserves greater focus and attention within the broader AI governance discussions. By enhancing the knowledge and understanding of those responsible for making policy decisions, there is an opportunity to shape effective and responsible AI governance frameworks.
Diversity and inclusion also feature prominently in the report’s arguments.
The emphasis is on the need for different types of AI expertise to work collaboratively to ensure inclusive and fair global AI governance. By bringing together individuals from diverse backgrounds, experiences, and perspectives, the report suggests that more comprehensive and equitable approaches to AI governance can be established.
Additionally, the report consistently underscores the significance of capacity building throughout all aspects of AI and data governance.
It is viewed as intrinsically linked and indispensable for the successful development and implementation of responsible AI policies and practices. The integration of capacity building recommendations in various sections of the report further reinforces the vital role it plays in shaping AI governance.
In conclusion, the PNAI report serves as a valuable resource in highlighting the importance of policy discussions on AI and data governance.
It emphasises the need for AI education in educational institutions, capacity building for civil servants and policymakers, and the inclusion of diverse perspectives in AI governance discussions. These recommendations contribute to the broader goal of establishing responsible and fair global AI governance frameworks.
Report
The Policy Network on Artificial Intelligence (PNAI) is a newly established policy network focused on matters of AI and data governance. It originated from discussions held at IGF 2022 in Addis Ababa and has recently released its first report.
The first report produced by PNAI is a collaborative effort and sets out to examine various aspects of AI governance.
It specifically focuses on the AI lifecycle for gender and race inclusion and outlines strategies for governing AI to ensure a just twin transition. The report takes into account different regulatory initiatives on artificial intelligence from various regions, including those from the Global South.
One of the noteworthy aspects of PNAI is its working spirit and commitment to a multi-stakeholder approach.
The PNAI working group was formed in the true spirit of multi-stakeholderism at the IGF, and its members collaborated closely to draft this first report. This approach ensures that diverse perspectives and expertise are considered in shaping the policies and governance frameworks related to AI.
Prateek, interested in the connection between AI governance and internet governance, asked Professor Xing Li to compare the two domains in terms of their interoperability and to explain the implications of internet governance for AI governance.
During discussions, Jose highlighted the need for a deeper understanding of local challenges faced in the Global South in relation to AI.
This includes issues concerning labor, such as the impacts of the tech industry on workers, as well as concerns surrounding biometric surveillance and race-related issues. Jose called for more extensive debates on sustainability and the potential risks associated with over-reliance on technological solutions.
Additionally, Jose stressed the underrepresentation of the Global South in AI discussions and emphasized the importance of addressing their specific challenges.
In the realm of AI training and education, Pradeep mentioned UNESCO’s interest in expanding its initiatives in this area.
This focus on AI education aligns with SDG 4: Quality Education, and UNESCO aims to contribute to this goal by providing enhanced training programs in AI.
In a positive gesture of collaboration and sharing information, Prateek offered to connect with an audience member and provide relevant information about UNESCO’s education work.
This willingness to offer support and share knowledge highlights the importance of partnerships and collaboration in achieving the goals set forth by the SDGs.
In conclusion, the Policy Network on Artificial Intelligence (PNAI) aims to address matters of AI and data governance.
Their first report focuses on various aspects of AI governance, including gender and race inclusion and a just twin transition. Their multi-stakeholder approach ensures diverse perspectives are considered. Discussions during the analysis highlighted the need to understand local challenges in the Global South, the significance of AI education, and the connectivity between AI and internet governance.
Collaboration and information sharing were also observed, reflecting the importance of partnerships in achieving the SDGs.
Report
The analysis reveals several key points regarding AI governance. Firstly, it emphasizes the importance of striking a balance between regulation and innovation in AI initiatives. This suggests that while regulations are necessary to address concerns and ensure ethical practices, there should also be room for innovation and advancement in the field.
Furthermore, the report highlights the need for AI policy development to take into consideration perspectives and experiences from the Global South.
This acknowledges the diverse challenges and opportunities that different regions face in relation to AI adoption and governance.
The analysis also discusses the dual nature of AI technology, presenting both risks and opportunities. It underscores the significance of discussing uncertainties and potential risks associated with AI, alongside the numerous opportunities it presents.
Additionally, it highlights the potential of AI to significantly contribute to addressing economic and labor issues, as evidenced by Japan considering AI as a solution to its declining labour force and sustaining its economy.
Another noteworthy point raised in the analysis is the recommendation to view AI governance through the Global South lens.
This suggests that the perspectives and experiences of developing nations should be taken into account to ensure a more inclusive and equitable approach to AI governance.
The analysis also provides insights into the ongoing Hiroshima process focused on generative AI.
It highlights that discussions within the G7 delegation task force are centred around a code of conduct from the private sector. Notably, the report suggests support for this approach, emphasising the importance of a code of conduct in addressing concerns such as misinformation and disinformation.
Flexibility and adaptability in global AI governance are advocated for in the analysis.
It argues that AI is a rapidly evolving field, necessitating governance approaches that can accommodate changing circumstances and allow governments to tailor their strategies according to their specific needs.
Collaboration and coordination between organisations and governments are seen as crucial in AI policy-making, skills development, and creating AI ecosystems.
The analysis suggests that international collaborations are needed to foster beneficial AI ecosystems and capacity building.
The importance of respecting human rights, ensuring safety, and fostering accountability and explainability in AI systems are also highlighted. These aspects are considered fundamental in mitigating potential harms and ensuring that AI technologies are used responsibly and ethically.
In addition to these main points, the analysis touches upon the significance of education and harmonisation.
It suggests that education plays a key role in the AI governance discourse, and harmonisation is seen as important for the future.
Overall, the analysis brings attention to the multifaceted nature of AI governance, advocating for a balanced approach that takes into account various perspectives, fosters innovation, and ensures ethical and responsible practices.
It underscores the need for inclusive and collaborative efforts to create effective AI policies and systems that can address the challenges and harness the opportunities presented by AI technology.
Report
The analysis highlights several noteworthy points about responsible AI development. Microsoft is committed to developing AI in a sustainable, inclusive, and globally governed manner. This approach is aligned with SDG 9 (Industry, Innovation and Infrastructure), SDG 10 (Reduced Inequalities), and SDG 17 (Partnerships for the Goals).
Microsoft has established a Responsible AI Standard to guide their AI initiatives, demonstrating their commitment to ethical practices.
Owen, another speaker in the analysis, emphasises the importance of transparency, fairness, and inclusivity in AI development. He advocates for involving diverse representation in technology design and implementation.
To this end, Microsoft has established Responsible AI Fellowships, which aim to promote diversity in tech teams and foster collaboration with individuals from various backgrounds. The focus on inclusivity and diversity helps to ensure that AI systems are fair and considerate of different perspectives and needs.
Additionally, open-source AI development is highlighted as essential for understanding and safely using AI technology.
Open-source platforms enable the broad distribution of AI benefits, fostering innovation and making the technology accessible to a wider audience. Microsoft, through its subsidiary GitHub, is a significant contributor to the open-source community. By embodying an open-source ethos, they promote collaboration and knowledge sharing, contributing to the responsible development and use of AI.
However, it is crucial to strike a balance between openness and safety/security in AI development.
Concerns exist about the trade-off between making advanced AI models available through open-source platforms versus ensuring the safety and security of these models. The analysis suggests a middle-path approach, promoting accessibility to AI technology without releasing sensitive model weights, thereby safeguarding against potential misuse.
Furthermore, the need for a globally coherent framework for AI governance is emphasised.
The advancement of AI technology necessitates establishing robust regulations to ensure its responsible and ethical use. The conversation around global governance has made considerable progress, and the G7 code of conduct, under Japanese leadership, plays a crucial role in shaping the future of AI governance.
Standards setting is proposed as an integral part of the future governance framework.
Establishing standards is essential for creating a cohesive global framework that promotes responsible AI development. The International Civil Aviation Organization (ICAO) is highlighted as a potential model, demonstrating the effective implementation of standards in a complex and globally interconnected sector.
Understanding and reaching consensus on the risks associated with AI is also deemed critical.
The analysis draws attention to the successful efforts of the Intergovernmental Panel on Climate Change in advancing understanding of risks related to climate change. Similarly, efforts should be made to comprehensively evaluate and address the risks associated with AI, facilitating informed decision-making and effective risk mitigation strategies.
Investment in AI infrastructure is identified as crucial for promoting the growth and development of AI capabilities.
Proposals exist for the creation of public AI resources, such as the National AI Research Resource, to foster innovation and ensure equitable access to AI technology.
Evaluation is recognised as an important aspect of AI development. Currently, there is a lack of clarity in evaluating AI technologies.
Developing robust evaluation frameworks is crucial for assessing the effectiveness, reliability, and ethical implications of AI systems, enabling informed decision-making and responsible deployment.
Furthermore, the analysis highlights the importance of social infrastructure development for AI. This entails the establishment of globally representative discussions to track AI technology progress and ensure that the benefits of AI are shared equitably among different regions and communities.
The analysis also underscores the significance of capacity building and actions in driving AI development forward.
Concrete measures should be taken to bridge the gap between technical and non-technical stakeholders, enabling a comprehensive understanding of the socio-technical challenges associated with AI.
In conclusion, responsible AI development requires a multi-faceted approach. It involves developing AI in a sustainable, inclusive, and globally governed manner, promoting transparency and fairness, and striking a balance between openness and safety/security.
It also necessitates the establishment of a globally coherent framework for AI governance, understanding and addressing the risks associated with AI, investing in AI infrastructure, conducting comprehensive evaluations, and developing social infrastructure. Capacity building and bridging the gap between technical and non-technical stakeholders are crucial for addressing the socio-technical challenges posed by AI.
By embracing these principles, stakeholders can ensure the responsible and ethical development, deployment, and use of AI technology.
Report
Generative AI, a powerful technology that enables easy content generation, has resulted in the widespread production and dissemination of misinformation and disinformation. This has negative effects on society as false information can be easily created and spread through the internet and digital platforms.
However, the rule of law plays a crucial role in curbing this spread of false information. Concrete legal protections are necessary to address the issue effectively.
Sarayu Natarajan advocates for a context-specific and rule of law approach in dealing with the issue of misinformation and disinformation.
This suggests that addressing the problem requires understanding the specific context in which false information is generated and disseminated and implementing legal measures accordingly. This approach acknowledges the importance of tailored solutions based on a solid legal framework.
The labour-intensive task of AI labelling, crucial for the functioning of generative AI, is often outsourced to workers in the global south.
These workers primarily label data based on categories defined by Western companies, which can introduce bias and reinforce existing power imbalances. This highlights the need for greater inclusivity and diversity in AI development processes to ensure fair representation and avoid perpetuating inequalities.
Efforts are being made to develop large language models in non-mainstream languages, allowing a wider range of communities to benefit from generative AI.
Smaller organizations that work within specific communities are actively involved in creating these language models. This represents a positive step towards inclusivity and accessibility in the field of AI, particularly in underrepresented communities and non-mainstream languages.
Mutual understanding and engagement between AI technology and policy domains are crucial for effective governance.
It is essential for these two disciplines to communicate with each other in a meaningful way. Creating forums that facilitate non-judgmental discussions and acknowledge the diverse empirical starting points is critical. This allows for a more integrated and collaborative approach towards harnessing the benefits of AI technology while addressing its ethical and societal implications.
While AI developments may lead to job losses, particularly in the global north, they also have the potential to generate new types of jobs.
Careful observation of the impact of AI on employment is necessary to ensure just working conditions for workers worldwide. It is important to consider the potential benefits and challenges associated with AI technology and strive for humane conditions for workers in different parts of the world.
In conclusion, the advent of generative AI has made it easier and cheaper to produce and disseminate misinformation and disinformation, posing negative effects on society.
However, the rule of law, through proper legal protections, plays a significant role in curbing the spread of false information. A context-specific and rule of law approach, advocated by Sarayu Natarajan, is key to effectively addressing this issue. Inclusivity, diversity, and mutual understanding between AI technology and policy domains are crucial considerations in the development and governance of AI.
It is essential to closely monitor the impact of AI on job loss and ensure fair working conditions for all.
Report
The analysis focuses on several key themes related to AI and its impact on various aspects, including the environment, data governance, geopolitical power dynamics, and historical injustices. It begins by highlighting the importance of data governance in the intersection of AI and the environment.
This aspect is considered to be quite broad and requires attention and effective management.
Moving on, the analysis advocates a decolonially informed approach to AI, one that acknowledges and seeks to rectify the historical injustices that have shaped global power dynamics around the technology.
Such an approach, it is argued, can help redress these injustices and produce a more equitable and just AI landscape.
Furthermore, the analysis highlights the concept of a just green digital transition, which is essential for achieving a sustainable and equitable future.
This transition leverages AI to drive environmentally responsible practices while also promoting economic growth and social inclusion, balancing the needs of the environment with those of all stakeholders involved.
In addition, the analysis underscores the importance of addressing historical injustices and promoting interoperable AI governance innovations.
A representative multi-stakeholder process is seen as essential to ensure that the materiality of AI is properly addressed and that all voices are heard. The aim is an AI governance framework that is inclusive, fair, and capable of confronting the challenges rooted in historical injustices.
Overall, the analysis provides important insights into the complex relationship between AI and various domains.
It highlights the need to consider historical injustices, power imbalances, and environmental concerns in the development and deployment of AI technologies. The conclusions drawn from this analysis serve as a call to action for policymakers, stakeholders, and researchers to work towards a more responsible, equitable, and sustainable AI landscape.
Report
The analysis explores AI governance, regulation of generative AI, the technology's impact on the global south, and the need for new educational systems in the AI age. On governance, the study suggests that AI can learn from internet governance, which features organisations such as the IETF for technical interoperability and ICANN for names and number assignment.
The shift from a US-centric model to a global model in internet governance is viewed positively and can serve as an example for AI governance.
The discussion on generative AI regulations focuses on concerns that early regulations may hinder innovation.
It is believed that allowing academics and technical groups the space to explore and experiment is crucial for advancing generative AI. Striking a balance between regulation and fostering innovation is of utmost importance.
The analysis also highlights the opportunities and challenges that generative AI presents for the global south.
Generative AI, resting on algorithms, computing power, and data, has the potential to create new opportunities for development, but it also poses challenges that must be addressed if its benefits are to be fully realised.
Regarding education, the study emphasises the need for new educational systems that can adapt to the AI age.
Outdated educational systems must be revamped to meet the demands of the digital era. Four key educational factors are identified as important in the AI age: critical thinking, fact-based reasoning, logical thinking, and global collaboration. These skills are essential for individuals to thrive in an AI-driven world.
Finally, the analysis supports the establishment of a global AI-related education system.
This proposal, advocated by Stanford University Professor Fei-Fei Li, is seen as a significant step akin to the creation of modern universities hundreds of years ago. It aims to equip individuals with the necessary knowledge and skills to navigate the complexities and opportunities presented by AI.
In conclusion, the analysis highlights the importance of drawing lessons from internet governance, balancing regulation with innovation in generative AI, addressing the opportunities and challenges of generative AI in the global south, and reimagining education systems for the AI age.
These insights provide valuable considerations for policymakers and stakeholders shaping the future of AI governance and its impact on various aspects of society.