International Cooperation for AI & Digital Governance | IGF 2023 Networking Session #109
Event report
Speakers and Moderators
Speakers:
- Dasom Lee, KAIST
- Matthew Liao, NYU
- Rafik Hadfi, Kyoto University
- Takayuki Ito, Kyoto University
- Jinbo Huang, UN University
- Atsushi Yamanaka, JICA
- Liming Zhu, University of New South Wales
Moderators:
- Kyung Ryul Park, KAIST
- So Young Kim, Online Moderator
Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.
Session report
Liming Zhu
Australia has taken significant steps in developing AI ethics principles in collaboration with industry stakeholders. The Department of Industry and Science, in consultation with these stakeholders, established these principles in 2019. The country’s national science agency, CSIRO, along with the University of New South Wales, has been working to operationalise these principles over the past four years.
The AI ethics principles in Australia have a strong focus on human-centred values, ensuring fairness, privacy, security, reliability, safety, transparency, explainability, contestability, and accountability. These principles aim to guide the responsible adoption of AI technology. By prioritising these values, Australia aims to ensure that AI is used in ways that respect and protect individuals’ rights and well-being.
In addition to the development of AI ethics principles, it has been suggested that the use of large language models and AI should be balanced with system-level guardrails. OpenAI’s GPT model, for example, modifies user prompts by adding text such as ‘please always answer ethically and positively.’ This demonstrates the importance of incorporating ethical considerations into the design and use of AI technologies.
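The prompt-wrapping idea described above can be sketched in a few lines. The following is a minimal illustration of a system-level guardrail, assuming a hypothetical application layer around a language model; the instruction text and blocklist are invented for the example and are not OpenAI's actual wording or policy.

```python
# Illustrative sketch of a system-level guardrail around a language model.
# The application layer prepends a standing instruction to every user
# prompt and screens the model's output afterwards. The instruction text
# and blocklist are invented for this example.

SYSTEM_INSTRUCTION = "Please always answer ethically and positively."
BLOCKED_TERMS = {"credit card number", "home address"}  # hypothetical policy


def build_guarded_prompt(user_prompt: str) -> str:
    """Prepend the standing instruction so it applies to every request."""
    return f"{SYSTEM_INSTRUCTION}\n\nUser: {user_prompt}"


def screen_output(model_output: str) -> str:
    """Post-hoc check: withhold responses that violate the policy."""
    lowered = model_output.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "[response withheld by guardrail]"
    return model_output


print(build_guarded_prompt("Summarise today's session."))
print(screen_output("Here is a neutral summary of the session."))
```

The point of this design is that the guardrail sits outside the model: users never need to ask for ethical behaviour themselves, and policy violations can still be caught even if the model ignores the instruction.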
Diversity of stakeholder groups and their perspectives on AI and AI governance is viewed as a positive factor. The presence of different concerns from these groups allows for robust discussions and a more comprehensive approach in addressing potential challenges and ensuring the responsible deployment of AI. Fragmentation in this context is seen as an opportunity rather than a negative issue.
Both horizontal and vertical regulation of AI are deemed necessary. Horizontal regulation entails regulating AI as a whole, while vertical regulation focuses on specific AI products. It is crucial to strike a balance and ensure that there are no overlaps or conflicts between these regulations.
Collaboration and wider stakeholder involvement are considered vital for effective AI governance. Scientific evidence and advice should come from diverse sources and require broader collaboration between policy and stakeholder groups. This approach ensures that AI policies and decisions are based on a comprehensive understanding of the technology and its impact.
Overall, Australia’s development of AI ethics principles, the emphasis on system-level guardrails, recognition of diverse stakeholder perspectives, and the need for both horizontal and vertical regulation reflect a commitment to responsible and accountable AI adoption. Continued collaboration, engagement, and evidence-based policymaking are essential to navigate the evolving landscape of AI technology.
Audience
The analysis of the speakers’ arguments and supporting facts revealed several key points about AI governance and its impact on various aspects of society. Firstly, there is a problem of fragmentation in AI governance, both at the national and global levels. This fragmentation hinders the development of unified regulations and guidelines for AI technologies. Various agencies globally are dealing with AI governance, but they approach the problem from different perspectives, such as development, sociological, ethical, philosophical, and computer science. The need to reduce this fragmentation is recognized in order to achieve more effective and cohesive AI governance.
On the topic of AI as a democratic technology, it was highlighted that AI can be accessed and interacted with by anyone, which sets it apart from centralized technologies like nuclear technology. This accessibility creates opportunities for a wider range of individuals and communities to engage with AI and benefit from its applications.
However, when considering the global governance of AI, the problem of fragmentation becomes even more apparent. Audience members noted the fragmentation in global AI governance and highlighted the need for multi-stakeholder engagement to address it effectively. Ongoing discussions about creating an International Atomic Energy Agency (IAEA)-like organization for AI governance were also mentioned; such a body could help regulate and coordinate AI development across countries.
Another important aspect discussed was the need for a risk-based approach in AI governance. One audience member, a diplomat from the Danish Ministry of Foreign Affairs, expressed support for the EU AI Act’s risk-based approach. This approach focuses on identifying and mitigating potential risks associated with AI technologies. It was emphasized that a risk-based approach could help strike a balance between fostering innovation and ensuring accountability in AI development.
The discussions also touched upon the importance of follow-up mechanisms, oversight, and accountability in AI regulation. Questions were raised about how to ensure the effective implementation of AI regulations and the need for monitoring the compliance of AI technologies with these regulations. This highlights the importance of establishing robust oversight mechanisms and accountability frameworks to ensure that AI technologies are developed and deployed responsibly.
In terms of the impact of AI on African countries, it was noted that while AI is emerging as a transformative technology globally, its use is geographically limited, particularly in Africa. One audience member pointed out that the conference discussions only had a sample case from Equatorial Guinea, highlighting the lack of representation and implementation of AI technologies in African countries. It was also mentioned that Africa lacks certain expertise in AI and requires expert guidance and support to prepare for the realities of AI’s development and deployment in the region.
Furthermore, questions arose about the enforceability and applicability of human rights in the context of AI. The difference between human rights as a moral framework and as a legal framework was discussed, along with the need to learn from established case law in International Human Rights Law. This raises important considerations about how human rights principles can be effectively integrated into AI governance and how to ensure their enforcement in AI technologies.
Additionally, concerns were voiced about managing limited resources while maintaining public stewardship in digital public goods and infrastructure. The challenge of balancing public stewardship with scalability due to resource limitations was highlighted. This poses a significant challenge in ensuring the accessibility and availability of digital public goods while managing the constraints of resources.
Finally, the importance of inclusive data collection and hygiene in conversational AI for women’s inclusion was discussed. Questions were raised about how to ensure equitable availability of training data in conversational AI and how to represent certain communities without infringing privacy rights or causing risks of oppression. This emphasizes the need to address biases in data collection and ensure that AI technologies are developed in a way that promotes inclusivity and respect for privacy and human rights.
In conclusion, the analysis of the speakers’ arguments and evidence highlights the challenges and opportunities in AI governance. The problem of fragmentation at both the national and global levels calls for the need to reduce it and promote global governance. Additionally, the accessibility of AI as a democratic technology creates opportunities for wider engagement. However, there are limitations in AI adoption in African countries, emphasizing the need for extended research and expert guidance. The enforceability and applicability of human rights in AI, managing limited resources in digital public goods, and ensuring inclusive data collection in conversational AI were also discussed. These findings emphasize the importance of addressing these issues to shape responsible and inclusive AI governance.
Kyung Ryul Park
Kyung Ryul Park has assumed the role of moderator for a session focused on AI and digital governance, which includes seven talks specifically dedicated to exploring this topic. The session is highly relevant to SDG 9 (Industry, Innovation and Infrastructure) as it delves into the intersection of technology, innovation, and the development of sustainable infrastructure.
Park’s involvement as a moderator reflects his belief in the significance of sharing knowledge and information about AI and digital governance. This aligns with SDG 17 (Partnerships for the goals), emphasizing the need for collaborative efforts to achieve sustainable development. As a moderator, Park aims to provide a comprehensive overview of the ongoing research and policy landscape in the field of AI and digital governance, demonstrating his commitment to facilitating knowledge exchange and promoting effective governance in these areas.
The inclusion of Matthew Liao, a professor at NYU, as the first speaker in the session is noteworthy. Liao’s expertise in the field of AI and digital governance lends valuable insights and perspectives to the discussion. As the opening speaker, Liao is expected to lay the foundation for further discussions throughout the session.
Overall, the session on AI and digital governance is highly relevant to the objectives outlined in SDG 9 and SDG 17. Through Kyung Ryul Park’s moderation and the contributions of speakers like Matthew Liao, the session aims to foster knowledge-sharing, promote effective governance, and enhance understanding of AI and its implications in the digital age.
Atsushi Yamanaka
The use of artificial intelligence (AI) and digital technologies in developing nations presents ample opportunities for development and innovation. These technologies can provide innovative products and services that meet the needs of developing countries. For instance, mobile money, which originated in Kenya, exemplifies how AI and modern technologies are being utilized to create innovative solutions.
Moreover, Information and Communication Technology (ICT) plays a vital role in achieving the Sustainable Development Goals (SDGs). ICT has the potential to drive socio-economic development and significantly improve the chances of achieving these goals. It can enhance connectivity, broaden access to information, and facilitate the adoption of digital solutions across various sectors.
However, despite the progress made, the issue of digital inclusion remains prominent. As of 2022, approximately 2.7 billion people globally are still unconnected to the digital world. Bridging this digital divide is crucial to ensure equal access to opportunities and resources.
Additionally, there are challenges related to digital governance that need to be addressed. Growing concerns about data privacy, cybersecurity, AI, internet and data fragmentation, and misinformation underscore the need for effective governance. The increasing prevalence of cyber warfare and the difficulty in distinguishing reality from fake due to advanced AI technologies are particularly worrisome. Developing countries also face frustrations due to the perceived one-directional flow of data, concerns over big tech companies controlling data, and worries about legal jurisdiction over critical national information stored in foreign servers.
To tackle these issues, it is suggested that an AI Governance Forum be created instead of implementing a global regulation for AI. After 20 years of discussions on internet governance, no suitable model has been developed, which suggests that a global regulation for AI would be similarly difficult to establish. An AI Governance Forum, through which different stakeholders actively participate and share successful initiatives, offers a more practical approach to governing AI.
AI is gaining traction in Africa, despite a limited workforce. Many startups in Africa are leveraging AI and other data-based solutions to drive innovation. However, to further enhance AI adoption, there is a need to establish advanced institutions in Africa that can train more AI specialists. Examples of such institutions include Carnegie Mellon University Africa and the African Institute for Mathematical Sciences in Rwanda. Additionally, African students studying AI in countries like Japan and Korea are further augmenting expertise in this field.
Digital technology also presents a unique opportunity for women’s inclusion. It offers pseudonymization features that can help mask gender while providing opportunities for inclusion. In fact, digital technology provides more avenues for women’s inclusion compared to traditional in-person environments, thereby contributing to the achievement of gender equality.
It is worth noting that open source initiatives, despite their advantages, face scalability issues; scalability has always been a challenge for open source initiatives and for ICT for development. However, the Indian MOSIP model has successfully demonstrated its scalability by serving 1 billion people. This highlights the importance of finding innovative solutions to overcome scalability barriers.
In conclusion, the use of AI and digital technologies in developing nations offers significant opportunities for development and innovation. However, challenges such as digital inclusion, data privacy, cybersecurity, and data sovereignty must be addressed. Establishing an AI Governance Forum and advanced institutions for training AI specialists can contribute to harnessing these technologies more effectively. Additionally, digital technology can create unique opportunities for women’s inclusion. Finding innovative solutions for open source scalability is also crucial for the successful adoption of ICT for development.
Takayuki Ito
Upon analysis, several compelling arguments and ideas related to artificial intelligence (AI) and its impact on various domains emerge. The first argument revolves around the development of a hyper-democracy platform, initiated with the COLLAGREE system in 2010. Although specific details of the system are not provided, the intention is evidently to leverage AI to enhance democratic processes. The project is regarded positively, indicating an optimistic outlook on the potential of AI to improve democratic systems globally.
Another noteworthy argument is the role of AI in addressing social network problems such as fake news and echo chambers. Recognising the text structures by AI is highlighted as a potential solution. By leveraging AI algorithms to analyse and detect patterns in text, it becomes possible to identify and counteract the spread of false information and the formation of echo chambers within social networks. The positive sentiment expressed further underscores the belief in the power of AI to mitigate the negative impact of misinformation on society.
Additionally, the D-Agree system, developed as a successor to the COLLAGREE project, is introduced as a potential solution for addressing specific challenges in Afghanistan. The system aims to collect opinions from Kabul civilians, indicating a focus on incorporating the perspectives of local populations. Furthermore, collaboration with UN-Habitat underscores the potential for the system to contribute to the achievement of the Sustainable Development Goals related to good health and well-being (SDG 3) and peace, justice, and strong institutions (SDG 16).
Lastly, the positive sentiment encompasses the potential of AI to support crowd-scale discussions through the use of multiple AI agents. A multi-agent architecture for group decision support is being developed, which emphasises the collaborative capabilities of AI in facilitating large-scale deliberations. This development aligns with the goal of fostering industry, innovation, and infrastructure (SDG 9).
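The multi-agent idea can be made concrete with a toy sketch: several simple agents each monitor a discussion thread for one concern and contribute a facilitation message when triggered. The agents, trigger rules, and messages below are hypothetical illustrations, not the architecture actually being developed.

```python
# Toy sketch of a multi-agent architecture for crowd-scale discussion
# support: each agent watches the thread for one concern and posts a
# facilitation message when triggered. Agents and rules are invented
# for this example.

from dataclasses import dataclass


@dataclass
class Agent:
    name: str
    trigger: str   # keyword this agent reacts to
    response: str  # facilitation message it contributes


AGENTS = [
    Agent("summariser", "conclusion", "Here is a running summary of points raised."),
    Agent("moderator", "insult", "Please keep the discussion respectful."),
    Agent("idea-prompter", "stuck", "Could someone propose an alternative?"),
]


def facilitate(posts: list[str]) -> list[str]:
    """Collect the messages each agent would contribute for this thread."""
    messages = []
    for agent in AGENTS:
        if any(agent.trigger in post.lower() for post in posts):
            messages.append(f"[{agent.name}] {agent.response}")
    return messages


thread = ["We seem stuck on the budget question.", "Any conclusion yet?"]
for msg in facilitate(thread):
    print(msg)
```

Real deliberation-support systems replace these keyword triggers with language models, but the division of labour among specialised facilitation agents is the architectural idea.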
The overall analysis showcases the diverse applications and benefits of AI in various domains, including democracy, social networks, conflict zones like Afghanistan, and large-scale discussions. These discussions and arguments highlight the hopeful perspective of leveraging AI to address complex societal challenges. However, it is important to note that further information and evidence would be necessary to fully understand the potential impact and limitations of these AI systems.
Summary: The analysis reveals promising arguments for the use of artificial intelligence (AI) in different domains. The development of a hyper-democracy platform through the COLLAGREE system shows optimism for enhancing democratic processes. AI’s potential in combating fake news and echo chambers is underscored, providing hope for addressing social network problems. The D-Agree system’s focus on collecting opinions from Kabul civilians in Afghanistan and collaboration with UN-Habitat suggests its potential in achieving SDG goals. The use of multiple AI agents for crowd-scale discussions exhibits AI’s collaborative capabilities. Overall, AI presents opportunities to tackle complex societal challenges, though further information is needed to fully evaluate its impact.
Rafik Hadfi
Digital inclusion is an essential aspect of modern society and is closely linked to the goal of gender equality. It plays a crucial role in integrating marginalized individuals into the use of information and communication technology (ICT) tools. Programs conducted in Afghanistan have shown that digital inclusion efforts can empower women by providing them with the knowledge and resources to actively engage with ICT technologies, bridging the societal gap and enabling them to participate more fully in digital spaces.
Artificial Intelligence (AI) has significant potential in facilitating digital inclusion and promoting social good. Case studies conducted in Afghanistan demonstrate that integrating AI into online platforms predominantly used by women can enhance diversity, reduce inhibitions, and foster innovative thinking among participants. This highlights the transformative impact of AI in empowering individuals and ensuring their active involvement in digital spaces.
Additionally, emphasizing community empowerment and inclusion in data collection processes is crucial for achieving the Sustainable Development Goals (SDGs). By involving local communities in training programs focused on AI systems, effective datasets can be created and maintained, ensuring diversity and representation. This approach recognizes the significance of empowering communities and involving them in decision-making processes, thereby promoting inclusivity and collaborative efforts in achieving the SDGs.
It is worth noting that training AI systems solely in English can lead to biases towards specific contexts. To address this bias and ensure a fairer and more inclusive AI system, training AI in different languages has been implemented in Indonesia and Afghanistan. By expanding the linguistic training of AI, biases towards specific contexts can be minimized, contributing to a more equitable and inclusive implementation of AI technologies.
Moreover, AI has been employed in Afghanistan to address various challenges faced by women and promote women’s empowerment and gender equality. By utilizing AI for women empowerment initiatives, Afghanistan takes a proactive approach to address gender disparities and promote inclusivity in society.
In conclusion, digital inclusion, AI, and community empowerment are crucial components in achieving the SDGs and advancing towards a sustainable and equitable future. Successful programs in Afghanistan demonstrate the transformative potential of digital inclusion in empowering women. AI can further facilitate digital inclusion and promote social good by enhancing diversity and inclusivity in digital spaces. Emphasizing community empowerment and inclusion in data collection processes is essential for creating effective and diverse datasets. Training AI in different languages helps minimize bias towards specific contexts, promoting fairness and inclusivity. Lastly, utilizing AI for women empowerment initiatives contributes significantly to achieving gender equality and equity.
Matthew Liao
The analysis examines multiple perspectives on the importance of regulating AI. The speaker stresses the necessity of regulation to prevent harm and protect human rights, arguing that it should be based on a human rights framework focused on promoting and safeguarding human rights in relation to AI. He suggests conducting human rights impact assessments and implementing regulations at every stage of the technology process.
The speaker argues that AI regulation should not be left to the tech industry or experts alone, proposing a collective approach involving tech companies, AI researchers, governments, universities, and the public. This multi-stakeholder approach would ensure inclusivity and effectiveness in the regulation process.
Enforceability is identified as a major challenge in implementing AI regulations. The complexity of enforcing regulations and ensuring compliance is acknowledged. The speaker believes that regulations should be enforceable but recognizes the difficulties involved.
The analysis draws comparisons to other regulated industries, such as nuclear energy and biomedicine. The speaker argues that a collective approach, similar to nuclear energy regulation, is necessary for addressing AI challenges. He also suggests using the biomedical model as a reference for AI regulation, given its successful regulation of drug discovery.
A risk-based approach to AI regulation is proposed, considering that different AI applications carry varying levels of risk. The speaker advocates categorizing AI into risk-based levels and determining the appropriate regulations for each level.
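As a rough illustration of how such categorization might work in practice, the sketch below maps hypothetical example applications onto the four-tier structure popularised by the EU AI Act (unacceptable, high, limited, minimal risk). The tier assignments and obligation texts are illustrative only, not a legal classification.

```python
# A minimal sketch of risk-based categorization, loosely following the
# four-tier structure of the EU AI Act. The example applications, their
# tier assignments, and the obligation texts are illustrative only.

RISK_TIERS = {
    "unacceptable": "prohibited",
    "high": "conformity assessment, human oversight, logging",
    "limited": "transparency obligations",
    "minimal": "no additional obligations",
}

EXAMPLE_CLASSIFICATION = {  # hypothetical assignments
    "social scoring of citizens": "unacceptable",
    "CV screening for hiring": "high",
    "customer-service chatbot": "limited",
    "spam filter": "minimal",
}


def obligations_for(application: str) -> str:
    """Look up an application's tier and report the obligations attached to it."""
    tier = EXAMPLE_CLASSIFICATION.get(application, "minimal")
    return f"{application}: {tier} risk -> {RISK_TIERS[tier]}"


for app in EXAMPLE_CLASSIFICATION:
    print(obligations_for(app))
```

The appeal of the tiered design is that regulatory burden scales with potential harm: low-risk tools face little friction, while high-risk systems carry the heaviest obligations.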
Potential concerns about regulatory capture, where regulatory agencies may be influenced by the industries they regulate, are also discussed. However, the aviation industry is highlighted as a counterexample: despite such concerns, regulation has driven safety innovations in aviation.
In summary, the analysis underscores the importance of AI regulation in mitigating risks and protecting human rights. It emphasizes the need for a human rights framework, a collective approach involving various stakeholders, enforceability, risk-based categorization, and lessons from other regulated industries. Challenges such as enforceability and regulatory capture are acknowledged, but the analysis encourages the implementation of effective regulations for responsible and ethical AI use.
Seung Hyun Kim
The intersection between advanced technologies and developing countries can exacerbate social and economic problems. In Colombia, for example, drug cartels have found a new method of distribution in the cable car system. This not only enables more efficient operations for the cartels but also poses a significant challenge to law enforcement agencies.
Another concern is the potential misuse of AI technologies in communities that are already vulnerable to illicit activities. The speaker highlights the need to address this issue, as the advanced capabilities of AI can be exploited by those involved in criminal activities, further exacerbating social and economic problems in these areas.
In terms of governance, the Ethiopian government faces challenges due to the fragmentation of its ICT and information systems. There are multiple systems running on different platforms that do not communicate with each other. This lack of integration and coordination hampers efficient governance and slows down decision-making processes. It is clear that the government needs to address this issue in order to improve overall effectiveness and service delivery.
Furthermore, the dependence of Equatorial Guinea on foreign technology, particularly Huawei and China for its ICT infrastructure, raises concerns about technology sovereignty. By relying heavily on external entities for critical technology infrastructure, the country runs the risk of losing control over its own systems and data. This dependence undermines the ability to exercise full control and authority over technological advancements within the country.
The speaker expresses a negative sentiment towards these issues, highlighting the detrimental impact they can have on social and economic development. It is crucial for policymakers and stakeholders to address these challenges and find appropriate solutions to mitigate the negative effects of advanced technologies in developing countries.
Overall, the analysis reveals the potential risks and challenges that arise from the intersection of advanced technologies and developing countries. By considering these issues, policymakers can make more informed decisions and implement strategies that help to maximize the benefits of technology while minimizing the negative consequences.
Dasom Lee
Dasom Lee leads the AI and Cyber-Physical Systems Policy Lab at KAIST, which focuses on the relationship between AI, infrastructure, and environmental sustainability. The lab’s research covers energy transition, transportation, and data centers, addressing key challenges in these areas. It currently has five projects aligned with these research objectives.
One significant concern is the lack of international regulations on data centers, particularly in relation to climate change. The United States, for instance, lacks strong federal regulations despite having the most data centers. State governments also lack the expertise to propose relevant regulations. This highlights the urgent need for global standards to address the environmental impact of data centers.
In the field of automated vehicle research, there is a noticeable imbalance in focus. The emphasis is primarily on technological improvements, neglecting the importance of social sciences in understanding the broader implications of this technology. The lab at KAIST recognizes this gap and is using quantitative and statistical methods to demonstrate the necessity of involving social science perspectives in automated vehicle research. This comprehensive approach aims to understand the societal, economic, and ethical aspects of this advancing technology.
Privacy regulations present a unique challenge due to their contextual nature. The understanding and perception of privacy vary across geographical regions, making universal regulation unrealistic. To address this challenge, the KAIST-NYU project plans to conduct a survey to explore privacy perceptions and potential future interactions based on culture and history. This approach will help policymakers develop tailored and effective privacy regulations that respect different cultural perspectives.
To summarise, Dasom Lee and the AI and Cyber-Physical Systems Policy Lab at KAIST are making valuable contributions to AI, infrastructure, and environmental sustainability. Their focus on energy transition, transportation, and data centers, along with ongoing projects, demonstrates their commitment to finding practical solutions. The need for data center regulations, involvement of social sciences in automated vehicle research, and contextualization of privacy regulations are critical factors in the development of sustainable and ethical technologies.
Speakers
Speech speed
0 words per minute
Speech length
words
Speech time
0 secs
Report
The use of artificial intelligence (AI) and digital technologies in developing nations presents ample opportunities for development and innovation. These technologies can provide innovative products and services that meet the needs of developing countries. For instance, mobile money, which originated in Kenya, exemplifies how AI and modern technologies are being utilized to create innovative solutions.
Moreover, Information and Communication Technology (ICT) plays a vital role in achieving the Sustainable Development Goals (SDGs).
ICT has the potential to drive socio-economic development and significantly contribute to the chances of achieving these goals. It can enhance connectivity, access to information, and facilitate the adoption of digital solutions across various sectors.
However, despite the progress made, the issue of digital inclusion remains prominent.
As of 2022, approximately 2.7 billion people globally are still unconnected to the digital world. Bridging this digital divide is crucial to ensure equal access to opportunities and resources.
Additionally, there are challenges related to digital governance that need to be addressed.
Growing concerns about data privacy, cybersecurity, AI, internet and data fragmentation, and misinformation underscore the need for effective governance. The increasing prevalence of cyber warfare and the difficulty in distinguishing reality from fake due to advanced AI technologies are particularly worrisome.
Developing countries also face frustrations due to the perceived one-directional flow of data, concerns over big tech companies controlling data, and worries about legal jurisdiction over critical national information stored in foreign servers.
To tackle these issues, it is suggested that an AI Governance Forum be created instead of implementing a global regulation for AI.
After 20 years of discussions on internet governance, no suitable model has been developed, making the establishment of a global regulation challenging. Creating an AI Governance Forum, and sharing successful initiatives, offers a more practical approach to governing AI. This process would require the active participation of different stakeholders, making the establishment of global regulations less appealing.
AI is gaining traction in Africa, despite a limited workforce.
Many startups in Africa are leveraging AI and other database solutions to drive innovation. However, to further enhance AI adoption, there is a need to establish advanced institutions in Africa that can provide training for more AI specialists. Examples of such advanced institutions include Carnegie Mellon University in Africa and the African Institute of Mathematical Science in Rwanda.
Additionally, African students studying AI in countries like Japan and Korea are further augmenting expertise in this field.
Digital technology also presents a unique opportunity for women’s inclusion. It offers pseudonymization features that can help mask gender while providing opportunities for inclusion.
In fact, digital technology provides more avenues for women’s inclusion compared to traditional in-person environments, thereby contributing to the achievement of gender equality.
It is worth noting that open source initiatives, despite their advantages, face scalability issues.
Scalability has always been a challenge for open source initiatives and ICT for development. However, the Indian MOSSIP model has successfully demonstrated its scalability by serving 1 billion people. This highlights the importance of finding innovative solutions to overcome scalability barriers.
In conclusion, the use of AI and digital technologies in developing nations offers significant opportunities for development and innovation.
However, challenges such as digital inclusion, data privacy, cybersecurity, and data sovereignty must be addressed. Establishing an AI Governance Forum and advanced institutions for training AI specialists can contribute to harnessing these technologies more effectively. Additionally, digital technology can create unique opportunities for women’s inclusion.
Finding innovative solutions for open source scalability is also crucial for the successful adoption of ICT for development.
Report
The analysis of the speakers’ arguments and supporting facts revealed several key points about AI governance and its impact on various aspects of society. Firstly, there is a problem of fragmentation in AI governance, both at the national and global levels.
This fragmentation hinders the development of unified regulations and guidelines for AI technologies. Various agencies globally are dealing with AI governance, but they approach the problem from different perspectives, such as development, sociological, ethical, philosophical, and computer science. The need to reduce this fragmentation is recognized in order to achieve more effective and cohesive AI governance.
On the topic of AI as a democratic technology, it was highlighted that AI can be accessed and interacted with by anyone, which sets it apart from centralized technologies like nuclear technology.
This accessibility creates opportunities for a wider range of individuals and communities to engage with AI and benefit from its applications.
However, when considering the global governance of AI, the problem of fragmentation becomes even more apparent. The audience members noted the existence of fragmentation in global AI governance and highlighted the need for multi-stakeholder engagement in order to address this issue effectively.
The creation of an International Atomic Energy Agency (IAEA)-style organisation for AI governance was also mentioned, which could help regulate and coordinate AI development across countries.
Another important aspect discussed was the need for a risk-based approach in AI governance.
One audience member, a diplomat from the Danish Ministry of Foreign Affairs, expressed support for the EU AI Act’s risk-based approach. This approach focuses on identifying and mitigating potential risks associated with AI technologies. It was emphasized that a risk-based approach could help strike a balance between fostering innovation and ensuring accountability in AI development.
The discussions also touched upon the importance of follow-up mechanisms, oversight, and accountability in AI regulation.
Questions were raised about how to ensure the effective implementation of AI regulations and the need for monitoring the compliance of AI technologies with these regulations. This highlights the importance of establishing robust oversight mechanisms and accountability frameworks to ensure that AI technologies are developed and deployed responsibly.
In terms of the impact of AI on African countries, it was noted that while AI is emerging as a transformative technology globally, its use is geographically limited, particularly in Africa.
One audience member pointed out that the conference discussions included only a single sample case, from Equatorial Guinea, highlighting the lack of representation and implementation of AI technologies in African countries. It was also mentioned that Africa lacks certain expertise in AI and requires expert guidance and support to prepare for the realities of AI’s development and deployment in the region.
Furthermore, questions arose about the enforceability and applicability of human rights in the context of AI.
The difference between human rights as a moral framework and as a legal framework was discussed, along with the need to learn from established case law in International Human Rights Law. This raises important considerations about how human rights principles can be effectively integrated into AI governance and how to ensure their enforcement in AI technologies.
Additionally, concerns were voiced about managing limited resources while maintaining public stewardship in digital public goods and infrastructure.
The challenge of balancing public stewardship with scalability due to resource limitations was highlighted. This poses a significant challenge in ensuring the accessibility and availability of digital public goods while managing the constraints of resources.
Finally, the importance of inclusive data collection and hygiene in conversational AI for women’s inclusion was discussed.
Questions were raised about how to ensure equitable availability of training data in conversational AI and how to represent certain communities without infringing privacy rights or causing risks of oppression. This emphasizes the need to address biases in data collection and ensure that AI technologies are developed in a way that promotes inclusivity and respect for privacy and human rights.
In conclusion, the analysis of the speakers’ arguments and evidence highlights the challenges and opportunities in AI governance.
The problem of fragmentation at both the national and global levels underscores the need to reduce it and strengthen global governance. Additionally, the accessibility of AI as a democratic technology creates opportunities for wider engagement. However, there are limitations in AI adoption in African countries, emphasizing the need for extended research and expert guidance.
The enforceability and applicability of human rights in AI, managing limited resources in digital public goods, and ensuring inclusive data collection in conversational AI were also discussed. These findings emphasize the importance of addressing these issues to shape responsible and inclusive AI governance.
Report
Dasom Lee leads the AI and Cyber-Physical Systems Policy Lab at KAIST, where they focus on the relationship between AI, infrastructure, and environmental sustainability. The lab’s research covers energy transition, transportation, and data centers, addressing key challenges in these areas.
Currently, they have five projects aligned with their research objectives.
One significant concern is the lack of international regulations on data centers, particularly in relation to climate change. The United States, for instance, lacks strong federal regulations despite having the most data centers.
State governments also lack the expertise to propose relevant regulations. This highlights the urgent need for global standards to address the environmental impact of data centers.
In the field of automated vehicle research, there is a noticeable imbalance in focus.
The emphasis is primarily on technological improvements, neglecting the importance of social sciences in understanding the broader implications of this technology. The lab at KAIST recognizes this gap and is using quantitative and statistical methods to demonstrate the necessity of involving social science perspectives in automated vehicle research.
This comprehensive approach aims to understand the societal, economic, and ethical aspects of this advancing technology.
Privacy regulations present a unique challenge due to their contextual nature. The understanding and perception of privacy vary across geographical regions, making universal regulation unrealistic.
To address this challenge, the KAIST-NYU project plans to conduct a survey to explore privacy perceptions and potential future interactions based on culture and history. This approach will help policymakers develop tailored and effective privacy regulations that respect different cultural perspectives.
To summarise, Dasom Lee and the AI and Cyber-Physical Systems Policy Lab at KAIST are making valuable contributions to AI, infrastructure, and environmental sustainability.
Their focus on energy transition, transportation, and data centers, along with ongoing projects, demonstrates their commitment to finding practical solutions. The need for data center regulations, involvement of social sciences in automated vehicle research, and contextualization of privacy regulations are critical factors in the development of sustainable and ethical technologies.
Report
Kyung Ryul Park has assumed the role of moderator for a session focused on AI and digital governance, which includes seven talks specifically dedicated to exploring this topic. The session is highly relevant to SDG 9 (Industry, Innovation and Infrastructure) as it delves into the intersection of technology, innovation, and the development of sustainable infrastructure.
Park’s involvement as a moderator reflects his belief in the significance of sharing knowledge and information about AI and digital governance.
This aligns with SDG 17 (Partnerships for the goals), emphasizing the need for collaborative efforts to achieve sustainable development. As a moderator, Park aims to provide a comprehensive overview of the ongoing research and policy landscape in the field of AI and digital governance, demonstrating his commitment to facilitating knowledge exchange and promoting effective governance in these areas.
The inclusion of Matthew Liao, a professor at NYU, as the first speaker in the session is noteworthy.
Liao’s expertise in the field of AI and digital governance lends valuable insights and perspectives to the discussion. As the opening speaker, Liao is expected to lay the foundation for further discussions throughout the session.
Overall, the session on AI and digital governance is highly relevant to the objectives outlined in SDG 9 and SDG 17.
Through Kyung Ryul Park’s moderation and the contributions of speakers like Matthew Liao, the session aims to foster knowledge-sharing, promote effective governance, and enhance understanding of AI and its implications in the digital age.
Report
Australia has taken significant steps in developing AI ethics principles in collaboration with industry stakeholders. The Department of Industry and Science, in consultation with these stakeholders, established these principles in 2019. The country’s national science agency, CSIRO, along with the University of New South Wales, has been working to operationalise these principles over the past four years.
The AI ethics principles in Australia have a strong focus on human-centred values, ensuring fairness, privacy, security, reliability, safety, transparency, explainability, contestability, and accountability.
These principles aim to guide the responsible adoption of AI technology. By prioritising these values, Australia aims to ensure that AI is used in ways that respect and protect individuals’ rights and well-being.
In addition to the development of AI ethics principles, it has been suggested that the use of large language models and AI should be balanced with system-level guardrails.
OpenAI’s GPT model, for example, modifies user prompts by adding text such as ‘please always answer ethically and positively.’ This demonstrates the importance of incorporating ethical considerations into the design and use of AI technologies.
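The kind of system-level guardrail described here can be sketched as a thin application layer that prepends a fixed instruction to every user prompt before it reaches the model. The message format and guardrail wording below are illustrative assumptions, not OpenAI's actual mechanism.

```python
GUARDRAIL = "Please always answer ethically and positively."

def apply_guardrail(user_prompt: str) -> list:
    """Wrap a raw user prompt in a chat-style message list whose
    first entry is a system-level guardrail instruction."""
    return [
        {"role": "system", "content": GUARDRAIL},
        {"role": "user", "content": user_prompt},
    ]

# Every request passes through the same wrapper, so the guardrail
# applies regardless of what the user typed.
messages = apply_guardrail("Summarise today's session on AI governance.")
```

The design point is that the guardrail lives in the system, not in the user's hands: individual prompts cannot simply omit it.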
Diversity of stakeholder groups and their perspectives on AI and AI governance is viewed as a positive factor.
The presence of different concerns from these groups allows for robust discussions and a more comprehensive approach in addressing potential challenges and ensuring the responsible deployment of AI. Fragmentation in this context is seen as an opportunity rather than a negative issue.
Both horizontal and vertical regulation of AI are deemed necessary.
Horizontal regulation entails regulating AI as a whole, while vertical regulation focuses on specific AI products. It is crucial to strike a balance and ensure that there are no overlaps or conflicts between these regulations.
Collaboration and wider stakeholder involvement are considered vital for effective AI governance.
Scientific evidence and advice should come from diverse sources and require broader collaboration between policy and stakeholder groups. This approach ensures that AI policies and decisions are based on a comprehensive understanding of the technology and its impact.
Overall, Australia’s development of AI ethics principles, the emphasis on system-level guardrails, recognition of diverse stakeholder perspectives, and the need for both horizontal and vertical regulation reflect a commitment to responsible and accountable AI adoption.
Continued collaboration, engagement, and evidence-based policymaking are essential to navigate the evolving landscape of AI technology.
Report
The analysis examines multiple perspectives on the importance of regulating AI. The speakers stress the necessity of regulations to prevent harm and protect human rights. They argue that regulations should be based on a human rights framework, focusing on the promotion and safeguarding of human rights in relation to AI.
They suggest conducting human rights impact assessments and implementing regulations at every stage of the technology process.
The speakers all agree that AI regulations should not be limited to the tech industry or experts. They propose a collective approach involving tech companies, AI researchers, governments, universities, and the public.
This multi-stakeholder approach would ensure inclusivity and effectiveness in the regulation process.
Enforceability is identified as a major challenge in implementing AI regulations. The complexity of enforcing regulations and ensuring compliance is acknowledged. The speakers believe that regulations should be enforceable but recognize the difficulties involved.
The analysis draws comparisons to other regulated industries, such as nuclear energy and the biomedical model.
The speakers argue that a collective approach, similar to nuclear energy regulation, is necessary in addressing AI challenges. They also suggest using the biomedical model as a reference for AI regulation, given its successful regulation of drug discovery.
A risk-based approach to AI regulation is proposed, considering that different AI applications carry varying levels of risk.
The speakers advocate for categorizing AI into risk-based levels, determining the appropriate regulations for each level.
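A risk-based scheme of this kind can be pictured as a simple lookup from use case to obligation tier, loosely modelled on the EU AI Act's four levels. The example systems and their tier assignments below are illustrative assumptions, not the Act's actual classifications.

```python
# Tiers ordered from most to least restrictive, loosely following
# the EU AI Act's structure (unacceptable / high / limited / minimal).
RISK_TIERS = {
    "unacceptable": {"social scoring by governments"},
    "high": {"credit scoring", "medical diagnosis support"},
    "limited": {"customer-service chatbot"},
    "minimal": {"spam filtering"},
}

def risk_tier(use_case: str) -> str:
    """Return the regulatory tier for a use case; anything not
    explicitly listed falls into the lowest-obligation tier."""
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return tier
    return "minimal"
```

Real frameworks attach different obligations to each tier; the point here is only that regulatory burden scales with assessed risk rather than applying uniformly to all AI systems.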
Potential concerns regarding regulatory capture are discussed, where regulatory agencies may be influenced by the industries they regulate. The analysis, however, points to the aviation industry as a counterexample: despite concerns of regulatory capture, regulation there has driven safety innovations.
In summary, the analysis underscores the importance of AI regulation in mitigating risks and protecting human rights. It emphasizes the need for a human rights framework, a collective approach involving various stakeholders, enforceability, risk-based categorization, and lessons from other regulated industries.
Challenges such as enforceability and regulatory capture are acknowledged, but the analysis encourages the implementation of effective regulations for responsible and ethical AI use.
Report
Digital inclusion is an essential aspect of modern society and is closely linked to the goal of gender equality. It plays a crucial role in integrating marginalized individuals into the use of information and communication technology (ICT) tools. Programs conducted in Afghanistan have shown that digital inclusion efforts can empower women by providing them with the knowledge and resources to actively engage with ICT technologies, bridging the societal gap and enabling them to participate more fully in digital spaces.
Artificial Intelligence (AI) has significant potential in facilitating digital inclusion and promoting social good.
Case studies conducted in Afghanistan demonstrate that integrating AI into online platforms predominantly used by women can enhance diversity, reduce inhibitions, and foster innovative thinking among participants. This highlights the transformative impact of AI in empowering individuals and ensuring their active involvement in digital spaces.
Additionally, emphasizing community empowerment and inclusion in data collection processes is crucial for achieving the Sustainable Development Goals (SDGs).
By involving local communities in training programs focused on AI systems, effective datasets can be created and maintained, ensuring diversity and representation. This approach recognizes the significance of empowering communities and involving them in decision-making processes, thereby promoting inclusivity and collaborative efforts in achieving the SDGs.
It is worth noting that training AI systems solely in English can lead to biases towards specific contexts.
To address this bias and ensure a fairer and more inclusive AI system, training AI in different languages has been implemented in Indonesia and Afghanistan. By expanding the linguistic training of AI, biases towards specific contexts can be minimized, contributing to a more equitable and inclusive implementation of AI technologies.
Moreover, AI has been employed in Afghanistan to address various challenges faced by women and promote women’s empowerment and gender equality.
By utilizing AI for women empowerment initiatives, Afghanistan takes a proactive approach to address gender disparities and promote inclusivity in society.
In conclusion, digital inclusion, AI, and community empowerment are crucial components in achieving the SDGs and advancing towards a sustainable and equitable future.
Successful programs in Afghanistan demonstrate the transformative potential of digital inclusion in empowering women. AI can further facilitate digital inclusion and promote social good by enhancing diversity and inclusivity in digital spaces. Emphasizing community empowerment and inclusion in data collection processes is essential for creating effective and diverse datasets.
Training AI in different languages helps minimize bias towards specific contexts, promoting fairness and inclusivity. Lastly, utilizing AI for women empowerment initiatives contributes significantly to achieving gender equality and equity.
Report
The intersection between advanced technologies and developing countries can have negative implications for social and economic problems. In Colombia, drug cartels have found a new method of distribution by using the cable car system. This not only enables more efficient operations for the cartels but also poses a significant challenge to law enforcement agencies.
Another concern is the potential misuse of AI technologies in communities that are already vulnerable to illicit activities.
The speakers highlight the need to address this issue, as the advanced capabilities of AI can be exploited by those involved in criminal activities, further exacerbating social and economic problems in these areas.
In terms of governance, the Ethiopian government faces challenges due to the fragmentation of its ICT and information systems.
There are multiple systems running on different platforms that do not communicate with each other. This lack of integration and coordination hampers efficient governance and slows down decision-making processes. It is clear that the government needs to address this issue in order to improve overall effectiveness and service delivery.
Furthermore, the dependence of Equatorial Guinea on foreign technology, particularly Huawei and China for its ICT infrastructure, raises concerns about technology sovereignty.
By relying heavily on external entities for critical technology infrastructure, the country runs the risk of losing control over its own systems and data. This dependence undermines the ability to exercise full control and authority over technological advancements within the country.
The speakers express a negative sentiment towards these issues, highlighting the detrimental impact they can have on social and economic development.
It is crucial for policymakers and stakeholders to address these challenges and find appropriate solutions to mitigate the negative effects of advanced technologies in developing countries.
Overall, the analysis reveals the potential risks and challenges that arise from the intersection of advanced technologies and developing countries.
By considering these issues, policymakers can make more informed decisions and implement strategies that help to maximize the benefits of technology while minimizing the negative consequences.
Report
Upon analysis, several compelling arguments and ideas related to artificial intelligence (AI) and its impact on various domains emerge. The first argument revolves around the development of a hyper-democracy platform, initiated with the COLLAGREE system in 2010. Although the specific details regarding this system are not provided, it can be inferred that the intention is to leverage AI to enhance democratic processes.
This project is regarded positively, indicating an optimistic outlook on the potential of AI in improving democratic systems globally.
Another noteworthy argument is the role of AI in addressing social network problems such as fake news and echo chambers.
AI-based recognition of text and discussion structures is highlighted as a potential solution. By leveraging AI algorithms to analyse and detect patterns in text, it becomes possible to identify and counteract the spread of false information and the formation of echo chambers within social networks.
The positive sentiment expressed further underscores the belief in the power of AI to mitigate the negative impact of misinformation on society.
Additionally, the D-Agree system, initially developed as part of the COLLAGREE project, is introduced as a potential solution for addressing specific challenges in Afghanistan.
The system aims to collect opinions from Kabul civilians, indicating a focus on incorporating the perspectives of local populations. Furthermore, collaboration with the United Nations Habitat underscores the potential for the Agri system to contribute to the achievement of Sustainable Development Goals related to good health and well-being (SDG 3) and peace, justice, and strong institutions (SDG 16).
Lastly, the positive sentiment encompasses the potential of AI to support crowd-scale discussions through the use of multiple AI agents.
A multi-agent architecture for group decision support is being developed, which emphasises the collaborative capabilities of AI in facilitating large-scale deliberations. This development aligns with the goal of fostering industry, innovation, and infrastructure (SDG 9).
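The multi-agent idea can be sketched as one lightweight agent per discussion thread plus a coordinator that merges their outputs into a group-level view. The keyword-counting agents below are crude stand-ins for the actual AI components, whose internals are not detailed in the session.

```python
from collections import Counter

def thread_agent(posts):
    """Stand-in facilitation agent: tally topic words in one thread."""
    words = Counter()
    for post in posts:
        words.update(w.lower().strip(".,!?") for w in post.split())
    return words

def coordinator(threads, top_n=3):
    """Merge per-thread agent tallies into the group's most-raised topics."""
    merged = Counter()
    for posts in threads:
        merged += thread_agent(posts)
    return [word for word, _ in merged.most_common(top_n)]

threads = [
    ["Water supply is unreliable.", "Water and roads need repair."],
    ["Roads are unsafe at night.", "Schools lack water access."],
]
```

The architectural point is the division of labour: per-thread agents keep facilitation tractable at crowd scale, while the coordinator produces a single deliberation summary for decision support.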
The overall analysis showcases the diverse applications and benefits of AI in various domains, including democracy, social networks, conflict zones like Afghanistan, and large-scale discussions.
These discussions and arguments highlight the hopeful perspective of leveraging AI to address complex societal challenges. However, it is important to note that further information and evidence would be necessary to fully understand the potential impact and limitations of these AI systems.
Summary: The analysis reveals promising arguments for the use of artificial intelligence (AI) in different domains.
The development of a hyper-democracy platform through the COLLAGREE system shows optimism for enhancing democratic processes. AI’s potential in combating fake news and echo chambers is underscored, providing hope for addressing social network problems. The D-Agree system’s focus on collecting opinions from Kabul civilians in Afghanistan and its collaboration with the United Nations Habitat suggest its potential in contributing to the SDGs.
The use of multiple AI agents for crowd-scale discussions exhibits AI’s collaborative capabilities. Overall, AI presents opportunities to tackle complex societal challenges, though further information is needed to fully evaluate its impact.