AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131
Event report
Speakers and Moderators
Speakers:
- Esther Kunda, Director General, Innovation & Emerging Technologies, Ministry of ICT & Innovation, Government of Rwanda (Government, Africa)
- Robert Opp, UNDP Chief Digital Officer (International Organization, HQ/North America)
- Dr. Romesh Ranawana, Group Chief Analytics & AI Officer at Dialog Axiata, and Chairman of the National Committee to formulate an AI Policy and Strategy for Sri Lanka (Technical Community, South Asia)
Moderators:
- Yasmine Hamdar, UNDP Chief Digital Office
- Alena Klatte, UNDP
Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.
Session report
Audience
Countries around the world face significant challenges in implementing artificial intelligence (AI) due to variations in democratic processes and in the understanding of ethical practices. Differences in governance structures and ethical frameworks make it difficult for countries with non-democratic processes to grasp and navigate the complexities of AI ethics, and even established democracies such as the Netherlands encounter issues arising from these disparities.
Furthermore, many countries are rushing to implement AI without giving due consideration to important factors such as data quality, data collection, and data protection and privacy laws. The focus tends to be on deploying AI algorithms without laying down the core elements required for a successful transition to AI-driven systems. This is a particular cause for concern in much of the Global South, where data protection and privacy laws are often inadequate.
The lack of adequate data quality and collection mechanisms, coupled with inadequate data protection and privacy laws, raises serious concerns about the safety and integrity of AI systems. Without proper measures in place, there is a risk of bias, discrimination, and potential misuse of data, which can have far-reaching consequences for individuals and societies.
In order to address these challenges, governments must recognize the need to ensure that their technical infrastructure and workforce skills are agile enough to adapt to new AI technologies as they emerge. The rapid advances in AI capabilities require a proactive approach in developing the necessary infrastructure and upskilling the workforce to keep up with the evolving technology.
In conclusion, the implementation of AI is hindered by variations in democratic processes and understanding of ethical practices among countries. Rushing into AI implementation without addressing critical issues such as data quality and protection can lead to significant problems, particularly in countries with insufficient data protection and privacy laws. Governments play a crucial role in fostering appropriate technical infrastructure and developing the necessary skills to effectively navigate the challenges posed by AI technologies.
Jingbo Huang
Jingbo Huang places significant emphasis on the importance of collective intelligence in both human-to-human and human-to-machine interactions. He recognizes the potential for artificial intelligence (AI) and human intelligence to work in unison to tackle challenges, highlighting the positive aspects of this partnership rather than focusing solely on the negatives. Huang emphasizes the need for collaboration and preparation among human entities to ensure the integration of AI into society benefits all parties involved.
Huang further expresses curiosity about the collaboration between different AI assessment tools developed by various organizations. Specifically, he mentions the UNDP’s AI readiness assessment tool and raises questions about how it aligns or interacts with tools developed by the OECD, Singapore, Africa, and others. This indicates Huang’s interest in exploring potential synergies and knowledge-sharing among these assessment tools.
Additionally, Huang demonstrates an interest in understanding the challenges faced by panelists during AI conceptualization and implementation. Although specific supporting facts are not provided, this suggests Huang’s desire to explore the obstacles encountered in bringing AI projects to fruition. By examining these challenges, he aims to acquire knowledge that can help overcome barriers and facilitate the successful integration of AI into various industry sectors.
In summary, Jingbo Huang underscores the significance of collective intelligence, both within human-to-human interactions and between human and machine intelligence. Huang envisions a collaborative approach that leverages the strengths of both AI and human intelligence to address challenges. He also shows a keen interest in exploring how different AI assessment tools can work together, seeking to identify potential synergies and compatibility. Moreover, he expresses curiosity about the challenges faced during the AI conceptualization and implementation process. These insights reflect Huang’s commitment to fostering mutual understanding, collaboration, and effective utilization of AI technologies.
Denise Wong
Singapore has taken a human-centric and inclusive approach to AI governance, prioritising digital readiness and adoption within communities. This policy aims to ensure that the benefits of AI are accessible and beneficial to all members of society. The model governance framework developed by Singapore aligns with OECD principles, demonstrating their commitment to ethical and responsible AI practices.
In adopting a multi-stakeholder approach, Singapore has sought input from a diverse range of companies, both domestic and international. They have collaborated with the World Economic Forum Centre for the Fourth Industrial Revolution on ISAGO (the Implementation and Self-Assessment Guide for Organisations) and have worked with a local company to write a discussion paper on generative AI. This inclusive approach allows for a variety of perspectives and fosters collaboration between different stakeholders in the development of AI governance.
Practical guidance is a priority for Singapore in AI governance. They have created a compendium of use cases that serves as a reference for both local and international organisations. Additionally, they have developed ISAGO, an implementation and self-assessment guide for companies to ensure that they adhere to best practices in AI governance. Furthermore, Singapore has established the AI Verify Foundation, an open-source foundation that provides an AI toolkit to assist organisations in implementing AI in a responsible manner.
Singapore recognises the importance of international alignment and interoperability in AI governance. They encourage alignment with international organisations and other governments and advocate for an open industry focus on critical emerging technologies. Singapore believes that future conversations in AI governance will revolve around international technical standards and benchmarking, which will facilitate cooperation and harmonisation of AI practices globally.
However, concerns are raised about the fragmentation of global laws surrounding AI; compliance costs can increase when laws are fragmented, which could hinder the development and adoption of AI technologies. Singapore acknowledges the need for a unified framework and harmonised regulations to mitigate these challenges.
Additionally, there is apprehension about the potential negative impacts of technology, especially in terms of widening divides and negatively affecting vulnerable groups. Singapore, being a highly connected society, is aware of the possibility of certain groups being left behind. Bridging these divides and ensuring that technology is inclusive and addresses the needs of vulnerable populations is a priority in their AI governance efforts.
Cultural and ethnic sensitivities in combination with black-box technology are also a concern: it remains unpredictable whether technology will fragment or unify communities, particularly along ethnic and cultural lines. Singapore acknowledges the importance of a culturally specific perspective for better understanding the potential impacts of AI.
In conclusion, Singapore’s approach to AI governance encompasses human-centricity, inclusivity, and practical guidance. Their multi-stakeholder approach ensures a diversity of perspectives, and they prioritise international alignment and interoperability in AI governance. While concerns exist regarding the fragmentation of global laws and the potential negative impacts on vulnerable groups and cultural sensitivities, Singapore actively addresses these issues to create an ethical and responsible AI ecosystem.
Dr. Romesh Ranawana
Sri Lanka is at an early stage of its AI journey and lags behind many other countries in both AI readiness and capacity.
However, the government of Sri Lanka has recognised the importance of AI development and has taken the initiative to develop a national AI policy and strategy, expected to be rolled out in November 2023 and April 2024 respectively. The government understands that engagement in AI development should not be limited to the private sector or select universities; it needs to be a national initiative involving various stakeholders.
Currently, AI projects in Sri Lanka face challenges in terms of their implementation. Although over 300 AI projects were conducted by university students in the country last year, none of them went into production. The proposed AI projects in Sri Lanka often do not progress beyond the conceptual stage. This highlights the need for better infrastructure and support to bring these projects to fruition.
One of the primary obstacles to AI advancement in Sri Lanka is the lack of standardized and digitized data. Data is often siloed and still available in paper format, making it difficult to utilize it effectively for AI applications. This challenge is not just technical but also operational, requiring a change in mindsets, awareness, and trust. Efforts to develop AI projects are being wasted due to the absence of consolidated data sets that address national problems.
In order to overcome these challenges, Sri Lanka aims to establish a sustainable, inclusive, and open digital ecosystem. The United Nations Development Programme (UNDP) is working on an AI readiness assessment for Sri Lanka. This assessment will help identify areas that need improvement and provide recommendations to establish an ecosystem that fosters AI development.
In conclusion, Sri Lanka is in the early stages of improving its AI readiness and capacity. The government is taking an active role in formulating a national AI policy and strategy. However, there are challenges in terms of implementing AI projects, primarily due to the lack of standardized and digitized data. Efforts are being made to address these challenges and establish a sustainable digital ecosystem that supports AI development.
Alison Gillwald
In Africa, achieving digital readiness for artificial intelligence (AI) poses significant challenges due to several fundamental obstacles. Limited use of the internet is a major barrier: many African countries have around 95% broadband coverage, yet less than 20% of the population experiences the network effects of being online. This gap between coverage and actual use severely limits the potential benefits of AI. In addition, the high cost of devices prevents a large portion of the population from acquiring the technology needed to get online and engage with AI applications, and rural location is a greater hindrance to access than gender, further exacerbating the digital divide in Africa.
Education emerges as a key driver of digital readiness and of the ability to absorb AI applications in Africa. Access to education strongly influences whether individuals can afford devices and, in turn, their ability to engage with AI technology. Investing in education is therefore crucial for enhancing digital readiness and facilitating successful AI adoption in Africa.
The African Union Data Policy Framework plays a critical role in creating an enabling environment for AI in Africa. The framework recognizes the significance of digital infrastructure in supporting the African Continental Free Trade Area and provides countries with a clear action plan together with alignment and implementation support. It aims to overcome the challenges faced in achieving digital readiness for AI in Africa.
Addressing data governance challenges and managing the implications of AI require global cooperation. Currently, 90% of the data extracted from Africa goes to big tech companies abroad, necessitating the development of global governance frameworks to effectively manage digital public goods. Collaboration on an international scale is essential to ensure that data governance supports AI development while protecting the interests and sovereignty of African nations.
Structural inequalities pose a significant challenge to equal AI implementation. When AI blueprints from countries with different political economies are implemented in other societies, inequalities are deepened, leading to the perpetuation of inequitable outcomes. Ethical concerns surrounding AI are also raised, highlighting the role played by major tech companies, particularly those rooted in the world’s most prominent democracies. Ethical challenges arise from these companies’ actions and policies, which have far-reaching implications for AI development.
An additional concern is the presence of bias and discrimination in AI algorithms due to the absence of digitization in some countries. In certain nations, such as Sri Lanka, where there is a lack of full digitization, people remain offline, resulting in their invisibility, underrepresentation, and discrimination in AI algorithms. This highlights the inherent limitations of AI datasets in being truly unbiased and inclusive, as they rely on digitized data that may exclude significant portions of the global population.
In conclusion, African countries face several challenges in achieving digital readiness for AI, including limited internet access, high device costs, and rural location constraints. Education plays a crucial role in enhancing digital readiness, while the African Union Data Policy Framework provides an important foundation for creating an enabling environment. Addressing data governance challenges and managing the implications of AI require global cooperation and collaboration. Structural inequalities and ethical concerns pose significant risks to the equitable implementation of AI. Additionally, the absence of digitization in some countries leads to bias and discrimination in AI algorithms.
Alain Ndayishimiye
AI has the potential to have a profound impact on societies, but it requires responsible and transparent practices to ensure its successful integration and development. Rwanda is actively harnessing the power of AI to advance its social and economic goals. The country aims to become an upper middle-income nation by 2035 and a high-income country by 2050, relying heavily on AI technologies.
Rwanda’s national AI policy is considered a beacon of responsible and inclusive AI. This policy serves as a roadmap for the country’s AI development and deployment and was developed collaboratively with various stakeholders. Through this multi-stakeholder approach, Rwanda was able to create a comprehensive and robust policy framework that supports responsible AI practices.
One key benefit of the multi-stakeholder approach in developing Rwanda’s AI policy is the promotion of knowledge sharing and capacity building. By bringing together different stakeholders, experiences and insights were shared, fostering learning and collaboration. This approach also contributed to the strengthening of local digital ecosystems, creating a supportive environment for the development and implementation of AI technologies.
However, ethical considerations remain important in the development and deployment of AI. Concerns such as biases in AI models and potential privacy breaches need to be addressed to ensure AI is used ethically and does not harm individuals or society. Additionally, the impact of AI on job displacement and potential misuse in surveillance should be carefully managed and regulated.
To further promote the responsible use of AI and create a harmonised environment, it is crucial for African countries to collaborate and harmonise their AI policies and regulations. This would allow for a unified approach when dealing with large multinational companies and help reduce the complexities of regulation. Harmonisation would also facilitate the development of shared digital infrastructure, attracting global tech giants by providing a consistent and supportive regulatory environment.
In conclusion, the transformative potential of AI for societies is significant, but responsible and transparent practices are essential in its development and deployment. Rwanda’s national AI policy serves as an example of responsible and inclusive AI, with a multi-stakeholder approach promoting knowledge sharing and capacity building. However, ethical considerations and the harmonisation of AI policies among African countries should be prioritised to ensure the successful integration and benefits of the digital economy, positioning Africa as a significant player in the global digital space.
Galia Daor
The Organisation for Economic Co-operation and Development (OECD) has been actively involved in the field of artificial intelligence (AI) since 2016. They adopted the first intergovernmental standard on AI, the OECD AI Principles, in 2019. These principles consist of five values-based principles for all AI actors and five policy recommendations for governments and policymakers.
The values-based principles of the OECD AI Principles emphasize fairness, transparency, accountability, and human-centred values. They aim to ensure that AI systems respect human rights, promote fairness, avoid discrimination, and maintain accountability. Through this work, the OECD seeks to establish a global framework for responsible AI development and use.
The OECD AI Principles also provide policy recommendations to assist governments in developing national AI strategies that align with the principles. The OECD supports countries in adapting and revising their AI strategies according to these principles.
In addition, the OECD emphasizes the need for global collaboration in AI development. They believe that AI should not be controlled solely by specific companies or countries. Instead, they advocate for a global approach to maximize the potential benefits of AI and ensure equitable outcomes.
While the OECD is optimistic about the positive changes AI can bring, they express concerns about the fragmentation of AI development. They highlight the importance of cohesive efforts and coordination to avoid hindering progress through differing standards and practices.
To conclude, the OECD’s work on AI focuses on establishing a global framework for responsible AI development and use. They promote principles of fairness, transparency, and accountability and provide support to countries in implementing these principles. The OECD also emphasizes the need for global collaboration and acknowledges the potential challenges posed by fragmentation in AI development.
Robert Opp
Embracing artificial intelligence (AI) could drive significant progress towards the Sustainable Development Goals (SDGs), according to a report by the UN Development Programme (UNDP) and the ITU. The report highlights the positive impact that digital technology, including AI, could have on 70% of the SDG targets. However, the adoption of AI varies among countries because they are at different stages of digital transformation and face different challenges.
For instance, Sri Lanka requires a national-level initiative to build AI readiness and capacity, since this cannot be achieved by the corporate or private sector alone; other countries have recognized this and implemented national-level initiatives. UNDP is actively involved in supporting digital programming and has initiated the AI readiness process in Sri Lanka, Rwanda, and Colombia. This process aims to complement national digital transformation processes and views the government as an enabler of AI.
Challenges in implementing AI include fragmentation, financing, ensuring foundation issues are addressed, and representation and diversity. Fragmentation and foundational issues have been identified as concerns, as AI is only as good as the data it is trained on. Additionally, financing issues may hinder the effective implementation of AI, and it is crucial to ensure representation and diversity to avoid bias and promote fairness.
Advocates argue for a multi-stakeholder and human-centered approach to AI development as a method of risk management. This approach emphasizes the importance of including various worldviews and cultural relevancy in the development process.
The report also highlights the need for inclusivity and leaving no one behind in the journey towards achieving the SDGs. It champions working with indigenous communities, who represent different worldviews, to ensure that every individual has the opportunity to realize their potential.
In conclusion, AI presents a unique opportunity for human progress and the achievement of the SDGs. However, careful consideration must be given to address challenges such as fragmentation, financing, foundation issues, and representation and diversity. By adopting a multi-stakeholder and human-centered approach, AI can be harnessed effectively and inclusively to drive sustainable development and improve the lives of people worldwide.