Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196
Event report
Speakers and Moderators
Speakers:
- Prateek Sibal, Intergovernmental Organization
- Owen Larter, Private Sector, Western European and Others Group (WEOG)
- Thomas Schneider, Government, Western European and Others Group (WEOG)
- Clara Neppel, Technical Community, Western European and Others Group (WEOG)
- Maria Paz Canales, Civil Society, Latin American and Caribbean Group (GRULAC)
- Nobuhisa Nishigata, Government, Asia-Pacific Group
- Suzanne Akkabaoui, Government, African Group
- Karine Perset, Intergovernmental Organization (OECD)
Moderators:
- Timea Suto, Private Sector, Eastern European Group
Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.
Session report
Owen Larter
Microsoft has made significant efforts to prioritise the responsible use of AI technology. They have dedicated six years to building their responsible AI programme, which involves a team of over 350 experts in various fields including engineering, research, legal, and policy. Their responsible AI standard is based on the principles outlined by the Organisation for Economic Co-operation and Development (OECD), emphasising the importance of ethical AI practices.
In addition to their internal initiatives, Microsoft recognises the need for active participation from private companies and the industry in AI governance discussions. To foster collaboration and best practices, they have founded the Frontier Model Forum, which brings together leading AI labs. This forum focuses on developing technical guidelines and standards for frontier models. Microsoft also supports global efforts, such as those taking place at the United Nations (UN) and the OECD, to ensure that AI technology is governed responsibly.
Another crucial aspect highlighted by Microsoft is the importance of regulations to effectively manage the use and development of AI. They actively share their insights and experiences to help shape regulations that address the unique challenges posed by AI technology. Furthermore, Microsoft aims to build capacity for governments and industry regulators, enabling them to navigate the complex landscape of AI and ensure the adoption of responsible practices.
Microsoft emphasises the need for safeguards at both the model and application levels of AI development. The responsible development of AI models includes considering ethical considerations and ensuring that the model meets the necessary requirements. However, Microsoft acknowledges that even if the model is developed responsibly, there is still a risk if the application level lacks proper safeguards. Therefore, they stress the importance of incorporating safeguards throughout the entire AI development process.
Microsoft also supports global governance for AI and advocates for a representative process in developing standards. They believe that a global governance regime should aim for a framework that includes standard setting, consensus on risk assessment and mitigation, and infrastructure building. Microsoft cites the International Civil Aviation Organization and the Intergovernmental Panel on Climate Change as potential models for governance, highlighting the importance of collaborative and inclusive approaches to effectively govern AI.
In conclusion, Microsoft takes the responsible use of AI technology seriously, as evident through their comprehensive responsible AI programme, active participation in AI governance discussions, support for global initiatives, and commitment to shaping regulations. They emphasise the need for safeguards at both the model and application levels of AI development and advocate for global governance that is representative, consensus-driven, and infrastructure-focused. Through their efforts, Microsoft aims to ensure that AI technology is harnessed responsibly and ethically, promoting positive societal impact.
Clara Neppel
During the discussion, the speakers highlighted the ethical challenges associated with technology development. They emphasised the need for responsibility and the embedding of values and business models into technology. IEEE, with its constituency of 400,000 members, was recognised as playing a significant role in addressing these challenges.
Transparency, value-based design, and bias in AI were identified as crucial areas of concern. IEEE has been actively engaging with regulatory bodies to develop socio-technical standards in these areas. They have already established a standard that defines transparency and another standard focused on value-based design. These efforts aim to ensure that AI systems are accountable, fair, and free from bias.
The importance of standards in complementing regulation and bringing interoperability to regulatory requirements was emphasised. IEEE has been involved in discussions with experts to address the technical challenges related to AI. An example was provided in the form of the UK's Children's Code (Age Appropriate Design Code), which was complemented by an IEEE standard on age-appropriate design. This highlights how standards can play a crucial role in ensuring compliance and interoperability within regulatory frameworks.
Capacity building for AI certification was also discussed as an essential component. IEEE has trained over 100 individuals for AI certification and is also working on training certification bodies to carry out assessments. This capacity building process ensures that individuals and organisations possess the necessary skills and knowledge to navigate the complex landscape of AI and contribute to its responsible development and deployment.
The panel also explored the role of the private sector in protecting democracy and the rule of law. One speaker argued that it is not the responsibility of the private sector to safeguard these fundamental principles. However, another speaker highlighted the need for legal certainty, which can only be provided through regulations, in upholding the rule of law. This debate prompts further reflection on the appropriate roles and responsibilities of different societal actors in maintaining democratic values and institutions.
The negative impact of uncertainty on the private sector was acknowledged. The uncertain environment poses challenges for businesses and impedes economic growth and job creation. This concern underscores the need for stability and predictability to support a thriving and sustainable private sector.
Lastly, the importance of feedback loops and common standards in AI was emphasised. This includes ensuring that feedback is taken into account to improve and retrain AI systems. Drawing from lessons learned in the aviation industry, the development of benchmarking and common standards was seen as vital for enabling efficient and effective collaboration across different AI systems and applications.
In conclusion, the speakers underscored the importance of addressing ethical challenges in technology development, specifically in the context of AI. IEEE’s involvement in shaping socio-technical standards, capacity building for AI certification, and the need for transparency, value-based design, and addressing bias were highlighted. The role of standards in complementing regulation and promoting interoperability was emphasised, along with the necessity of legal certainty to uphold democracy and the rule of law. The challenges posed by uncertainty in the private sector and the significance of feedback loops and common standards in AI were also acknowledged. These insights contribute to the ongoing discourse surrounding the responsible development and deployment of technology.
Maria Paz Canales
The discussion on artificial intelligence (AI) governance highlighted several key points. Firstly, there is a need for a clearer understanding and clarification of AI governance. This involves the participation of various actors, including civil society organizations and communities that are directly affected by AI. Civil society organizations require a better understanding of AI risk areas, as well as effective strategies for AI implementation.
Another important point is the evaluation of AI’s impact on rights, with a specific focus on inclusiveness. It is essential to assess the effect of AI on civil, political, economic, social, and cultural rights. Unfortunately, the perspectives of communities impacted by AI are often excluded in the assessment of AI risks. Therefore, it is necessary to include the viewpoints and experiences of these communities to comprehensively evaluate the impact of AI. This inclusive approach to AI impact assessment can lead to the development of responsible and trustworthy technology.
The discussion also emphasized the importance of education and capacity building in understanding AI’s impact. The public cannot fully comprehend the consequences of AI without being adequately educated about the subject in a concrete and understandable way. Therefore, it is crucial to provide not only technical language but also information on how AI impacts daily life and basic rights. By enhancing education and capacity building, individuals can better grasp the implications and intricacies of AI technology.
Furthermore, it was highlighted that there should be some level of complementarity between voluntary standards and legal frameworks in AI governance. This includes ensuring responsibility at different levels, from the design stage to the implementation and functioning of AI systems. The relationship between voluntary standards and legal frameworks must be carefully balanced to create an effective governance structure for AI.
In addition, the discussion underscored the importance of accounting for shared responsibility within the legal framework. It is crucial to establish effective communication between the different operators involved in the production and use of AI. This communication should adhere to competition rules and intellectual property regulations, avoiding any violations. By accounting for shared responsibility, the legal framework can ensure ethical and responsible AI governance.
Lastly, the discussion emphasized the need for a bottom-up approach in AI governance. This approach involves the active participation of societies and different stakeholders at both the local and global levels. Geopolitically, there is a need to hear more about experiences and perspectives from various stakeholders in global-level governance discussions. By adopting a bottom-up approach, AI governance can become more democratic, inclusive, and representative of the diverse needs and interests of stakeholders.
In conclusion, the discussion on AI governance highlighted the importance of a clearer understanding and clarification of AI governance, inclusiveness in impact assessment, education and capacity building, complementarity between voluntary standards and legal frameworks, the consideration of shared responsibility, and a bottom-up approach in AI governance. By addressing these aspects, it is possible to develop responsible and trustworthy AI technology to benefit society as a whole.
Thomas Schneider
The topic of AI regulation is currently being discussed in the context of its application. The argument put forth is that instead of regulating AI as a tool, it should be regulated in consideration of how it is used. This means that regulations should be tailored to address the specific applications and potential risks of AI, rather than imposing blanket regulations on all AI technologies. It is believed that this approach will allow for a more nuanced and effective regulation of AI.
Voluntary commitment in AI regulations is seen as an effective approach, provided that the right incentives are in place. This means that instead of enforcing compulsory regulations, which can often be complicated and unworkable, voluntary agreements can be more successful. By providing incentives for AI developers and users to adhere to certain standards and guidelines, it is believed that a more cooperative and collaborative approach can be fostered, ensuring responsible and ethical use of AI technology.
The Council of Europe is currently developing the first binding convention on AI and human rights, which is seen as a significant step forward. This intergovernmental agreement aims to commit states to uphold AI principles based on the norms of human rights, democracy, and the rule of law. The convention is not only intended to ensure the protection of fundamental rights in the context of AI, but also to create interoperable legal systems within countries. This represents a significant development in the global governance of AI and the protection of human rights.
The need for agreement on fundamental values is also highlighted in the discussions on AI regulation. It is essential to have a consensus on how to respect human dignity and ensure that technological advancements are made while upholding and respecting human rights. This ensures that AI development and deployment align with society’s values and principles.
Addressing legal uncertainties and tackling new challenges is another important aspect of AI regulation. As AI technologies continue to evolve, it is necessary to identify legal uncertainties and clarify them to ensure a clear and coherent regulatory framework. Breaking down new elements and challenges associated with AI is crucial to ensuring that regulations are effective and comprehensive.
In solving the problems related to AI regulation, it is emphasized that using the best tools and methods is essential. Different problems may require different approaches, with some methods being faster but less sustainable, while others may be more sustainable but take longer to implement. By utilizing a mix of tools and methods, it is possible to effectively address identified issues.
Stakeholder cooperation is also considered to be of utmost importance in the realm of AI regulation. All stakeholders, including governments, businesses, researchers, and civil society, need to continue to engage and cooperate with each other, leveraging their respective roles. This collaboration ensures that diverse perspectives and expertise are taken into account when formulating regulations, thereby increasing the chances of effective and balanced AI regulation.
However, there is also opposition to the creation of burdensome bureaucracy in the process of regulating AI. Efforts should be made to clarify issues and address challenges without adding unnecessary administrative complexity. It is crucial to strike a balance between ensuring responsible and ethical use of AI technology and avoiding excessive burdens on AI developers and users.
In conclusion, the discussions on AI regulation are centered around the need to regulate AI in consideration of its application, rather than treating it as a tool. Voluntary commitments are seen as effective approaches, provided the right incentives are in place, while the binding convention being developed by the Council of Europe offers a complementary legal instrument. Agreement on fundamental values, addressing legal uncertainties, and stakeholder cooperation are crucial aspects of AI regulation. It is important to strike a balance between effective regulation and avoiding burdensome bureaucracy.
Suzanne Akkabaoui
Egypt has developed a comprehensive national AI strategy aimed at fostering the growth of an AI industry. The strategy is based on four pillars: AI for government, AI for development, capacity building, and international relations. It focuses on leveraging AI technologies to drive innovation, improve governance, and address societal challenges.
Under the AI for government pillar, Egypt aims to enhance the effectiveness of public administration by adopting AI technologies. This includes streamlining administrative processes, improving decision-making, and delivering efficient government services.
The AI for development pillar highlights Egypt’s commitment to utilizing AI as a catalyst for economic growth and social development. The strategy focuses on promoting AI-driven innovation, entrepreneurship, and tackling critical issues such as poverty, hunger, and inequality.
Capacity building is prioritized in Egypt’s AI strategy to develop a skilled workforce. The country invests in AI education, training programs, research, and collaboration between academia, industry, and government.
International cooperation is emphasized to exchange knowledge, share best practices, and establish standardized approaches to AI governance. Egypt actively participates in global discussions on AI policies and practices as a member of the OECD AI network.
To ensure responsible AI deployment, Egypt has issued a charter that provides guidelines for promoting citizen well-being and aligning with national goals. These guidelines address aspects such as robustness, security, safety, and social impact assessments.
Egypt also recognizes the importance of understanding cultural differences and bridging gaps for technological advancements. The country aims to address cultural gaps and promote inclusivity, ensuring that the benefits of AI are accessible to all segments of society.
Overall, Egypt’s AI strategy demonstrates a commitment to creating an AI industry and leveraging AI for governance, development, capacity building, and international cooperation. The strategy aims to foster responsible and inclusive progress through AI technologies.
Moderator
In the analysis, several speakers discuss various aspects of AI and its impact on different sectors and societies. One key point raised is the need for more machines, especially AI, in Japan to sustain its ageing economy. With Japan facing social problems related to an ageing population, the introduction of both more people and more machines is deemed necessary. Notably, while Japan recognises the importance of AI, it believes that AI's opportunities and possibilities should be prioritised over legislation. Japan views AI as a solution rather than a problem and wants to see more of what AI can do for its society before introducing any regulations.
Furthermore, the G7 delegates are focused on creating a report that specifically examines the risks, challenges, and opportunities of new technology, particularly generative AI. They have sought support from the OECD in summarising this report. This highlights the importance of international cooperation in addressing the impact of AI.
Egypt also plays a significant role in the AI discussion. The country has a national AI strategy that seeks to create an AI industry while emphasising that AI should enhance human labour rather than replace it. Egypt has published an AI charter for responsible AI and is a member of the OECD AI network. This showcases the importance of aligning national strategies with global initiatives and fostering regional and international collaborations.
Microsoft is another notable player in the field of AI. The company is committed to developing AI technology responsibly. It has implemented a responsible AI standard based on OECD principles, which is shared externally for critique and improvement. Microsoft actively engages in global governance conversations, particularly through the Frontier Model Forum, where they accelerate work on technical best practices. Their contributions highlight the importance of private sector involvement in governance discussions.
UNESCO has made significant contributions to the AI discourse by developing a recommendation on the ethics of AI. The recommendation was developed through a two-year multi-stakeholder process and adopted by 193 countries. It provides a clear indication of ethical values and principles that should guide AI development and usage. Furthermore, UNESCO is actively working on capacity building to equip governments and organizations with the necessary skills to implement AI systems ethically.
In terms of addressing concerns and ensuring inclusivity, it is highlighted that AI bias can be addressed even before AI regulations are in place. Existing human rights frameworks and data protection laws can start to address challenges related to bias, discrimination, and privacy. For example, UNESCO has been providing knowledge on these issues to judicial operators to equip them with the necessary understanding of AI and the rule of law. Additionally, discussions around AI governance emphasise the need for clarity in frameworks and the inclusion of individuals who are directly impacted by AI technologies.
The analysis also suggests that responsible development, governance, regulation, and capacity building should be multi-stakeholder and cooperative processes. All sectors need to be involved in the conversation and implementation of AI initiatives to ensure effective and inclusive outcomes.
An interesting observation is the need for a balance between voluntary standards and legal frameworks. Complementarity is needed between these two approaches, especially in the design, implementation, and use of AI systems. Furthermore, the analysis highlights the importance of a bottom-up approach in global governance, taking into account diverse stakeholders and global geopolitical contexts. By incorporating global experiences from different stakeholders, risks can be identified, and relevant elements can be considered in local contexts.
Overall, the analysis provides insights into various perspectives on AI and the importance of responsible development, global collaboration, and inclusive policies in shaping the future of AI.
Seth Center
AI regulation needs to keep up with the rapid pace of technological advancements, as there is a perceived inadequacy of government response in this area. The tension between speed and regulation is particularly evident in the case of AI. The argument put forth is that regulations should be able to adapt and respond quickly to the ever-changing landscape of AI technology.
On the other hand, it is important to have risk frameworks in place to address the potential risks associated with AI. The United States has taken steps in this direction by introducing the AI Bill of Rights and Risk Management Framework. These foundational documents have been formulated with the contributions of 240 organizations and cover the entire lifecycle of AI. The multi-stakeholder approach ensures that various perspectives are considered in managing the risks posed by AI.
Technological advancements, such as the development of foundation models, have ushered in a new era of AI. Leading companies in this field are primarily located in the United States. This highlights the influence and impact of US-based companies on the development and deployment of AI technologies worldwide. The emergence of foundation models has disrupted the technology landscape, showcasing the potential and capabilities of AI systems.
To address the challenges associated with rapid AI evolution, the implementation of voluntary commitments has been proposed. The White House has devised a framework called ‘Voluntary Commitments’ to enhance responsible management of AI systems. This framework includes elements such as red teaming, information sharing, basic cybersecurity measures, public transparency, and disclosure. Its objective is to build trust and security amidst the fast-paced evolution of AI technologies.
In conclusion, it is crucial for AI regulation to keep pace with the rapid advancements in technology. The perceived inadequacy of government response highlights the need for agile and adaptive regulations. Additionally, risk frameworks, such as the AI Bill of Rights and Risk Management Framework, are important in managing the potential risks associated with AI. The emergence of technologies like foundation models has brought about a new era of AI, with leading companies based in the US driving innovation in this field. The implementation of voluntary commitments, as proposed by the White House, aims to foster trust and security in the ever-evolving landscape of AI technologies.
Audience
The discussions at the event highlighted the challenges that arise in capacity building due to time and financial commitments. Ansgar Kuhn from EY raised the problem of time commitments related to the capacity building process, which may bring additional financial burdens for various parties. Small to Medium Enterprises (SMEs) and civil society organizations might not be able to afford the cost of someone who isn’t directly contributing to their main products or services. Academics may also struggle to get academic credit for engaging in this kind of process.
To address these financial and time commitment issues in capacity building, participants stressed the importance of finding solutions. Ansgar Kuhn specifically asked for suggestions to tackle this problem, underscoring the need to explore feasible strategies to alleviate the burden that time and financial commitments place on different stakeholders.
There were also concerns raised about the implementation of responsible AI, particularly regarding the choice between system level guardrails and model level guardrails. The discussion highlighted worries about tech vendors providing unsafe models if responsibility for responsible AI is pushed to the system level. This sparked a debate about the best approach to implement responsible AI and the potential trade-offs associated with system level versus model level guardrails.
Moreover, the event touched upon the Hiroshima process and the expectation of a principle-based approach to AI. The previous G20 process, which focused on creating data free flow with trust, served as a reference point for the discussion. There was a question about the need for a principle approach for AI, suggesting the desire to establish ethical guidelines and principles to guide the development and deployment of AI technologies.
In conclusion, the discussions shed light on the challenges posed by time and financial commitments in capacity building and the need for solutions to mitigate these issues. Concerns about system level versus model level guardrails in responsible AI implementation emphasized the importance of balancing safety and innovation in AI. The desire for a principle-based approach to AI and the establishment of ethical guidelines were also highlighted.
Prateek Sibal
UNESCO has developed a comprehensive recommendation on the ethics of AI, achieved through a rigorous multi-stakeholder process. Over two years, 24 global experts collaborated on the initial draft, which then underwent around 200 hours of intergovernmental negotiations. In 2021, all 193 member countries adopted the recommendation, emphasizing the global consensus on addressing ethical concerns surrounding AI.
The recommendation includes values of human rights, inclusivity, and sustainability, serving as a guide for developers and users. It emphasizes transparency and explainability, ensuring AI systems are clear and understandable.
UNESCO is implementing the recommendation through various tools, forums, and initiatives. This includes a readiness assessment methodology, a Global Forum on the Ethics of AI, and an ethical impact assessment tool for governments and companies procuring AI systems.
The role of AI in society is acknowledged, with an example of a robot assisting teachers potentially influencing learning norms. Ethical viewpoints are crucial to align AI with societal expectations.
Prateek Sibal advocates for inclusive multi-stakeholder conversations around AI, emphasizing the importance of awareness, accessibility, and sensitivity towards different sectors. He suggests financial compensation to facilitate civil society engagement.
In conclusion, UNESCO’s recommendation on AI ethics provides valuable guidelines for responsible AI development. Their commitment to implementation and inclusive dialogue strengthens the global effort to navigate ethical challenges presented by AI.
Galia
The speakers in the discussion highlight the critical role of global governance, stakeholder engagement, and value alignment in working towards SDG 16, which focuses on Peace, Justice, and Strong Institutions. They address the challenges faced in implementing these efforts and stress the importance of establishing credible value alignment.
Galia, one of the speakers, emphasizes the mapping exercises conducted with the OECD regarding risk assessment. This suggests that the speakers are actively involved in assessing potential risks and vulnerabilities in the context of global governance. The mention of these mapping exercises indicates that concrete steps are being taken to identify and address potential obstacles.
Both speakers agree that stakeholder engagement is essential in complementing global governance. Galia specifically highlights the significance of ensuring alignment with values at a global level. This implies that involving various stakeholders and aligning their interests and values is crucial to achieving successful global governance. This collaborative approach allows for a wider range of perspectives and enables a more inclusive decision-making process.
The sentiment expressed by the speakers is positive, indicating their optimism and belief in overcoming the challenges associated with implementation. Their focus on credible value alignment suggests that they recognize the importance of ensuring that the principles and values underpinning global governance are widely accepted and respected. By emphasizing stakeholder engagement and value alignment, the speakers underscore the need for a holistic approach that goes beyond mere top-down control and incorporates diverse perspectives.
In summary, the discussion emphasizes the vital aspects of global governance, stakeholder engagement, and value alignment in achieving SDG 16. The speakers’ identification of challenges and their emphasis on credible value alignment demonstrate a proactive and thoughtful approach. Their mention of the OECD mapping exercises also indicates a commitment to assessing risks and vulnerabilities. Overall, the analysis underscores the significance of collaboration and the pursuit of shared values in global governance.
Nobuhisa Nishigata
Japan has emerged as a frontrunner in the discussions surrounding artificial intelligence (AI) during the G7 meeting. Japan proposed the inclusion of AI as a topic for discussion at the G7 meeting, and this proposal was met with enthusiasm by the other member countries. Consequently, the Japanese government has requested the OECD to continue the work on AI further. This indicates the recognition and value placed by the G7 nations on AI and its potential impact on various aspects of society.
While Japan is proactive in advocating for AI discussions, it adopts a cautious approach towards the introduction of legislation for AI. Japan believes that it is premature to implement legislation specifically tailored for AI at this stage. Nevertheless, Japan acknowledges and respects the efforts made by the European Union in this regard. This perspective highlights Japan’s pragmatic approach towards ensuring that any legislation around AI is well-informed and takes into account the potential benefits and challenges presented by this emerging technology.
Underlining its commitment to fostering cooperation and setting standards in AI, Japan has established the 'Hiroshima AI process'. This initiative aims to develop a code of conduct and encourage project-based collaboration in the field of AI. Japan's G7 engagement on AI, which began in 2016, has seen a shift from voluntary commitment to government-initiated inclusive dialogue among the G7 nations. Japan is pleased with the progress made in the Hiroshima process and the inclusive dialogue it has facilitated, which has continued to move forward despite unexpected developments along the way.
Japan recognises the immense potential of AI technology to serve as a catalyst for economic growth and improve everyday life. It believes that AI has the ability to support various aspects of the economy and enhance daily activities. This positive outlook reinforces Japan’s commitment to harnessing the benefits of AI and ensuring its responsible and sustainable integration into society.
In conclusion, Japan has taken a leading role in driving discussions on AI within the G7, with its proposal being well-received by other member countries. While cautious about introducing legislation for AI, Japan appreciates the efforts made by the EU in this regard. The establishment of the ‘Hiroshima AI process’ showcases Japan’s commitment to setting standards and fostering cooperation in AI. Overall, Japan is optimistic about the potential of AI to generate positive outcomes for the economy and society as a whole.
Speakers
Speech speed
0 words per minute
Speech length
words
Speech time
0 secs
Report
The discussions at the event highlighted the challenges that arise in capacity building due to time and financial commitments. Ansgar Koene from EY raised the problem of the time commitments involved in the capacity-building process, which may bring additional financial burdens for various parties.
Small to Medium Enterprises (SMEs) and civil society organizations might not be able to afford the cost of someone who isn’t directly contributing to their main products or services. Academics may also struggle to get academic credit for engaging in this kind of process.
To address these financial and time commitment issues in capacity building, participants stressed the importance of finding solutions. Ansgar Koene specifically asked for suggestions to tackle this problem, underscoring the need to explore feasible strategies to alleviate the burden that time and financial commitments place on different stakeholders.
There were also concerns raised about the implementation of responsible AI, particularly regarding the choice between system-level guardrails and model-level guardrails.
The discussion highlighted worries that tech vendors could provide unsafe models if responsibility for responsible AI is pushed to the system level. This sparked a debate about the best approach to implementing responsible AI and the potential trade-offs between system-level and model-level guardrails.
Moreover, the event touched upon the Hiroshima process and the expectation of a principle-based approach to AI.
The previous G20 process, which advanced the concept of data free flow with trust, served as a reference point for the discussion. There was a question about the need for a principle-based approach for AI, suggesting the desire to establish ethical guidelines and principles to guide the development and deployment of AI technologies.
In conclusion, the discussions shed light on the challenges posed by time and financial commitments in capacity building and the need for solutions to mitigate these issues.
Concerns about system-level versus model-level guardrails in responsible AI implementation emphasized the importance of balancing safety and innovation in AI. The desire for a principle-based approach to AI and the establishment of ethical guidelines were also highlighted.
Report
During the discussion, the speakers highlighted the ethical challenges associated with technology development. They emphasised the need for responsibility and for embedding values into technology and business models. IEEE, with its constituency of 400,000 members, was recognised as playing a significant role in addressing these challenges.
Transparency, value-based design, and bias in AI were identified as crucial areas of concern.
IEEE has been actively engaging with regulatory bodies to develop socio-technical standards in these areas. They have already established a standard that defines transparency and another standard focused on value-based design. These efforts aim to ensure that AI systems are accountable, fair, and free from bias.
The importance of standards in complementing regulation and bringing interoperability in regulatory requirements was emphasised.
IEEE has been involved in discussions with experts to address the technical challenges related to AI. An example was provided in the form of the UK's Children's Code, which was complemented by an IEEE standard on age-appropriate design.
This highlights how standards can play a crucial role in ensuring compliance and interoperability within regulatory frameworks.
Capacity building for AI certification was also discussed as an essential component. IEEE has trained over 100 individuals for AI certification and is also working on training certification bodies to carry out assessments.
This capacity building process ensures that individuals and organisations possess the necessary skills and knowledge to navigate the complex landscape of AI and contribute to its responsible development and deployment.
The panel also explored the role of the private sector in protecting democracy and the rule of law.
One speaker argued that it is not the responsibility of the private sector to safeguard these fundamental principles. However, another speaker highlighted the need for legal certainty, which can only be provided through regulations, in upholding the rule of law.
This debate prompts further reflection on the appropriate roles and responsibilities of different societal actors in maintaining democratic values and institutions.
The negative impact of uncertainty on the private sector was acknowledged. The uncertain environment poses challenges for businesses and impedes economic growth and job creation.
This concern underscores the need for stability and predictability to support a thriving and sustainable private sector.
Lastly, the importance of feedback loops and common standards in AI was emphasised. This includes ensuring that feedback is taken into account to improve and retrain AI systems.
Drawing from lessons learned in the aviation industry, the development of benchmarking and common standards was seen as vital for enabling efficient and effective collaboration across different AI systems and applications.
In conclusion, the speakers underscored the importance of addressing ethical challenges in technology development, specifically in the context of AI.
IEEE’s involvement in shaping socio-technical standards, capacity building for AI certification, and the need for transparency, value-based design, and addressing bias were highlighted. The role of standards in complementing regulation and promoting interoperability was emphasised, along with the necessity of legal certainty to uphold democracy and the rule of law.
The challenges posed by uncertainty in the private sector and the significance of feedback loops and common standards in AI were also acknowledged. These insights contribute to the ongoing discourse surrounding the responsible development and deployment of technology.
Report
The discussion on artificial intelligence (AI) governance highlighted several key points. Firstly, there is a need for a clearer understanding and clarification of AI governance. This involves the participation of various actors, including civil society organizations and communities that are directly affected by AI.
Civil society organizations require a better understanding of AI risk areas, as well as effective strategies for AI implementation.
Another important point is the evaluation of AI’s impact on rights, with a specific focus on inclusiveness. It is essential to assess the effect of AI on civil, political, economic, social, and cultural rights.
Unfortunately, the perspectives of communities impacted by AI are often excluded in the assessment of AI risks. Therefore, it is necessary to include the viewpoints and experiences of these communities to comprehensively evaluate the impact of AI. This inclusive approach to AI impact assessment can lead to the development of responsible and trustworthy technology.
The discussion also emphasized the importance of education and capacity building in understanding AI’s impact.
The public cannot fully comprehend the consequences of AI without being adequately educated about the subject in a concrete and understandable way. Therefore, it is crucial to provide not only technical language but also information on how AI impacts daily life and basic rights.
By enhancing education and capacity building, individuals can better grasp the implications and intricacies of AI technology.
Furthermore, it was highlighted that there should be some level of complementarity between voluntary standards and legal frameworks in AI governance.
This includes ensuring responsibility at different levels, from the design stage to the implementation and functioning of AI systems. The relationship between voluntary standards and legal frameworks must be carefully balanced to create an effective governance structure for AI.
In addition, the discussion underscored the importance of accounting for shared responsibility within the legal framework.
It is crucial to establish effective communication between the different operators involved in the production and use of AI. This communication should adhere to competition rules and intellectual property regulations, avoiding any violations. By accounting for shared responsibility, the legal framework can ensure ethical and responsible AI governance.
Lastly, the discussion emphasized the need for a bottom-up approach in AI governance.
This approach involves the active participation of societies and different stakeholders at both the local and global levels. Geopolitically, there is a need to hear more about experiences and perspectives from various stakeholders in global-level governance discussions. By adopting a bottom-up approach, AI governance can become more democratic, inclusive, and representative of the diverse needs and interests of stakeholders.
In conclusion, the discussion on AI governance highlighted the importance of a clearer understanding and clarification of AI governance, inclusiveness in impact assessment, education and capacity building, complementarity between voluntary standards and legal frameworks, the consideration of shared responsibility, and a bottom-up approach in AI governance.
By addressing these aspects, it is possible to develop responsible and trustworthy AI technology to benefit society as a whole.
Report
In the analysis, several speakers discuss various aspects of AI and its impact on different sectors and societies. One key point raised is Japan's need for more machines, especially AI, to sustain its economy as its population ages. With Japan facing social problems related to an ageing population, bringing in more people and more machines is deemed necessary.
However, it is interesting to note that while Japan recognises the importance of AI, it believes that AI's opportunities and possibilities should be prioritised over legislation. Japan views AI as a solution rather than a problem and wants to see more of what AI can do for its society before introducing any regulations.
Furthermore, the G7 delegates are focused on creating a report that specifically examines the risks, challenges, and opportunities of new technology, particularly generative AI.
They have sought support from the OECD in summarising this report. This highlights the importance of international cooperation in addressing the impact of AI.
Egypt also plays a significant role in the AI discussion. The country has a national AI strategy that seeks to create an AI industry while emphasising that AI should enhance human labour rather than replace it.
Egypt has published an AI charter for responsible AI and is a member of the OECD AI network. This showcases the importance of aligning national strategies with global initiatives and fostering regional and international collaborations.
Microsoft is another notable player in the field of AI.
The company is committed to developing AI technology responsibly. It has implemented a responsible AI standard based on OECD principles, which is shared externally for critique and improvement. Microsoft actively engages in global governance conversations, particularly through the Frontier Model Forum, where they accelerate work on technical best practices.
Their contributions highlight the importance of private sector involvement in governance discussions.
UNESCO has made significant contributions to the AI discourse by developing a recommendation on the ethics of AI. The recommendation was developed through a two-year multi-stakeholder process and adopted by 193 countries.
It provides a clear indication of ethical values and principles that should guide AI development and usage. Furthermore, UNESCO is actively working on capacity building to equip governments and organizations with the necessary skills to implement AI systems ethically.
In terms of addressing concerns and ensuring inclusivity, it is highlighted that AI bias can be addressed even before AI regulations are in place.
Existing human rights frameworks and data protection laws can start to address challenges related to bias, discrimination, and privacy. For example, UNESCO has been providing knowledge on these issues to judicial operators to equip them with the necessary understanding of AI and the rule of law.
Additionally, discussions around AI governance emphasise the need for clarity in frameworks and the inclusion of individuals who are directly impacted by AI technologies.
The analysis also suggests that responsible development, governance, regulation, and capacity building should be multi-stakeholder and cooperative processes.
All sectors need to be involved in the conversation and implementation of AI initiatives to ensure effective and inclusive outcomes.
An interesting observation is the need for a balance between voluntary standards and legal frameworks. Complementarity is needed between these two approaches, especially in the design, implementation, and use of AI systems.
Furthermore, the analysis highlights the importance of a bottom-up approach in global governance, taking into account diverse stakeholders and global geopolitical contexts. By incorporating global experiences from different stakeholders, risks can be identified, and relevant elements can be considered in local contexts.
Overall, the analysis provides insights into various perspectives on AI and the importance of responsible development, global collaboration, and inclusive policies in shaping the future of AI.
Report
Microsoft has made significant efforts to prioritise the responsible use of AI technology. They have dedicated six years to building their responsible AI programme, which involves a team of over 350 experts in various fields including engineering, research, legal, and policy.
Their responsible AI standard is based on the principles outlined by the Organisation for Economic Co-operation and Development (OECD), emphasising the importance of ethical AI practices.
In addition to their internal initiatives, Microsoft recognises the need for active participation from private companies and the industry in AI governance discussions.
To foster collaboration and best practices, they have founded the Frontier Model Forum, which brings together leading AI labs. This forum focuses on developing technical guidelines and standards for frontier models. Microsoft also supports global efforts, such as those taking place at the United Nations (UN) and the OECD, to ensure that AI technology is governed responsibly.
Another crucial aspect highlighted by Microsoft is the importance of regulations to effectively manage the use and development of AI.
They actively share their insights and experiences to help shape regulations that address the unique challenges posed by AI technology. Furthermore, Microsoft aims to build capacity for governments and industry regulators, enabling them to navigate the complex landscape of AI and ensure the adoption of responsible practices.
Microsoft emphasises the need for safeguards at both the model and application levels of AI development.
The responsible development of AI models includes considering ethical considerations and ensuring that the model meets the necessary requirements. However, Microsoft acknowledges that even if the model is developed responsibly, there is still a risk if the application level lacks proper safeguards.
Therefore, they stress the importance of incorporating safeguards throughout the entire AI development process.
Microsoft also supports global governance for AI and advocates for a representative process in developing standards. They believe that a global governance regime should aim for a framework that includes standard setting, consensus on risk assessment and mitigation, and infrastructure building.
Microsoft cites the International Civil Aviation Organization and the Intergovernmental Panel on Climate Change as potential models for governance, highlighting the importance of collaborative and inclusive approaches to effectively govern AI.
In conclusion, Microsoft takes the responsible use of AI technology seriously, as evident through their comprehensive responsible AI programme, active participation in AI governance discussions, support for global initiatives, and commitment to shaping regulations.
They emphasise the need for safeguards at both the model and application levels of AI development and advocate for global governance that is representative, consensus-driven, and infrastructure-focused. Through their efforts, Microsoft aims to ensure that AI technology is harnessed responsibly and ethically, promoting positive societal impact.
Report
UNESCO has developed a comprehensive recommendation on the ethics of AI, achieved through a rigorous multi-stakeholder process. Over two years, 24 global experts collaborated on the initial draft, which then underwent around 200 hours of intergovernmental negotiations. In 2021, all 193 member countries adopted the recommendation, emphasizing the global consensus on addressing ethical concerns surrounding AI.
The recommendation includes values of human rights, inclusivity, and sustainability, serving as a guide for developers and users.
It emphasizes transparency and explainability, ensuring AI systems are clear and understandable.
UNESCO is implementing the recommendation through various tools, forums, and initiatives. This includes a readiness assessment methodology, a Global Forum on the Ethics of AI, and an ethical impact assessment tool for governments and companies procuring AI systems.
The role of AI in society is acknowledged, with an example of a robot assisting teachers potentially influencing learning norms.
Ethical viewpoints are crucial to align AI with societal expectations.
Prateek Sibal advocates for inclusive multi-stakeholder conversations around AI, emphasizing the importance of awareness, accessibility, and sensitivity towards different sectors. He suggests financial compensation to facilitate civil society engagement.
In conclusion, UNESCO’s recommendation on AI ethics provides valuable guidelines for responsible AI development.
Their commitment to implementation and inclusive dialogue strengthens the global effort to navigate ethical challenges presented by AI.
Report
AI regulation needs to keep up with the rapid pace of technological advancements, as there is a perceived inadequacy of government response in this area. The tension between speed and regulation is particularly evident in the case of AI. The argument put forth is that regulations should be able to adapt and respond quickly to the ever-changing landscape of AI technology.
On the other hand, it is important to have risk frameworks in place to address the potential risks associated with AI.
The United States has taken steps in this direction by introducing the Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework. These foundational documents have been formulated with the contributions of 240 organizations and cover the entire lifecycle of AI. The multi-stakeholder approach ensures that various perspectives are considered in managing the risks posed by AI.
Technological advancements, such as the development of foundation models, have ushered in a new era of AI.
Leading companies in this field are primarily located in the United States. This highlights the influence and impact of US-based companies on the development and deployment of AI technologies worldwide. The emergence of foundation models has disrupted the technology landscape, showcasing the potential and capabilities of AI systems.
To address the challenges associated with rapid AI evolution, the implementation of voluntary commitments has been proposed.
The White House has secured a set of voluntary commitments from leading AI companies to enhance the responsible management of AI systems. These commitments include elements such as red teaming, information sharing, basic cybersecurity measures, public transparency, and disclosure. Their objective is to build trust and security amidst the fast-paced evolution of AI technologies.
In conclusion, it is crucial for AI regulation to keep pace with the rapid advancements in technology.
The perceived inadequacy of government response highlights the need for agile and adaptive regulations. Additionally, risk frameworks, such as the Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework, are important in managing the potential risks associated with AI. The emergence of technologies like foundation models has brought about a new era of AI, with leading companies based in the US driving innovation in this field.
The implementation of voluntary commitments, as proposed by the White House, aims to foster trust and security in the ever-evolving landscape of AI technologies.
Report
Egypt has developed a comprehensive national AI strategy aimed at fostering the growth of an AI industry. The strategy is based on four pillars: AI for government, AI for development, capacity building, and international relations. It focuses on leveraging AI technologies to drive innovation, improve governance, and address societal challenges.
Under the AI for government pillar, Egypt aims to enhance the effectiveness of public administration by adopting AI technologies.
This includes streamlining administrative processes, improving decision-making, and delivering efficient government services.
The AI for development pillar highlights Egypt’s commitment to utilizing AI as a catalyst for economic growth and social development. The strategy focuses on promoting AI-driven innovation, entrepreneurship, and tackling critical issues such as poverty, hunger, and inequality.
Capacity building is prioritized in Egypt’s AI strategy to develop a skilled workforce.
The country invests in AI education, training programs, research, and collaboration between academia, industry, and government.
International cooperation is emphasized to exchange knowledge, share best practices, and establish standardized approaches to AI governance. Egypt actively participates in global discussions on AI policies and practices as a member of the OECD AI network.
To ensure responsible AI deployment, Egypt has issued a charter that provides guidelines for promoting citizen well-being and aligning with national goals.
These guidelines address aspects such as robustness, security, safety, and social impact assessments.
Egypt also recognizes the importance of understanding cultural differences and bridging gaps for technological advancements. The country aims to address cultural gaps and promote inclusivity, ensuring that the benefits of AI are accessible to all segments of society.
Overall, Egypt’s AI strategy demonstrates a commitment to creating an AI industry and leveraging AI for governance, development, capacity building, and international cooperation.
The strategy aims to foster responsible and inclusive progress through AI technologies.
Report
The topic of AI regulation is currently being discussed in the context of its application. The argument put forth is that instead of regulating AI as a tool, it should be regulated in consideration of how it is used. This means that regulations should be tailored to address the specific applications and potential risks of AI, rather than imposing blanket regulations on all AI technologies.
It is believed that this approach will allow for a more nuanced and effective regulation of AI.
Voluntary commitment in AI regulations is seen as an effective approach, provided that the right incentives are in place. This means that instead of enforcing compulsory regulations, which can often be complicated and unworkable, voluntary agreements can be more successful.
By providing incentives for AI developers and users to adhere to certain standards and guidelines, it is believed that a more cooperative and collaborative approach can be fostered, ensuring responsible and ethical use of AI technology.
The Council of Europe is currently developing the first binding convention on AI and human rights, which is seen as a significant step forward.
This intergovernmental agreement aims to commit states to uphold AI principles based on the norms of human rights, democracy, and the rule of law. The convention is not only intended to ensure the protection of fundamental rights in the context of AI, but also to create interoperable legal systems within countries.
This represents a significant development in the global governance of AI and the protection of human rights.
The need for agreement on fundamental values is also highlighted in the discussions on AI regulation. It is essential to have a consensus on how to respect human dignity and ensure that technological advancements are made while upholding and respecting human rights.
This ensures that AI development and deployment align with society’s values and principles.
Addressing legal uncertainties and tackling new challenges is another important aspect of AI regulation. As AI technologies continue to evolve, it is necessary to identify legal uncertainties and clarify them to ensure a clear and coherent regulatory framework.
Breaking down new elements and challenges associated with AI is crucial to ensuring that regulations are effective and comprehensive.
In solving the problems related to AI regulation, it is emphasized that using the best tools and methods is essential.
Different problems may require different approaches, with some methods being faster but less sustainable, while others may be more sustainable but take longer to implement. By utilizing a mix of tools and methods, it is possible to effectively address identified issues.
Stakeholder cooperation is also considered to be of utmost importance in the realm of AI regulation.
All stakeholders, including governments, businesses, researchers, and civil society, need to continue to engage and cooperate with each other, leveraging their respective roles. This collaboration ensures that diverse perspectives and expertise are taken into account when formulating regulations, thereby increasing the chances of effective and balanced AI regulation.
However, there is also opposition to the creation of burdensome bureaucracy in the process of regulating AI.
Efforts should be made to clarify issues and address challenges without adding unnecessary administrative complexity. It is crucial to strike a balance between ensuring responsible and ethical use of AI technology and avoiding excessive burdens on AI developers and users.
In conclusion, the discussions on AI regulation are centered around the need to regulate AI in consideration of its application, rather than treating it as a tool.
Voluntary commitments can be effective, provided the right incentives are in place, while the binding convention being developed by the Council of Europe offers a complementary, legally enforceable instrument. Agreement on fundamental values, addressing legal uncertainties, and stakeholder cooperation are crucial aspects of AI regulation.
It is important to strike a balance between effective regulation and avoiding burdensome bureaucracy.