International multistakeholder cooperation for AI standards | IGF 2023 WS #465
Event report
Speakers and Moderators
Speakers:
- Nikita Bhangu, Government, Western European and Others Group (WEOG)
- Ashley Casovan, Civil Society, Western European and Others Group (WEOG)
- Aurelie Jacquet, Technical Community, Asia-Pacific Group
- Sundeep Bhandari, Technical Community, Western European and Others Group (WEOG)
- Sahar Danesh, Private Sector, Western European and Others Group (WEOG)
Moderators:
- Florian Ostmann, Civil Society, Western European and Others Group (WEOG)
Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.
Session report
Matilda Road
The AI Standards Hub is a collaboration between the Alan Turing Institute, the British Standards Institution, the National Physical Laboratory, and the UK government’s Department for Science, Innovation, and Technology. It aims to promote the responsible use of artificial intelligence (AI) and engage stakeholders in international AI standardization.
One of the key missions of the AI Standards Hub is to advance the use of responsible AI by encouraging the development and adoption of international standards. This ensures that AI systems are developed, deployed, and used in a responsible and ethical manner, fostering public trust and mitigating potential risks.
The involvement of stakeholders is crucial in the international AI standardization landscape. The AI Standards Hub empowers stakeholders and encourages their active participation in the standardization process. This ensures that the resulting standards are comprehensive, inclusive, and representative of diverse interests.
Standards are voluntary codes of best practice that companies choose to adhere to. They help assure quality, safety, environmental performance, and ethical development, and they promote interoperability between products. Adhering to standards helps build trust between organizations and consumers.
Standards also facilitate market access and link to other government mechanisms. Aligning with standards allows companies to enter new markets and enhance competitiveness. Interoperability ensures seamless collaboration between different systems, promoting knowledge sharing and technology transfer.
The adoption of standards provides benefits such as quality assurance, safety, and interoperability. Compliance ensures that products and services meet defined norms and requirements, instilling confidence in their reliability and performance. Interoperability allows for the exchange of information and collaboration, fostering innovation and advancements.
In conclusion, the AI Standards Hub promotes responsible AI use and engages stakeholders in international AI standardization. It fosters the development and adoption of international standards to ensure ethical AI use. Standards offer benefits like quality assurance, safety, and interoperability, building trust between organizations and consumers, enhancing market access, and linking to government mechanisms. The adoption of standards is crucial for responsible consumption, sustainable production, and industry innovation.
Ashley Casovan
Standards play a crucial role in the field of artificial intelligence (AI), ensuring consistency, reliability, and safety. However, the lack of standardisation in this area can lead to confusion and hinder the advancement of AI technologies. The complexity of the topic itself adds to the challenge of developing universally accepted standards.
To address this issue, the Canadian government has taken proactive steps by establishing the Data and AI Standards Collaborative. Led by Ashley, representing civil society, this initiative aims to comprehensively understand the implications of AI systems. One of the primary goals of the collaborative is to identify specific use cases and develop context-specific standards throughout the entire value chain of AI systems. This proactive approach not only helps in ensuring the effectiveness and ethical use of AI but also supports SDG 9: Industry, Innovation, and Infrastructure.
Within the AI ecosystem, various types of standards are required at different levels, including certifications and standards for evaluating quality management systems as well as product-level standards. Furthermore, there is growing interest in understanding individual training requirements for AI. This multifaceted approach to standards highlights the complexity and diversity within the field.
The establishment of multi-stakeholder forums is recognised as a positive step towards developing AI standards. These forums play a vital role in establishing common definitions and understanding of AI system life cycles. North American markets have embraced such initiatives, including the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (AI RMF), demonstrating their effectiveness in shaping AI standards. This collaborative effort aligns with SDG 17: Partnerships for the Goals.
Inclusion of all relevant stakeholders is seen as crucial for effective AI standards. The inclusivity of diverse perspectives is paramount for ensuring that the standards address the needs and challenges of different communities. Effective data analysis and processing within the context of AI standards necessitate inclusivity. This aligns with SDG 10: Reduced Inequalities as it promotes fairness and equal representation in the development of AI standards.
Engaging Indigenous groups and considering their perspectives is critical in developing AI system standards. Efforts are being made in Canada to include the voices of the most impacted populations. By understanding the potential harms of AI systems to these groups, measures can be taken to mitigate them. This highlights the significance of reducing inequalities (SDG 10) and fostering inclusivity.
Given the global nature of AI, collaboration on an international scale is essential. An international exercise through organisations such as the Organisation for Economic Co-operation and Development (OECD) or the Internet Governance Forum (IGF) is proposed for mapping AI standards. Collaboration between countries and regions will help avoid duplication of efforts, foster harmonisation, and promote the implementation of effective AI standards globally.
It is important to recognise that AI is not a monolithic entity but rather varies in its types of uses and associated harms. Different AI systems have different applications and potential risks. Therefore, it is crucial to engage the right stakeholders to discuss and address these specific uses and potential harms. This aligns with the importance of SDG 3: Good Health and Well-being and SDG 16: Peace, Justice, and Strong Institutions.
In conclusion, the development of AI standards is a complex and vital undertaking. The Canadian government’s Data and AI Standards Collaborative, the involvement of multi-stakeholder forums, the importance of inclusivity and engagement with Indigenous groups, and the need for international collaboration are all prominent factors in shaping effective AI standards. Recognising the diversity and potential impact of AI systems, it is essential to have comprehensive discussions and involve all relevant stakeholders to ensure the development and implementation of robust and ethical AI standards.
Audience
The analysis reveals that the creation of AI standards involves various bodies, but governments do not accept their outputs consistently. In particular, standards from government-recognized institutions carry more weight than technical community-led standards, such as those from the IETF or IEEE, which are often excluded from government policies. This highlights a discrepancy between the standards created by technical communities and those embraced by governments.
Nevertheless, the analysis suggests reaching out to the technical community for AI standards. The technical community is seen as a valuable resource for developing and refining AI standards. Furthermore, the analysis encourages the creation of a declaration or main message from the AI track at the IGF (Internet Governance Forum). This indicates the importance of consolidating the efforts of the AI track at IGF to provide a unified message and promote collaboration in the field of AI standards.
Consumer organizations are recognized as playing a critical role in the design of ethical and responsible AI standards. They represent actual user interests and can provide valuable insights and data for evidence-based standards. Additionally, consumer organizations can drive the adoption of standards by advocating for consumer-friendly solutions. The analysis also identifies the AI Standards Hub as a valuable initiative from a consumer organization’s perspective. The Hub acknowledges and welcomes consumer organizations, breaking the norm of industry dominance in standardization spaces. It also helps bridge the capacity gap by enabling consumer organizations to understand and contribute effectively to complex AI discussions.
The analysis suggests that AI standardization processes should be made accessible to consumers. Traditionally, standardization spaces have been dominated by industry experts, but involving consumers early in the process can help ensure that standards are compliant and sustainable from the start. User-friendly tools and resources can aid consumers in understanding AI and AI standards, empowering them to participate effectively in the standardization process.
Furthermore, the involvement of consumer organizations can diversify the AI standardization process. They represent a diverse range of views and interests, bringing significant diversity into the standardization process. Consumer International, as a global organization, is specifically mentioned as having the potential to facilitate this diversity in the standardization process.
In conclusion, the analysis highlights the importance of collaboration and inclusivity in the development of AI standards. It underscores the need to bridge the gap between technical community-led standards and government policies. The involvement of consumer organizations is crucial in ensuring the ethical and responsible development of AI standards. Making AI standardization processes accessible to consumers and diversifying the standardization process are essential steps towards creating inclusive and effective AI standards.
Wansi Lee
International cooperation is crucial for the standardization of AI regulation, and Singapore actively participates in this process. The country closely collaborates with other nations and engages in multilateral processes to align its AI practices and contribute to global standards. Singapore has initiated a mapping project with the National Institute of Standards and Technology (NIST) to ensure the alignment of its AI practices.
In addition, multi-stakeholder engagement is considered essential for the technical development and sharing of AI knowledge. Singapore leads in this area by creating the AI Verify Testing Framework and Toolkit, which provides comprehensive tests for fairness, explainability, and robustness of AI systems. This initiative is open-source, allowing global community contribution and engagement. The AI Verify Toolkit supports responsible AI implementation.
Adherence to AI guidelines is important, and the Singapore government plays an active role in setting guidelines for organizations. Following these guidelines supports responsible AI deployment. The government also uses the AI Verify Testing Framework and Toolkit to validate the implementation of responsible AI requirements.
Given Singapore’s limited resources, the country strategically focuses its efforts on specific areas where it can contribute to global AI conversations. Singapore adopts existing international efforts where possible and fills gaps to make a valuable contribution. Despite being a small country, Singapore recognizes the significance of its role in standard setting and strives to make a meaningful impact.
The Singapore government actively engages with industry members to incorporate a broad perspective in AI development. Input from these companies is valued to create a comprehensive and inclusive framework for responsible AI implementation.
The establishment of the AI Verify Foundation provides a platform for all interested organizations to contribute to AI standards. The open-source platform is not limited by organization size or location, welcoming diverse perspectives. Work done on the AI Verify Foundation platform is rationalized at the national level in Singapore and supported globally through various platforms, such as OECD, GPA, or ISO.
In conclusion, Singapore recognizes the importance of international cooperation, multi-stakeholder engagement, adherence to guidelines, strategic resource management, and industry partnerships in standardizing AI regulation. The country’s active involvement in initiatives such as the AI Verify Testing Framework and Toolkit and the AI Verify Foundation demonstrates its commitment to responsible AI development and global AI conversations. The emphasis on harmonized or aligned standards by Wansi Lee further highlights the need for a unified approach to AI regulation.
Florian Ostmann
During the session, the role of AI standards in the responsible use and development of AI was thoroughly explored. The focus was placed on the importance of multi-stakeholder participation and international cooperation in developing these standards. It was recognized that standards provide a specific governance tool for ensuring the responsible adoption and implementation of AI technology.
In line with this, the UK launched the AI Standards Hub, a collaborative initiative involving the Alan Turing Institute, the British Standards Institution, and the National Physical Laboratory. The aim of this initiative is to increase awareness and participation in AI standardization efforts. The partnership is working closely with the UK government to ensure a coordinated approach and effective implementation of AI standards.
Florian Ostmann, the head of AI Governance and Regulatory Innovation at the Alan Turing Institute, stressed the significance of international cooperation and multi-stakeholder participation in making AI standards a success. He emphasized the need for a collective effort involving various stakeholders to establish effective frameworks and guidelines for AI development and use. The discussion highlighted the recognition of AI standards as a key factor in ensuring responsible AI practices.
The UK government’s commitment to AI standards was reiterated as the National AI Strategy published in September 2021 highlighted the AI Standards Hub as a key deliverable. Additionally, the AI Regulation White Paper emphasized the role of standards in implementing a context-specific, risk-based, and decentralized regulatory approach. This further demonstrates the UK government’s understanding of the importance of AI standards in governing AI technology.
The AI Standards Hub actively contributes to the field of AI standardization. It undertakes research to provide strategic direction and analysis, offers e-learning materials and in-person training events to engage stakeholders, and organizes events to gather input on AI standards. By conducting these activities, the AI Standards Hub aims to ensure a comprehensive approach to addressing the needs and requirements of AI standardization.
The discussion also highlighted the significance of considering a wider landscape of AI standards. While the AI Standards Hub focuses on developed standards, it was acknowledged that other organizations, like the IETF, also contribute to the development of AI standards. This wider perspective helps in gaining a holistic understanding of AI standards and their implications in various contexts.
Florian Ostmann expressed a desire to continue the discussion on standards and AI, indicating that the session had only scratched the surface of this vast topic. He welcomed ideas for collaboration from around the world, underscoring the importance of international cooperation in shaping AI standards and governance.
In conclusion, the session emphasized the role of AI standards in the responsible use and development of AI technology. It highlighted the significance of multi-stakeholder participation, international cooperation, and the need to consider a wider landscape of AI standards. The UK’s AI Standards Hub, in collaboration with the government, is actively working towards increasing awareness and participation in AI standardization. Florian Ostmann’s insights further emphasized the importance of international collaboration and the need for ongoing discussions on AI standards and governance.
Aurelie Jacquet
The analysis examines multiple viewpoints on the significance of AI standardisation in the context of international governance. Aurelie Jacquet asserts that AI standardisation can serve as an agile tool for effective international governance, highlighting its potential benefits. A complementary viewpoint stresses that standards are indispensable for regulating AI and ensuring the reliability of AI systems for industry purposes. Australia is cited as an active participant in shaping international AI standards since 2018, with a roadmap in 2020 focusing on ISO/IEC 42001. The government’s adoption of AI standards aligns with the NSW AI Assurance Framework, strengthening the use of standards in AI systems.
Education and awareness regarding standards emerge as important factors in promoting the understanding and implementation of AI standards. Australia has taken steps to develop education programs on standards and build tools in collaboration with CSIRO and Data61, leveraging their expertise in the field. These initiatives aim to enhance knowledge and facilitate the adoption of standards across various sectors.
Despite having a small delegation, Australia has made significant contributions to standards development and has played an influential role in shaping international mechanisms. Through collaboration with other countries, Australia strives to tailor mechanisms to accommodate delegations of different sizes. However, it is noted that limited resources and time pose challenges to participation in standards development. In this regard, Australia has received support from nonprofit organisations and their own government, which enables experts to voluntarily participate and contribute to the development of standards.
Context is highlighted as a crucial element for effective engagement in standards development. Australia’s experts have been actively involved in developing white papers that provide the necessary background and context for standards documents. This ensures that stakeholders have a comprehensive understanding of the issues at hand, fostering informed discussions and decision-making processes.
The analysis also highlights the challenges faced by SMEs in the uptake of standards. Larger organisations tend to adopt standards more readily, leaving SMEs at a disadvantage. Efforts are underway to address these challenges and make standards more accessible and fit for purpose for SMEs. This ongoing discussion aims to create a more inclusive environment for all stakeholders, regardless of their size or resources.
The significance of stakeholder inclusion is emphasised throughout the analysis. Regardless of delegation size, stakeholder engagement is seen as critical in effective standards development. Australia has actively collaborated with other countries to ensure that mechanisms and processes are tailored to their respective sizes, highlighting the importance of inclusiveness in shaping international standards.
Standards are seen as enablers of interoperability, promoting harmonisation of varied perspectives in AI regulations. Different regulatory initiatives and practices in AI are deemed beneficial, and standards play a key role in facilitating interoperability and bridging gaps between different approaches.
Moreover, the adoption of AI standards is advocated as a means to learn from international best practices. Experts from diverse backgrounds can engage in discussions, enabling nations to develop policies and grow in a responsible manner. The focus lies on using AI responsibly and scaling its application through the use of interoperability standards.
In conclusion, the analysis underscores the importance of AI standardisation in international governance. It highlights various viewpoints on the subject, including the agile nature of AI standardisation, the need for industry-informed regulation, the significance of education and awareness, the role of context, the challenges faced by SMEs, the importance of stakeholder inclusion, and the benefits of interoperability and learning from international best practices. The analysis provides valuable insights for policymakers, industry professionals, and stakeholders involved in AI standardisation and governance.
Nikita Bhangu
The UK government recognizes the importance of AI standards in the regulatory framework for AI, as highlighted in the recent AI White Paper. They emphasize the significance of standards and other tools in AI governance. Digital standards are crucial for effectively implementing the government’s AI policy.
To ensure effective standardization, the UK government has consulted stakeholders to identify challenges in the UK. This aims to provide practical tools for stakeholders to engage in the standardization ecosystem, promoting participation, collaboration, and innovation in AI standards.
The establishment of the AI Standards Hub demonstrates the UK government’s commitment to reducing barriers to AI standards. The hub, established a year ago, has made significant contributions to the understanding of AI standards in the UK. Independent evaluation acknowledges the positive impact of the hub in overcoming obstacles and promoting AI standards adoption.
The UK government plans to expand the AI Standards Hub and foster international collaborations. This growth and increased collaboration will enhance global efforts towards achieving AI standards, benefiting industries and infrastructure. Collaboration with international partners aims to create synergies between AI governance and standards.
Representation of all stakeholder groups, including small to medium businesses and civil society, is crucial in standard development organizations. However, small to medium digital technology companies and civil society face challenges in participating effectively due to resource and expertise limitations. Even the government, as a key stakeholder, lacks technical expertise and resources.
The UK government is actively working to improve representation and diversity in standard development organizations. Initiatives include developing a talent pipeline to increase diversity and collaborating with international partners and organizations such as the Internet Governance Forum’s Multistakeholder Advisory Group (MAG). Existing organizations like BSI and IEC contribute to efforts for diverse and inclusive standards development organizations.
In conclusion, the UK government recognizes the importance of AI standards in the regulatory framework for AI and actively works towards their implementation. Consultation with stakeholders, establishment of the AI Standards Hub, and efforts to increase international collaborations reduce barriers and promote a thriving standardization ecosystem. Initiatives aim to ensure representation of all stakeholder groups, fostering diversity and inclusion. These actions contribute to advancements in the field of AI and promote sustainable development across sectors.
Sonny
The AI Act introduced by the European Union aims to govern and assess AI systems, particularly high-risk ones. It sets out five principles and establishes seven essential requirements for these systems. The act underscores the need for collaboration and global standards to ensure fair and consistent AI governance. By adhering to shared standards, stakeholders can operate on a level playing field.
The AI Standards Hub is a valuable resource that promotes global cooperation. It offers a comprehensive database of AI standards and policies, accessible worldwide. The hub facilitates collaboration among stakeholders, enabling them to align efforts and work towards common goals. Additionally, it provides e-learning materials to enhance understanding of AI standards.
Moreover, the AI Standards Hub strives to promote inclusive access to AI standards and policies. It encourages stakeholders from diverse backgrounds and industries to contribute and participate in standard development and implementation. This inclusive approach ensures comprehensive and effective AI governance.
The partnership between the AI Standards Hub and international organizations, such as the OECD, further demonstrates the significance of global cooperation in this field. By leveraging expertise and resources from like-minded institutions, the hub fosters a collective effort to tackle AI-related challenges and opportunities.
In summary, the EU AI Act and the AI Standards Hub emphasize the importance of collaboration, global standards, and inclusive access to AI standards and policies. By working together, stakeholders can establish a harmonized approach to AI governance, promoting ethical and responsible use of AI technologies across industries and regions.
Speakers
Speech speed
0 words per minute
Speech length
words
Speech time
0 secs
Report
Standards play a crucial role in the field of artificial intelligence (AI), ensuring consistency, reliability, and safety. However, the lack of standardisation in this area can lead to confusion and hinder the advancement of AI technologies. The complexity of the topic itself adds to the challenge of developing universally accepted standards.
To address this issue, the Canadian government has taken proactive steps by establishing the Data and AI Standards Collaborative.
Led by Ashley, representing civil society, this initiative aims to comprehensively understand the implications of AI systems. One of the primary goals of the collaborative is to identify specific use cases and develop context-specific standards throughout the entire value chain of AI systems.
This proactive approach not only helps in ensuring the effectiveness and ethical use of AI but also supports SDG 9: Industry, Innovation, and Infrastructure.
Within the AI ecosystem, various types of standards are required at different levels. This includes certifications and standards for both evaluating the quality management systems and ensuring product-level standards.
Furthermore, there is a growing interest in understanding the individual training requirements for AI. This multifaceted approach to standards highlights the complexity and diversity within the field.
The establishment of multi-stakeholder forums is recognised as a positive step towards developing AI standards.
These forums play a vital role in establishing common definitions and understanding of AI system life cycles. North American markets have embraced such initiatives, including the National Institute of Standards and Technology’s (NIST) AIRMF, demonstrating their effectiveness in shaping AI standards.
This collaborative effort aligns with SDG 17: Partnerships for the Goals.
Inclusion of all relevant stakeholders is seen as crucial for effective AI standards. The inclusivity of diverse perspectives is paramount for ensuring that the standards address the needs and challenges of different communities.
Effective data analysis and processing within the context of AI standards necessitate inclusivity. This aligns with SDG 10: Reduced Inequalities as it promotes fairness and equal representation in the development of AI standards.
Engaging Indigenous groups and considering their perspectives is critical in developing AI system standards.
Efforts are being made in Canada to include the voices of the most impacted populations. By understanding the potential harms of AI systems to these groups, measures can be taken to mitigate them. This highlights the significance of reducing inequalities (SDG 10) and fostering inclusivity.
Given the global nature of AI, collaboration on an international scale is essential.
An international exercise through organisations such as the Organisation for Economic Co-operation and Development (OECD) or the Internet Governance Forum (IGF) is proposed for mapping AI standards. Collaboration between countries and regions will help avoid duplication of efforts, foster harmonisation, and promote the implementation of effective AI standards globally.
It is important to recognise that AI is not a monolithic entity but rather varies in its types of uses and associated harms.
Different AI systems have different applications and potential risks. Therefore, it is crucial to engage the right stakeholders to discuss and address these specific uses and potential harms. This aligns with the importance of SDG 3: Good Health and Well-being and SDG 16: Peace, Justice, and Strong Institutions.
In conclusion, the development of AI standards is a complex and vital undertaking.
The Canadian government’s Data and AI Standards Collaborative, the involvement of multi-stakeholder forums, the importance of inclusivity and engagement with Indigenous groups, and the need for international collaboration are all prominent factors in shaping effective AI standards. Recognising the diversity and potential impact of AI systems, it is essential to have comprehensive discussions and involve all relevant stakeholders to ensure the development and implementation of robust and ethical AI standards.
Speech speed
0 words per minute
Speech length
words
Speech time
0 secs
Report
The analysis reveals that the creation of AI standards involves various bodies, but their acceptance by governments is not consistent. In particular, standards institutions accepted by the government are more recognized than technical community-led standards, such as those from the IETF or IEEE, which are often excluded from government policies.
This highlights a discrepancy between the standards created by technical communities and those embraced by governments.
Nevertheless, the analysis suggests reaching out to the technical community for AI standards. The technical community is seen as a valuable resource for developing and refining AI standards.
Furthermore, the analysis encourages the creation of a declaration or main message from the AI track at the IGF (Internet Governance Forum). This indicates the importance of consolidating the efforts of the AI track at IGF to provide a unified message and promote collaboration in the field of AI standards.
Consumer organizations are recognized as playing a critical role in the design of ethical and responsible AI standards.
They represent actual user interests and can provide valuable insights and data for evidence-based standards. Additionally, consumer organizations can drive the adoption of standards by advocating for consumer-friendly solutions. The analysis also identifies the AI Standards Hub as a valuable initiative from a consumer organization’s perspective.
The Hub acknowledges and welcomes consumer organizations, breaking the norm of industry dominance in standardization spaces. It also helps bridge the capacity gap by enabling consumer organizations to understand and contribute effectively to complex AI discussions.
The analysis suggests that AI standardization processes should be made accessible to consumers.
Traditionally, standardization spaces have been dominated by industry experts, but involving consumers early in the process can help ensure that standards are compliant and sustainable from the start. User-friendly tools and resources can aid consumers in understanding AI and AI standards, empowering them to participate effectively in the standardization process.
Furthermore, the involvement of consumer organizations can diversify the AI standardization process.
They represent a diverse range of views and interests, bringing significant diversity into the standardization process. Consumers International, as a global organization, is specifically mentioned as having the potential to facilitate this diversity in the standardization process.
In conclusion, the analysis highlights the importance of collaboration and inclusivity in the development of AI standards.
It underscores the need to bridge the gap between technical community-led standards and government policies. The involvement of consumer organizations is crucial in ensuring the ethical and responsible development of AI standards. Making AI standardization processes accessible to consumers and diversifying the standardization process are essential steps towards creating inclusive and effective AI standards.
Report
The analysis examines multiple viewpoints on the significance of AI standardisation in the context of international governance. Aurelie Jacquet asserts that AI standardisation can serve as an agile tool for effective international governance, highlighting its potential benefits. On the other hand, another viewpoint stresses the indispensability of standards in regulating and ensuring the reliability of AI systems for industry purposes.
Australia is cited as an active participant in shaping international AI standards since 2018, with a roadmap published in 2020 focusing on ISO/IEC 42001. The adoption of AI standards by the government aligns with the NSW AI Assurance Framework, strengthening the use of standards in AI systems.
Education and awareness regarding standards emerge as important factors in promoting the understanding and implementation of AI standards.
Australia has taken steps to develop education programs on standards and build tools in collaboration with CSIRO and Data61, leveraging their expertise in the field. These initiatives aim to enhance knowledge and facilitate the adoption of standards across various sectors.
Despite having a small delegation, Australia has made significant contributions to standards development and has played an influential role in shaping international mechanisms.
Through collaboration with other countries, Australia strives to tailor mechanisms to accommodate delegations of different sizes. However, limited resources and time pose challenges to participation in standards development. In this regard, Australia has received support from nonprofit organisations and from government, enabling experts to participate voluntarily in the development of standards.
Context is highlighted as a crucial element for effective engagement in standards development.
Australia’s experts have been actively involved in developing white papers that provide the necessary background and context for standards documents. This ensures that stakeholders have a comprehensive understanding of the issues at hand, fostering informed discussions and decision-making processes.
The analysis also highlights the challenges faced by SMEs in the uptake of standards.
Larger organisations tend to adopt standards more readily, leaving SMEs at a disadvantage. Efforts are underway to address these challenges and make standards more accessible and fit for purpose for SMEs. This ongoing discussion aims to create a more inclusive environment for all stakeholders, regardless of their size or resources.
The significance of stakeholder inclusion is emphasised throughout the analysis.
Regardless of delegation size, stakeholder engagement is seen as critical in effective standards development. Australia has actively collaborated with other countries to ensure that mechanisms and processes are tailored to their respective sizes, highlighting the importance of inclusiveness in shaping international standards.
Standards are seen as enablers of interoperability, promoting harmonisation of varied perspectives in AI regulations.
Different regulatory initiatives and practices in AI are deemed beneficial, and standards play a key role in facilitating interoperability and bridging gaps between different approaches.
Moreover, the adoption of AI standards is advocated as a means to learn from international best practices.
Experts from diverse backgrounds can engage in discussions, enabling nations to develop policies and grow in a responsible manner. The focus lies on using AI responsibly and scaling its application through the use of interoperability standards.
In conclusion, the analysis underscores the importance of AI standardisation in international governance.
It highlights various viewpoints on the subject, including the agile nature of AI standardisation, the need for industry-informed regulation, the significance of education and awareness, the role of context, the challenges faced by SMEs, the importance of stakeholder inclusion, and the benefits of interoperability and learning from international best practices.
The analysis provides valuable insights for policymakers, industry professionals, and stakeholders involved in AI standardisation and governance.
Report
During the session, the role of AI standards in the responsible use and development of AI was thoroughly explored. The focus was placed on the importance of multi-stakeholder participation and international cooperation in developing these standards. It was recognized that standards provide a specific governance tool for ensuring the responsible adoption and implementation of AI technology.
In line with this, the UK launched the AI Standards Hub, a collaborative initiative involving the Alan Turing Institute, the British Standards Institution, and the National Physical Laboratory.
The aim of this initiative is to increase awareness and participation in AI standardization efforts. The partnership is working closely with the UK government to ensure a coordinated approach and effective implementation of AI standards.
Florian Ostmann, the head of AI Governance and Regulatory Innovation at the Alan Turing Institute, stressed the significance of international cooperation and multi-stakeholder participation in making AI standards a success.
He emphasized the need for a collective effort involving various stakeholders to establish effective frameworks and guidelines for AI development and use. The discussion highlighted the recognition of AI standards as a key factor in ensuring responsible AI practices.
The UK government’s commitment to AI standards was reiterated: the National AI Strategy, published in September 2021, highlighted the AI Standards Hub as a key deliverable.
Additionally, the AI Regulation White Paper emphasized the role of standards in implementing a context-specific, risk-based, and decentralized regulatory approach. This further demonstrates the UK government’s understanding of the importance of AI standards in governing AI technology.
The AI Standards Hub actively contributes to the field of AI standardization.
It undertakes research to provide strategic direction and analysis, offers e-learning materials and in-person training events to engage stakeholders, and organizes events to gather input on AI standards. By conducting these activities, the AI Standards Hub aims to ensure a comprehensive approach to addressing the needs and requirements of AI standardization.
The discussion also highlighted the significance of considering a wider landscape of AI standards.
While the AI Standards Hub focuses on developed standards, it was acknowledged that other organizations, like the IETF, also contribute to the development of AI standards. This wider perspective helps in gaining a holistic understanding of AI standards and their implications in various contexts.
Florian Ostmann expressed a desire to continue the discussion on standards and AI, indicating that the session had only scratched the surface of this vast topic.
He welcomed ideas for collaboration from around the world, underscoring the importance of international cooperation in shaping AI standards and governance.
In conclusion, the session emphasized the role of AI standards in the responsible use and development of AI technology.
It highlighted the significance of multi-stakeholder participation, international cooperation, and the need to consider a wider landscape of AI standards. The UK’s AI Standards Hub, in collaboration with the government, is actively working towards increasing awareness and participation in AI standardization.
Florian Ostmann’s insights further emphasized the importance of international collaboration and the need for ongoing discussions on AI standards and governance.
Report
Standards are voluntary codes of best practice that companies adhere to. They assure quality, safety, environmental targets, ethical development, and promote interoperability between products.
Adhering to standards helps build trust between organizations and consumers.
Standards also facilitate market access and link to other government mechanisms. Aligning with standards allows companies to enter new markets and enhance competitiveness. Interoperability ensures seamless collaboration between different systems, promoting knowledge sharing and technology transfer.
The adoption of standards provides benefits such as quality assurance, safety, and interoperability.
Compliance ensures that products and services meet defined norms and requirements, instilling confidence in their reliability and performance. Interoperability allows for the exchange of information and collaboration, fostering innovation and advancements.
In conclusion, the AI Standards Hub promotes responsible AI use and engages stakeholders in international AI standardization.
It fosters the development and adoption of international standards to ensure ethical AI use. Standards offer benefits like quality assurance, safety, and interoperability, building trust between organizations and consumers, enhancing market access, and linking to government mechanisms. The adoption of standards is crucial for responsible consumption, sustainable production, and industry innovation.
Report
The UK government recognizes the importance of AI standards in the regulatory framework for AI, as highlighted in the recent AI White Paper. It emphasizes the significance of standards and other tools in AI governance, and digital standards are crucial for effectively implementing the government’s AI policy.
To ensure effective standardization, the UK government has consulted stakeholders to identify challenges in the UK.
The consultation aims to provide practical tools for stakeholders to engage in the standardization ecosystem, promoting participation, collaboration, and innovation in AI standards.
The establishment of the AI Standards Hub demonstrates the UK government’s commitment to reducing barriers to AI standards.
The hub, established a year ago, has made significant contributions to the understanding of AI standards in the UK. Independent evaluation acknowledges the positive impact of the hub in overcoming obstacles and promoting AI standards adoption.
The UK government plans to expand the AI Standards Hub and foster international collaborations.
This growth and increased collaboration will enhance global efforts towards achieving AI standards, benefiting industries and infrastructure. Collaboration with international partners aims to create synergies between AI governance and standards.
Representation of all stakeholder groups, including small to medium businesses and civil society, is crucial in standard development organizations.
However, small to medium digital technology companies and civil society face challenges in participating effectively due to resource and expertise limitations. Even the government, as a key stakeholder, can lack the necessary technical expertise and resources.
The UK government is actively working to improve representation and diversity in standard development organizations.
Initiatives include developing a talent pipeline to increase diversity and collaborating with international partners and organizations such as the Internet Governance Forum’s Multistakeholder Advisory Group. Existing organizations like BSI and the IEC contribute to efforts for diverse and inclusive standards development organizations.
In conclusion, the UK government recognizes the importance of AI standards in the regulatory framework for AI and actively works towards their implementation.
Consultation with stakeholders, establishment of the AI Standards Hub, and efforts to increase international collaborations reduce barriers and promote a thriving standardization ecosystem. Initiatives aim to ensure representation of all stakeholder groups, fostering diversity and inclusion. These actions contribute to advancements in the field of AI and promote sustainable development across sectors.
Report
The AI Act introduced by the European Union aims to govern and assess AI systems, particularly high-risk ones. It sets out five principles and establishes seven essential requirements for these systems. The act underscores the need for collaboration and global standards to ensure fair and consistent AI governance.
By adhering to shared standards, stakeholders can operate on a level playing field.
The AI Standards Hub is a valuable resource that promotes global cooperation. It offers a comprehensive database of AI standards and policies, accessible worldwide. The hub facilitates collaboration among stakeholders, enabling them to align efforts and work towards common goals.
Additionally, it provides e-learning materials to enhance understanding of AI standards.
Moreover, the AI Standards Hub strives to promote inclusive access to AI standards and policies. It encourages stakeholders from diverse backgrounds and industries to contribute and participate in standard development and implementation.
This inclusive approach ensures comprehensive and effective AI governance.
The partnership between the AI Standards Hub and international organizations, such as the OECD, further demonstrates the significance of global cooperation in this field. By leveraging expertise and resources from like-minded institutions, the hub fosters a collective effort to tackle AI-related challenges and opportunities.
In summary, the EU AI Act and the AI Standards Hub emphasize the importance of collaboration, global standards, and inclusive access to AI standards and policies.
By working together, stakeholders can establish a harmonized approach to AI governance, promoting ethical and responsible use of AI technologies across industries and regions.
Report
International cooperation is crucial for the standardization of AI regulation, and Singapore actively participates in this process. The country closely collaborates with other nations and engages in multilateral processes to align its AI practices and contribute to global standards. Singapore has initiated a mapping project with the National Institute of Standards and Technology (NIST) to ensure the alignment of its AI practices.
In addition, multi-stakeholder engagement is considered essential for the technical development and sharing of AI knowledge.
Singapore leads in this area by creating the AI Verify Testing Framework and Toolkit, which provides comprehensive tests for fairness, explainability, and robustness of AI systems. This initiative is open-source, allowing global community contribution and engagement. The AI Verify Toolkit supports responsible AI implementation.
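The report does not detail how AI Verify’s tests are implemented, but the kind of fairness check such a toolkit automates can be illustrated with a common metric, the demographic parity difference. This is an illustrative sketch only, not the AI Verify API; the function name and data are invented for the example.

```python
# Illustrative only: a demographic parity check of the kind that
# fairness test suites (such as Singapore's open-source AI Verify
# toolkit) automate. Not AI Verify's actual API.

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-prediction
    rates across groups.

    predictions: list of 0/1 model outputs
    groups: parallel list of group labels (e.g. "A" / "B")
    """
    rates = {}
    for label in set(groups):
        selected = [p for p, g in zip(predictions, groups) if g == label]
        rates[label] = sum(selected) / len(selected)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Hypothetical model outputs for two demographic groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(preds, groups)
print(f"demographic parity gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

A gap near zero suggests the model selects members of each group at similar rates; a toolkit would typically report this alongside robustness and explainability results and compare it against a threshold chosen by the deployer.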
Adherence to AI guidelines is important, and the Singapore government plays an active role in setting guidelines for organizations.
Implementing these guidelines ensures responsible AI implementation. The government also utilizes the AI Verify Testing Framework and Toolkit to validate the implementation of responsible AI requirements.
Given Singapore’s limited resources, the country strategically focuses its efforts on specific areas where it can contribute to global AI conversations.
Singapore adopts existing international efforts where possible and fills gaps to make a valuable contribution. Despite being a small country, Singapore recognizes the significance of its role in standard setting and strives to make a meaningful impact.
The Singapore government actively engages with industry members to incorporate a broad perspective in AI development.
Input from these companies is valued to create a comprehensive and inclusive framework for responsible AI implementation.
The establishment of the AI Verify Foundation provides a platform for all interested organizations to contribute to AI standards. The open-source platform is not limited by organization size or location, welcoming diverse perspectives.
Work done on the AI Verify Foundation platform is rationalized at the national level in Singapore and supported globally through various platforms, such as the OECD, GPAI, or ISO.
In conclusion, Singapore recognizes the importance of international cooperation, multi-stakeholder engagement, adherence to guidelines, strategic resource management, and industry partnerships in standardizing AI regulation.
The country’s active involvement in initiatives such as the AI Verify Testing Framework and Toolkit and the AI Verify Foundation demonstrates its commitment to responsible AI development and global AI conversations. The emphasis on harmonized or aligned standards by Wan Sie Lee further highlights the need for a unified approach to AI regulation.