Searching for Standards: The Global Competition to Govern AI | IGF 2023
Event report
Speakers and Moderators
Speakers:
- Kyoko Yoshinaga, Civil Society, Asia-Pacific Group
- Tomiwa Ilori, Civil Society, African Group
- Simon Chesterman, Government, Asia-Pacific Group
- Carlos Affonso Souza, Civil Society, Latin American and Caribbean Group (GRULAC)
- Gabriela Ramos, Intergovernmental Organization
- Courtney Radsch, Civil Society, Western European and Others Group (WEOG)
Moderators:
- Michael Karanicolas, Civil Society, Western European and Others Group (WEOG)
Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.
Session report
Michael Karanicolas
During a session on AI governance, organized by the School of Law and the School of Engineering at UCLA, the Yale Information Society Project, and the Georgetown Institute for Technology Law and Policy, Michael Karanicolas hosted a discussion on emerging regulatory trends around the world. The focus was on major regulatory blocs such as China, the US, and the EU, and their influence on AI development globally.
The session aimed to explore the tension between rule-making within these major regulatory blocs and the impacts of AI outside of this privileged minority. It recognized their dominant position and sought to understand how they shape AI governance globally. The discussion highlighted the need to recognize the power dynamics at play and to ensure that regulatory decisions made within these blocs do not ignore wider issues and potential negative ramifications for AI development on a global scale.
Michael Karanicolas encouraged interactive participation from the audience, inviting comments and engagement from all present. He stressed the importance of active participation over passive listening, fostering an environment that encouraged inclusive and thoughtful discussions.
The speakers also delved into the globalised nature of AI and the challenges it poses to national regulators. Because AI consists of data resources, software programs, networks, and computing devices, it operates within globalised markets. The internet has enabled the rapid distribution of applications and data resources, making it difficult for national governments to control and regulate the development of AI effectively. The session emphasised that national governments alone cannot meet the challenge of regulating AI, calling for partnerships and collaborative efforts to address the global nature of AI governance.
Another topic of discussion revolved around the enforcement of intellectual property (IP) rights and privacy rights in the online world. It was noted that the enforcement of IP rights online is significantly stronger compared to the enforcement of privacy rights. This discrepancy is seen as a result of the early prioritisation of addressing harms related to IP infringement, while privacy rights were not given the same level of attention in regulatory efforts. The session highlighted the need to be deliberate and careful in selecting how harms are understood and prioritised in current regulatory efforts to ensure a balance between different aspects of AI governance.
Engagement, mutual learning, and sharing of best practices were seen as crucial in the field of AI regulation. The session emphasised the benefits of these collaborative approaches, which enable regulators to stay updated on the latest developments and challenges in AI governance. It also emphasised the importance of factoring local contexts into regulatory processes. A one-size-fits-all approach, where countries simply adopt an EU or American model without considering their unique circumstances, was deemed problematic. It was concluded that for effective AI regulation, it is essential to develop regulatory structures that fit the purpose and are sensitive to the local context.
In conclusion, the session on AI governance hosted by Michael Karanicolas shed light on the influence of major regulatory blocs on AI development globally. It emphasised the need for inclusive and participatory approaches in AI governance and highlighted the challenges national governments face in regulating AI. The session also underscored the need for a balanced approach that weighs different aspects of AI governance, including intellectual property rights and privacy rights. The importance of engagement, mutual learning, and the consideration of local contexts in regulatory processes was also highlighted.
Tomiwa Ilori
AI governance in Africa is still in its infancy: at least 466 AI-related policy and governance items have been identified across the African region, but there is currently no major treaty, law, or standard specifically addressing AI governance on the continent. Despite this, some African countries have already taken steps to develop their own national AI policies. For instance, Mauritius, Kenya, and Egypt have established national AI policies, indicating growing interest in AI governance among African nations.
Interest in AI governance is not limited to governments alone. Various stakeholders in Africa, including multilateral organizations, publicly funded research institutions, academia, and the private sector, are increasingly recognizing the importance of AI governance. This indicates a collective recognition of the need to regulate and guide the development and use of artificial intelligence within the region. In fact, the Kenyan government has expressed its intention to pass a law aimed at regulating AI systems, further demonstrating the commitment towards responsible AI governance in Africa.
However, the region often relies on importing standards rather than actively participating in their design and development. This leaves African nations vulnerable to becoming pawns or testing grounds for potentially inadequate AI governance attempts, and highlights the need for African nations to actively engage in shaping AI standards rather than merely adapting to standards set by external entities.
On a positive note, smaller nations in Africa have the potential to make a significant impact by strategically collaborating with like-minded initiatives. International politics often stifle the boldness of smaller nations, but when it comes to AI governance, smaller nations can leverage partnerships and collaborations to amplify their voices and push for responsible AI practices. By working together with others who share similar goals and intended results, the journey towards achieving effective AI governance in Africa could be expedited.
In conclusion, AI governance in Africa is still in its early stages, but the interest and efforts to establish responsible AI policies and regulations are steadily growing. While there is currently no major treaty or law specifically addressing AI governance in Africa, countries like Mauritius, Kenya, and Egypt have already taken steps to develop their own national AI policies. Moreover, various stakeholders, including governments, multilateral organizations, academia, and the private sector, are recognizing the significance of AI governance in Africa. Despite the challenges that smaller nations in Africa may face, strategic collaborations and partnerships can empower them to actively shape the future of AI governance in the region.
Carlos Affonso Souza
In Latin America, several countries, including Argentina, Brazil, Colombia, Peru, and Mexico, are actively engaging in discussions and actions related to the governance and regulation of Artificial Intelligence (AI). This reflects a growing recognition of the need to address the ethical implications and potential risks associated with AI technology. The process of implementing AI regulation typically involves three stages: the establishment of broad ethical principles, the development of national strategies, and the enactment of hard laws.
However, different countries in Latin America are at varying stages of this regulatory process, which is influenced by their unique priorities, approaches, and long-term visions. Each country has its specific perspective on how AI will drive economic, political, and cultural changes within society. Accordingly, they are implementing national strategies and specific regulations through diverse mechanisms.
One of the challenges in regulating AI in the majority world lies in the nature of the technology itself. AI can often be invisible and intangible, making it difficult to grasp and regulate effectively. This creates a need for countries in the majority world to develop their own regulations and governance frameworks for AI.
Moreover, these countries primarily serve as users of AI applications rather than developers, making it even more crucial to establish regulations that address not only the creation but also the use of AI applications. This highlights the importance of ensuring that AI technologies are used ethically and responsibly, considering the potential impact on individuals and society.
Drawing from the experience of internet regulation, which has dealt with issues such as copyright, freedom of expression, and personal data protection, can provide valuable insights when considering AI regulation. The development of personal data protection laws and decisions on platform liability are also likely to significantly influence the shape of AI regulation.
Understanding the different types of AI and the nature of the damages they can cause is essential for effective regulation. It is argued that AI should not be viewed as either fully autonomous or merely dumb, but as a tool that can produce both harm and benefit. Algorithmic decisions are not made autonomously or unknowingly; they reflect biases in their design or perform the functions they were built to perform.
Countries’ motivations for regulating AI vary. Some view it as a status symbol of being future-oriented, while others believe it is important to learn from regulation efforts abroad and develop innovative solutions tailored to their own contexts. There is a tendency to adopt European solutions for AI regulation, even if they may not function optimally. This adoption is driven by the desire to demonstrate that efforts are being made towards regulating AI.
In conclusion, Latin American countries are actively engaging in discussions and actions to regulate AI, recognizing the need to address its ethical implications and potential risks. The implementation of AI regulation involves multiple stages, and countries are at different phases of this process. Challenges arise due to the intangible nature of AI, which requires countries to create their own regulations. The use of AI applications, as well as the type and nature of damages caused by AI, are important considerations for regulation. The experience of internet regulation can provide useful insights for AI regulation. The motivations for regulating AI vary among countries, and there is a tendency to adopt European solutions. Despite the shortcomings of these solutions, countries still adopt them to show progress in AI regulation.
Irakli Khodeli
The UNESCO recommendation on AI ethics has become a critical guide for global AI governance. It was adopted two years ago by 193 member states, demonstrating its widespread acceptance and importance. The principles put forward by UNESCO are firmly rooted in fundamental values such as human rights, human dignity, diversity, environmental sustainability, and peaceful societies. These principles aim to provide a solid ethical foundation for the development and deployment of AI technologies.
To ensure the practical application of these principles, UNESCO has operationalized them into 11 different policy contexts. This highlights the organization’s commitment to bridging the gap between theoretical principles and practical implementation. By providing specific policy contexts, UNESCO offers concrete guidance for governments and other stakeholders to incorporate AI ethics into their decision-making processes.
One of the key arguments put forth by UNESCO is that AI governance should be grounded in gender equality and environmental sustainability. The organization believes that these two aspects are often overlooked in global discussions on AI ethics and governance. By highlighting the need to disassociate gender discussions from general discrimination discussions and emphasising environmental sustainability, UNESCO aims to bring attention to these crucial issues.
Furthermore, UNESCO emphasises the significant risks posed by AI, ranging from minor to catastrophic harms. The organization argues that these risks are closely intertwined with the pillars of the United Nations, such as sustainable development, human rights, gender equality, and peace. Global governance of AI is therefore deemed critical to avoid jeopardizing other multilateral priorities.
While global governance is essential, UNESCO also recognises the significant role of national governments in AI governance. Successful regulation and implementation of AI policies ultimately occur at the national level. It is the responsibility of national governments to establish the necessary institutions and laws to govern AI technologies effectively. This highlights the importance of collaboration between national governments and international organisations like UNESCO.
In terms of regulation, it is evident that successful regulation of any technology, including AI, requires a multi-layered approach. Regulatory frameworks must exist at different levels – global, regional, national, and even sub-national – to ensure comprehensive and effective governance. The ongoing conversation at the United Nations revolves around determining the appropriate regulatory mechanisms for AI. Regional organisations such as the European Union, African Union, and ASEAN already play significant roles in AI regulation. Meanwhile, countries themselves are indispensable in enforcing regulatory mechanisms at the national level.
To achieve coordination and compatibility between different layers of regulation, various stakeholders, including the UN, European Union, African Union, OECD, and ASEAN, are mentioned as necessary participants. The creation of a global governance mechanism is advocated to ensure interoperability and coordination among different levels of regulation, ultimately facilitating effective AI governance on a global scale.
Additionally, bioethics is highlighted as a concrete example of how a multi-level governance model can function successfully. UNESCO’s Universal Declaration on Bioethics and Human Rights, along with the Council of Europe’s Oviedo Convention, serve as global and regional governance examples, respectively. These principles are then translated into binding regulations at the country level, further supporting the notion that a multi-level approach can be effective in governing complex issues like AI ethics.
In conclusion, the UNESCO recommendation on AI ethics provides crucial guidance for global AI governance. By grounding AI ethics in fundamental values, providing specific policy contexts, and emphasising the importance of gender equality and environmental sustainability, UNESCO aims to ensure that AI technologies are developed and deployed responsibly. This requires collaboration between international organisations, national governments, and other stakeholders to establish regulatory frameworks at different levels. Ultimately, a global governance mechanism is advocated to coordinate and ensure compatibility between these levels of regulation.
Kyoko Yoshinaga
Japan takes a soft law approach to AI governance, using non-binding international frameworks and principles for AI R&D. These soft laws guide Japanese companies in developing their own AI policies, allowing flexibility and adaptation. In parallel, Japan amends sector-specific hard laws to enhance transparency and fairness in the AI industry. Companies like Sony and Fujitsu have already developed AI policies, framing responsible AI as part of corporate social responsibility and ESG practices, and publicly accessible AI policies are encouraged to promote transparency and accountability.

Japan also draws on existing frameworks, such as the Information Security Governance Policy Framework, to establish robust AI governance. Each government should tailor AI regulations to its own context, considering factors like corporate culture and technology level. Hard laws targeting AI risks can be counterproductive because those risks vary widely, while personal data protection laws remain essential for addressing the privacy concerns raised by AI.
Simon Chesterman
Simon Chesterman's remarks raised several key points regarding AI regulation and governance. Firstly, jurisdictions are wary of both over-regulating and under-regulating AI. Over-regulation, especially in smaller jurisdictions like Singapore, might cause tech companies to take their innovation elsewhere, while under-regulation may expose citizens to unforeseen risks. This underscores the need to find the right balance in AI regulation.
Secondly, it was argued that a new set of rules is not necessary to regulate AI: existing laws are capable of effectively governing most AI use cases. The real challenge lies in applying these existing rules to new and emerging uses of AI. Despite this challenge, the prevailing sentiment was positive about the effectiveness of current regulations in governing AI.
Thirdly, Singapore’s approach to AI governance is highlighted. The focus of Singapore’s AI governance framework is on human-centrality and transparency. Rather than creating new laws, Singapore has made adjustments to existing ones to accommodate AI, such as changing the Road Traffic Act to allow for the use of autonomous vehicles. This approach reflects Singapore’s commitment to ensuring human-centrality and transparency in AI governance.
Additionally, it was noted that bias in AI systems is already covered under anti-discrimination laws. This highlights the importance of ensuring that AI systems are not prejudiced or discriminatory, in alignment with existing law.
The session also emphasised the need for companies to police themselves regarding AI standards. Singapore has released a tool called AI Verify, which assists organisations in self-assessing their AI systems and evaluating whether further improvements are needed. This self-regulation approach was viewed positively, highlighting the responsibility of companies in ensuring ethical and compliant AI practices.
It was further acknowledged that smaller jurisdictions face particular challenges in AI regulation, including deciding when and how to regulate and addressing the concentration of power in private hands. These issues reflect the delicate balance that smaller jurisdictions must navigate to regulate AI effectively.
The influence of Western technology companies on AI regulations is another notable observation. The principles of AI regulation can be traced back to these companies, and public awareness and concern about the risks of AI have been triggered by events like the Cambridge Analytica scandal. This implies that the regulations of AI are being influenced by the practices and actions of primarily Western technology companies.
Regulatory sandboxes, particularly in the fintech sector, are highlighted as a useful technique for fostering innovation. The Monetary Authority of Singapore has utilized regulatory sandboxes to reduce risks and enable testing of new use cases for AI in the fintech sector.
In terms of balancing regulation and innovation, a careful approach was emphasised. The Personal Data Protection Act in Singapore aims to strike a balance between users' rights and the needs of businesses, underscoring the importance of avoiding excessive regulation that may drive innovation elsewhere.
Furthermore, the question of responsibility for the output generated by AI systems was raised: accountability must be taken for the outcomes and impact of AI systems. This aligns with the broader goal of achieving peace, justice, and strong institutions.
In conclusion, the discussion highlighted various aspects of AI regulation and governance: the need to strike a balance between over-regulation and under-regulation, the capacity of existing laws to govern AI, and the importance of human-centrality and transparency in AI governance. Smaller jurisdictions face particular challenges, and the influence of Western technology companies is evident. Regulatory sandboxes are a useful tool, and responsibility for the output of AI systems must be taken seriously.
Audience
During the discussion on regulating artificial intelligence (AI), several key challenges and considerations were brought forward. One of the main challenges highlighted was the need to strike a balance in regulating generative AI, which has caused disruptive effects. This task proves to be challenging due to the complex nature of generative AI and its potential impact on multiple sectors. It was noted that the national AI policy of Pakistan, for example, is still in the draft stage and is open for input from various stakeholders.
Another crucial consideration is the measurement of risks associated with AI usage. The speaker from the Australian National Science Agency emphasized the importance of assessing the risks and trade-offs involved in AI applications. There was a call for an international research alliance to explore how to effectively measure these risks. This approach aims to guide policymakers and regulators in making informed decisions about the use of AI.
The discussion also explored the need for context-based trade-offs in AI usage. One example provided was facial recognition for blind people: blind individuals want the same facial recognition ability that sighted individuals enjoy, yet legislation motivated by the technology's risks can inhibit the development of facial recognition tools for them. This highlights the need to carefully consider the trade-offs and context-specific implications of AI applications.
The global nature of AI was another topic of concern. It was pointed out that AI applications and data can easily be distributed globally through the internet, making it difficult for national governments alone to regulate AI effectively. This observation indicates the necessity of international collaboration and partnerships in regulating AI in order to mitigate any potential risks and ensure responsible use.
The impact of jurisdiction size on regulation was also discussed. The example of Singapore’s small jurisdiction size potentially driving businesses away due to regulations was mentioned. However, it was suggested that Singapore’s successful publicly-owned companies could serve as testing grounds for regulation implementation. This would allow for experimentation and learning about what works and what consequences may arise.
Data governance and standard-setting bodies were also acknowledged as influential in AI regulation. Trade associations and private sector standard-setting bodies were highlighted for their significant role. However, it was noted that these structures can sometimes work at cross-purposes and compete, potentially creating conflicts. This calls for a careful consideration of the interaction between different bodies involved in norm-setting processes.
The issue of data granularity in the global South was raised, highlighting a potential risk for AI. It was noted that the global South might not have the same fine granularity of data available as the global North, which may lead to risks in the application of AI. This disparity emphasizes the need to address power dynamics between the global North and South to ensure a fair and equitable AI practice.
Several arguments were made regarding the role of the private sector in AI regulation and standard-setting. The host called for private sector participation in the discussion, recognizing the importance of their involvement. However, concerns were expressed about potential discrimination in AI systems that learn from massive datasets: the shift from hand-crafted algorithms in the past to learning from massive data today raises the risk of bias against groups that do not produce much data for AI to learn from.
The speakers also emphasized the importance of multi-stakeholder engagement in regulation and standard-setting. Meaningful multi-stakeholder processes were deemed necessary for crafting effective standards and regulations for AI. This approach promotes inclusivity and ensures that various perspectives and interests are considered.
Current models of AI regulation were criticized for being inadequate, with companies sorting themselves into risk levels without comprehensive assessment. Such models were seen as box-ticking exercises rather than effective regulation measures. This critique underscores the need for improved risk assessment approaches that take into account the nuanced and evolving nature of AI technologies.
A rights-based approach focused on property rights was argued to be crucial in AI regulation. New technologies, such as AI, have created new forms of property, raising discussions around ownership and control of data. Strict definitions of digital property rights were cautioned against, as they might stifle innovation. Striking a balance between protecting property rights and fostering a dynamic AI ecosystem is essential.
The importance of understanding and measuring the impact of AI within different contexts was highlighted. The need to define ways to measure AI compliance, performance, and trust in AI systems was emphasized. It was suggested that pre-normative standards could provide a helpful framework but acknowledged the lengthy time frame required for their development and establishment as standards.
Collaboration with industry was deemed essential in the regulation of AI. Industry was seen as a valuable source of resources, case studies, and knowledge. The mutual benefit between academia and industry in research and development efforts was acknowledged, emphasizing the significance of partnerships for effective regulation and innovation.
In conclusion, the discussion on regulating AI delved into various challenges and considerations. Striking a balance in the regulation of generative AI, measuring risks associated with AI usage, addressing context-specific trade-offs, and promoting multi-stakeholder engagement were key points raised. The impact of data granularity, power dynamics, and the role of the private sector were also highlighted. Observations were made regarding the inadequacy of current AI regulation models, the need for a rights-based approach focused on property rights, and the importance of understanding and measuring the impact of AI within different contexts. Collaboration with industry was emphasized as crucial, and various arguments and evidence were presented throughout the discussion to support these points.
Courtney Radsch
In the United States, there is a strong focus on developing frameworks for the governance and regulation of artificial intelligence (AI). The White House Office of Science and Technology Policy is taking steps to create a blueprint for an AI Bill of Rights, which aims to establish guidelines and protections for the responsible use of AI. The National AI Commission Act is another initiative that seeks to promote responsible AI regulation across various government agencies.
Furthermore, several states in the US have already implemented AI legislation to address the growing impact of AI in various sectors. This reflects a recognition of the need to regulate and govern AI technologies to ensure ethical and responsible practices.
However, some argue that the current AI governance efforts are not adequately addressing the issue of market power held by a small number of tech giants, namely Meta (formerly Facebook), Google, and Amazon. These companies dominate the AI foundation models and utilize aggressive tactics to acquire and control independent AI firms. This dominance extends to key cloud computing platforms, leading to self-preference of their own AI models. Critics believe that the current market structure needs to be reshaped to eliminate anti-competitive practices and foster a more balanced and competitive environment.
Another important aspect highlighted in the discussion is the need for AI governance to address the individual components of AI. This includes factors like data, computational power, software applications, and cloud computing. Current debates on AI governance mostly focus on preventing harm and exploitation, but fail to consider these integral parts of AI systems.
The technical standards set by tech communities also come under scrutiny. While standards like HTTP, HTTPS, and robots.txt have been established, concerns have been raised regarding the accumulation of rights-protected data by big tech companies without appropriate compensation. These actions have significant political and economic implications, impacting other industries and limiting the overall fairness of the system. It is argued that a more diverse representation in the tech community is needed to neutralize big tech’s unfair data advantage.
The notion of unfettered innovation is challenged, as some argue that it may not necessarily lead to positive outcomes. The regulation of AI should encompass a broader set of policy interventions that prioritize the public interest. A risk-based approach to regulation is deemed insufficient to address the complex issues associated with AI.
The importance of data is emphasized, highlighting that it extends beyond individual user data, encompassing environmental and sensor data as well. The control over and exploitation of such valuable data by larger firms requires careful consideration and regulation.
A notable challenge highlighted is the lack of independent oversight of powerful companies, particularly by researchers outside the EU, who are underfunded. This raises concerns that companies conducting their own risk assessments may suppress or bury risky research findings, and suggests the need for independent oversight and accountability mechanisms to ensure that substantial risks associated with AI are properly addressed.
In conclusion, the governance and regulation of AI in the United States are gaining momentum, with initiatives such as the development of an AI Bill of Rights and state-level legislation. However, there are concerns regarding the market power of tech giants, the need to focus on individual components of AI, the political and economic implications of technical standards, the lack of diversity in the tech community, and the challenges of overseeing powerful companies. These issues highlight the complexity of developing effective AI governance frameworks that strike a balance between promoting innovation, protecting the public interest, and ensuring responsible and ethical AI practices.
Speakers
Speech speed
0 words per minute
Speech length
words
Speech time
0 secs
Report
During the discussion on regulating artificial intelligence (AI), several key challenges and considerations were brought forward. One of the main challenges highlighted was the need to strike a balance in regulating generative AI, which has caused disruptive effects. This task proves to be challenging due to the complex nature of generative AI and its potential impact on multiple sectors.
It was noted that the national AI policy of Pakistan, for example, is still in the draft stage and is open for input from various stakeholders.
Another crucial consideration is the measurement of risks associated with AI usage.
The speaker from the Australian National Science Agency emphasized the importance of assessing the risks and trade-offs involved in AI applications. There was a call for an international research alliance to explore how to effectively measure these risks. This approach aims to guide policymakers and regulators in making informed decisions about the use of AI.
The discussion also explored the need for context-based trade-offs in AI usage.
One example provided was facial recognition for blind people. Blind individuals want the same ability to recognise faces that sighted individuals have, yet legislation aimed at the risks of facial recognition can end up inhibiting the development of such assistive applications.
This highlights the need to carefully consider the trade-offs and context-specific implications of AI applications.
The global nature of AI was another topic of concern. It was pointed out that AI applications and data can easily be distributed globally through the internet, making it difficult for national governments alone to regulate AI effectively.
This observation indicates the necessity of international collaboration and partnerships in regulating AI in order to mitigate any potential risks and ensure responsible use.
The impact of jurisdiction size on regulation was also discussed. Singapore was cited as an example: because the jurisdiction is small, stringent regulation could drive businesses elsewhere.
However, it was suggested that Singapore’s successful publicly-owned companies could serve as testing grounds for regulation implementation. This would allow for experimentation and learning about what works and what consequences may arise.
Data governance and standard-setting bodies were also acknowledged as influential in AI regulation.
Trade associations and private sector standard-setting bodies were highlighted for their significant role. However, it was noted that these structures can sometimes work at cross-purposes and compete, potentially creating conflicts. This calls for a careful consideration of the interaction between different bodies involved in norm-setting processes.
The issue of data granularity in the global South was raised, highlighting a potential risk for AI.
It was noted that the global South might not have the same fine granularity of data available as the global North, which may lead to risks in the application of AI. This disparity emphasizes the need to address power dynamics between the global North and South to ensure a fair and equitable AI practice.
Several arguments were made regarding the role of the private sector in AI regulation and standard-setting.
The host called for private sector participation in the discussion, recognizing the importance of their involvement. At the same time, concerns were expressed about discrimination in AI systems trained on massive datasets. The shift from hand-crafted algorithms in the past to large-scale data-driven learning today raises the risk of bias against groups that generate little data for AI systems to learn from.
The speakers also emphasized the importance of multi-stakeholder engagement in regulation and standard-setting.
Meaningful multi-stakeholder processes were deemed necessary for crafting effective standards and regulations for AI. This approach promotes inclusivity and ensures that various perspectives and interests are considered.
Current models of AI regulation were criticized for being inadequate, with companies sorting themselves into risk levels without comprehensive assessment.
Such models were seen as box-ticking exercises rather than effective regulation measures. This critique underscores the need for improved risk assessment approaches that take into account the nuanced and evolving nature of AI technologies.
A rights-based approach focused on property rights was argued to be crucial in AI regulation.
New technologies, such as AI, have created new forms of property, raising discussions around ownership and control of data. Strict definitions of digital property rights were cautioned against, as they might stifle innovation. Striking a balance between protecting property rights and fostering a dynamic AI ecosystem is essential.
The importance of understanding and measuring the impact of AI within different contexts was highlighted.
The need to define ways to measure AI compliance, performance, and trust in AI systems was emphasized. It was suggested that pre-normative standards could provide a helpful framework but acknowledged the lengthy time frame required for their development and establishment as standards.
Collaboration with industry was deemed essential in the regulation of AI.
Industry was seen as a valuable source of resources, case studies, and knowledge. The mutual benefit between academia and industry in research and development efforts was acknowledged, emphasizing the significance of partnerships for effective regulation and innovation.
In conclusion, the discussion on regulating AI delved into various challenges and considerations.
Striking a balance in the regulation of generative AI, measuring risks associated with AI usage, addressing context-specific trade-offs, and promoting multi-stakeholder engagement were key points raised. The impact of data granularity, power dynamics, and the role of the private sector were also highlighted.
Observations were made regarding the inadequacy of current AI regulation models, the need for a rights-based approach focused on property rights, and the importance of understanding and measuring the impact of AI within different contexts. Collaboration with industry was emphasized as crucial, and various arguments and evidence were presented throughout the discussion to support these points.
Report
In Latin America, several countries, including Argentina, Brazil, Colombia, Peru, and Mexico, are actively engaging in discussions and actions related to the governance and regulation of Artificial Intelligence (AI). This reflects a growing recognition of the need to address the ethical implications and potential risks associated with AI technology.
The process of implementing AI regulation typically involves three stages: the establishment of broad ethical principles, the development of national strategies, and the enactment of hard laws.
However, different countries in Latin America are at varying stages of this regulatory process, which is influenced by their unique priorities, approaches, and long-term visions.
Each country has its specific perspective on how AI will drive economic, political, and cultural changes within society. Accordingly, they are implementing national strategies and specific regulations through diverse mechanisms.
One of the challenges in regulating AI in the majority world lies in the nature of the technology itself.
AI can often be invisible and intangible, making it difficult to grasp and regulate effectively. This creates a need for countries in the majority world to develop their own regulations and governance frameworks for AI.
Moreover, these countries primarily serve as users of AI applications rather than developers, making it even more crucial to establish regulations that address not only the creation but also the use of AI applications.
This highlights the importance of ensuring that AI technologies are used ethically and responsibly, considering the potential impact on individuals and society.
Drawing from the experience of internet regulation, which has dealt with issues such as copyright, freedom of expression, and personal data protection, can provide valuable insights when considering AI regulation.
The development of personal data protection laws and decisions on platform liability are also likely to significantly influence the shape of AI regulation.
Understanding the different types of AI and the nature of the damages they can cause is essential for effective regulation.
It is argued that AI should be viewed neither as fully autonomous nor as a dumb instrument, but as a tool that can produce both harm and profit. Algorithmic decisions are not made autonomously or in a vacuum; they reflect biases in design or simply fulfil their intended functions.
Countries’ motivations for regulating AI vary.
Some view it as a status symbol of being future-oriented, while others believe it is important to learn from regulation efforts abroad and develop innovative solutions tailored to their own contexts. There is a tendency to adopt European solutions for AI regulation, even if they may not function optimally.
This adoption is driven by the desire to demonstrate that efforts are being made towards regulating AI.
In conclusion, Latin American countries are actively engaging in discussions and actions to regulate AI, recognizing the need to address its ethical implications and potential risks.
The implementation of AI regulation involves multiple stages, and countries are at different phases of this process. Challenges arise due to the intangible nature of AI, which requires countries to create their own regulations. The use of AI applications, as well as the type and nature of damages caused by AI, are important considerations for regulation.
The experience of internet regulation can provide useful insights for AI regulation. The motivations for regulating AI vary among countries, and there is a tendency to adopt European solutions. Despite the shortcomings of these solutions, countries still adopt them to show progress in AI regulation.
Report
In the United States, there is a strong focus on developing frameworks for the governance and regulation of artificial intelligence (AI). The White House Office of Science and Technology Policy is taking steps to create a blueprint for an AI Bill of Rights, which aims to establish guidelines and protections for the responsible use of AI.
The National AI Commission Act is another initiative that seeks to promote responsible AI regulation across various government agencies.
Furthermore, several states in the US have already implemented AI legislation to address the growing impact of AI in various sectors.
This reflects a recognition of the need to regulate and govern AI technologies to ensure ethical and responsible practices.
However, some argue that the current AI governance efforts are not adequately addressing the issue of market power held by a small number of tech giants, namely Meta (formerly Facebook), Google, and Amazon.
These companies dominate the AI foundation models and utilize aggressive tactics to acquire and control independent AI firms. This dominance extends to key cloud computing platforms, leading to self-preference of their own AI models. Critics believe that the current market structure needs to be reshaped to eliminate anti-competitive practices and foster a more balanced and competitive environment.
Another important aspect highlighted in the discussion is the need for AI governance to address the individual components of AI.
This includes factors like data, computational power, software applications, and cloud computing. Current debates on AI governance mostly focus on preventing harm and exploitation, but fail to consider these integral parts of AI systems.
The technical standards set by tech communities also come under scrutiny.
While standards like HTTP, HTTPS, and robots.txt have been established, concerns have been raised regarding the accumulation of rights-protected data by big tech companies without appropriate compensation. These actions have significant political and economic implications, impacting other industries and limiting the overall fairness of the system.
It is argued that a more diverse representation in the tech community is needed to neutralize big tech’s unfair data advantage.
The notion of unfettered innovation is challenged, as some argue that it may not necessarily lead to positive outcomes.
The regulation of AI should encompass a broader set of policy interventions that prioritize the public interest. A risk-based approach to regulation is deemed insufficient to address the complex issues associated with AI.
The importance of data is emphasized, highlighting that it extends beyond individual user data, encompassing environmental and sensor data as well.
The control over and exploitation of such valuable data by larger firms requires careful consideration and regulation.
A notable challenge highlighted is the lack of oversight of powerful companies, particularly for non-EU researchers due to underfunding. This raises concerns about the suppression or burying of risky research findings by companies conducting their own risk assessments.
It suggests the need for independent oversight and accountability mechanisms to ensure that substantial risks associated with AI are properly addressed.
In conclusion, the governance and regulation of AI in the United States are gaining momentum, with initiatives such as the development of an AI Bill of Rights and state-level legislation.
However, there are concerns regarding the market power of tech giants, the need to focus on individual components of AI, the political and economic implications of technical standards, the lack of diversity in the tech community, and the challenges of overseeing powerful companies.
These issues highlight the complexity of developing effective AI governance frameworks that strike a balance between promoting innovation, protecting the public interest, and ensuring responsible and ethical AI practices.
Report
UNESCO's Recommendation on the Ethics of Artificial Intelligence has become a critical guide for global AI governance. It was adopted two years ago by all 193 member states, demonstrating its widespread acceptance and importance. The principles it puts forward are firmly rooted in fundamental values such as human rights, human dignity, diversity, environmental sustainability, and peaceful societies.
These principles aim to provide a solid ethical foundation for the development and deployment of AI technologies.
To ensure the practical application of these principles, UNESCO has operationalized them into 11 different policy contexts. This highlights the organization’s commitment to bridging the gap between theoretical principles and practical implementation.
By providing specific policy contexts, UNESCO offers concrete guidance for governments and other stakeholders to incorporate AI ethics into their decision-making processes.
One of the key arguments put forth by UNESCO is that AI governance should be grounded in gender equality and environmental sustainability.
The organization believes that these two aspects are often overlooked in global discussions on AI ethics and governance. By highlighting the need to disassociate gender discussions from general discrimination discussions and emphasising environmental sustainability, UNESCO aims to bring attention to these crucial issues.
Furthermore, UNESCO emphasises the significant risks posed by AI, ranging from minor to catastrophic harms.
The organization argues that these risks are closely intertwined with the pillars of the United Nations, such as sustainable development, human rights, gender equality, and peace. Therefore, global governance of AI is deemed critical to avoid jeopardizing other multilateral priorities.
While global governance is essential, UNESCO also recognises the significant role of national governments in AI governance.
Successful regulation and implementation of AI policies ultimately occur at the national level. It is the responsibility of national governments to establish the necessary institutions and laws to govern AI technologies effectively. This highlights the importance of collaboration between national governments and international organisations like UNESCO.
In terms of regulation, it is evident that successful regulation of any technology, including AI, requires a multi-layered approach.
Regulatory frameworks must exist at different levels – global, regional, national, and even sub-national – to ensure comprehensive and effective governance. The ongoing conversation at the United Nations revolves around determining the appropriate regulatory mechanisms for AI. Regional organisations such as the European Union, African Union, and ASEAN already play significant roles in AI regulation.
Meanwhile, countries themselves are indispensable in enforcing regulatory mechanisms at the national level.
To achieve coordination and compatibility between different layers of regulation, various stakeholders, including the UN, European Union, African Union, OECD, and ASEAN, are mentioned as necessary participants.
The creation of a global governance mechanism is advocated to ensure interoperability and coordination among different levels of regulation, ultimately facilitating effective AI governance on a global scale.
Additionally, bioethics is highlighted as a concrete example of how a multi-level governance model can function successfully.
UNESCO’s Universal Declaration on Bioethics and Human Rights, along with the Council of Europe’s Oviedo Convention, serve as global and regional governance examples, respectively. These principles are then translated into binding regulations at the country level, further supporting the notion that a multi-level approach can be effective in governing complex issues like AI ethics.
In conclusion, the UNESCO recommendation on AI ethics provides crucial guidance for global AI governance.
By grounding AI ethics in fundamental values, providing specific policy contexts, and emphasising the importance of gender equality and environmental sustainability, UNESCO aims to ensure that AI technologies are developed and deployed responsibly. This requires collaboration between international organisations, national governments, and other stakeholders to establish regulatory frameworks at different levels.
Ultimately, a global governance mechanism is advocated to coordinate and ensure compatibility between these levels of regulation.
Report
Japan takes a soft law approach to AI governance, using non-binding international frameworks and principles for AI R&D. These soft laws guide Japanese companies in developing their own AI policies, ensuring flexibility and adaptation. Additionally, Japan amends sector-specific hard laws to enhance transparency and fairness in the AI industry.
Companies like Sony and Fujitsu have already developed AI policies, focusing on responsible AI as part of corporate social responsibility and ESG practices. Publicly accessible AI policies are encouraged to promote transparency and accountability. Japan also draws on existing frameworks, such as the Information Security Governance Policy Framework, to establish robust AI governance.
Each government should tailor AI regulations to their own context, considering factors like corporate culture and technology level. Hard laws on AI risks may be dangerous due to their varying nature, and personal data protection laws are essential for addressing privacy concerns with AI.
Report
During a session on AI governance, organized by the School of Law and the School of Engineering at UCLA, the Yale Information Society Project, and the Georgetown Institute for Technology, Law and Policy, Michael Karanicolas hosted a discussion on the development of new regulatory trends around the world.
The focus was on major regulatory blocks such as China, the US, and the EU, and their influence on AI development globally.
The session aimed to explore the tension between the rule-making within these major regulatory blocks and the impacts of AI outside of this privileged minority.
It recognized their dominant position and sought to understand their global influence in shaping AI governance. The discussion highlighted the need to recognize the power dynamics at play and ensure that the regulatory decisions made within these blocks do not ignore the wider issues and potential negative ramifications for AI development on a global scale.
Michael Karanicolas encouraged interactive participation from the audience, inviting comments and engagement from all present.
He stressed the importance of active participation over passive listening, fostering an environment that encouraged inclusive and thoughtful discussions.
The speakers also delved into the globalised nature of AI and the challenges posed by national governments in regulating it.
As AI consists of data resources, software programs, networks, and computing devices, it operates within globalised markets. The internet has enabled the rapid distribution of applications and data resources, making it difficult for national governments to control and regulate the development of AI effectively.
The session emphasised that national governments alone cannot solve the challenges and regulations of AI, calling for partnerships and collaborative efforts to address the global nature of AI governance.
Another topic of discussion revolved around the enforcement of intellectual property (IP) rights and privacy rights in the online world.
It was noted that the enforcement of IP rights online is significantly stronger compared to the enforcement of privacy rights. This discrepancy is seen as a result of the early prioritisation of addressing harms related to IP infringement, while privacy rights were not given the same level of attention in regulatory efforts.
The session highlighted the need to be deliberate and careful in selecting how harms are understood and prioritised in current regulatory efforts to ensure a balance between different aspects of AI governance.
Engagement, mutual learning, and sharing of best practices were seen as crucial in the field of AI regulation.
The session emphasised the benefits of these collaborative approaches, which enable regulators to stay updated on the latest developments and challenges in AI governance. It also emphasised the importance of factoring local contexts into regulatory processes. A one-size-fits-all approach, where countries simply adopt an EU or American model without considering their unique circumstances, was deemed problematic.
It was concluded that for effective AI regulation, it is essential to develop regulatory structures that fit the purpose and are sensitive to the local context.
In conclusion, the session on AI governance hosted by Michael Karanicolas shed light on the influence of major regulatory blocks on AI development globally.
It emphasised the need for inclusive and participatory approaches in AI governance and highlighted the challenges posed by national governments in regulating AI. The session also underscored the need for a balanced approach to prioritise different aspects of AI governance, including intellectual property rights and privacy rights.
The importance of engagement, mutual learning, and the consideration of local contexts in regulatory processes were also highlighted.
Report
The analysis of the given text reveals several key points regarding AI regulation and governance. Firstly, it is highlighted that jurisdictions are wary of both over-regulating and under-regulating AI. Over-regulation, especially in smaller jurisdictions like Singapore, might cause tech companies to opt for innovation elsewhere.
On the other hand, under-regulation may expose citizens to unforeseen risks. This underscores the need for finding the right balance in AI regulation.
Secondly, it is argued that a new set of rules is not necessary to regulate AI.
The text suggests that existing laws are capable of effectively governing most AI use cases. However, the real challenge lies in the application of these existing rules to new and emerging use cases of AI. Despite this challenge, the prevailing sentiment is positive towards the effectiveness of current regulations in governing AI.
Thirdly, Singapore’s approach to AI governance is highlighted.
The focus of Singapore’s AI governance framework is on human-centrality and transparency. Rather than creating new laws, Singapore has made adjustments to existing ones to accommodate AI, such as changing the Road Traffic Act to allow for the use of autonomous vehicles.
This approach reflects Singapore’s commitment to ensuring human-centrality and transparency in AI governance.
Additionally, it was noted that the requirement that AI systems not be biased is already covered by anti-discrimination laws. This highlights the importance of ensuring that AI systems are not prejudiced or discriminatory, in line with existing laws.
The text also emphasises the need for companies to police themselves regarding AI regulations.
Singapore has released a tool called AI Verify, which assists organizations in self-regulating their AI standards and evaluating if further improvements are needed. This self-regulation approach is viewed positively, highlighting the responsibility of companies in ensuring ethical and compliant AI practices.
Furthermore, the text acknowledges that smaller jurisdictions face challenges when it comes to AI regulation.
These challenges include deciding when and how to regulate and addressing the concentration of power in private hands. These issues reflect the delicate balance that smaller jurisdictions must navigate to effectively regulate AI.
The influence of Western technology companies on AI regulations is another notable observation.
The principles of AI regulation can be traced back to these companies, and public awareness and concern about the risks of AI have been triggered by events like the Cambridge Analytica scandal. This implies that the regulations of AI are being influenced by the practices and actions of primarily Western technology companies.
Regulatory sandboxes, particularly in the fintech sector, are highlighted as a useful technique for fostering innovation.
The Monetary Authority of Singapore has utilized regulatory sandboxes to reduce risks and enable testing of new use cases for AI in the fintech sector.
In terms of balancing regulation and innovation, the text emphasizes the need for a careful approach.
The Personal Data Protection Act in Singapore aims to strike a balance between users’ rights and the needs of businesses. This underscores the importance of avoiding excessive regulation that may drive innovation elsewhere.
Furthermore, the responsibility for the output generated by AI systems is mentioned.
It is emphasized that accountability must be taken for the outcomes and impact of AI systems. This aligns with the broader goal of achieving peace, justice, and strong institutions.
In conclusion, the text highlights various aspects of AI regulation and governance.
The need to strike a balance between over-regulation and under-regulation, the effectiveness of existing laws in governing AI, and the importance of human-centrality and transparency in AI governance are key points. It is also noted that smaller jurisdictions face challenges in AI regulation, and the influence of Western technology companies is evident.
Regulatory sandboxes are seen as a useful tool, and the responsibility for the output of AI systems is emphasized. Overall, the analysis provides valuable insights into the complex landscape of AI regulation and governance.
Report
AI governance in Africa is still in its infancy, with at least 466 policy and governance items referenced across the region. However, there is currently no major treaty, law, or standard specifically addressing AI governance in Africa. Despite this, some African countries have already taken steps to develop their own national AI policies.
For instance, countries like Mauritius, Kenya, and Egypt have established their own AI policies, indicating the growing interest in AI governance among African nations.
Interest in AI governance is not limited to governments alone. Various stakeholders in Africa, including multilateral organizations, publicly funded research institutions, academia, and the private sector, are increasingly recognizing the importance of AI governance.
This indicates a collective recognition of the need to regulate and guide the development and use of artificial intelligence within the region. In fact, the Kenyan government has expressed its intention to pass a law aimed at regulating AI systems, further demonstrating the commitment towards responsible AI governance in Africa.
However, the region often relies on importing standards rather than actively participating in the design and development of these standards.
This leaves African nations vulnerable to becoming testing grounds for potentially inadequate AI governance approaches, and highlights the need for them to engage actively in shaping AI standards rather than merely adopting standards set by external entities.
On a positive note, smaller nations in Africa have the potential to make a significant impact by strategically collaborating with like-minded initiatives.
International politics often stifle the boldness of smaller nations, but when it comes to AI governance, smaller nations can leverage partnerships and collaborations to amplify their voices and push for responsible AI practices. By working together with others who share similar goals and intended results, the journey towards achieving effective AI governance in Africa could be expedited.
In conclusion, AI governance in Africa is still in its early stages, but the interest and efforts to establish responsible AI policies and regulations are steadily growing.
While there is currently no major treaty or law specifically addressing AI governance in Africa, countries like Mauritius, Kenya, and Egypt have already taken steps to develop their own national AI policies. Moreover, various stakeholders, including governments, multilateral organizations, academia, and the private sector, are recognizing the significance of AI governance in Africa.
Despite the challenges that smaller nations in Africa may face, strategic collaborations and partnerships can empower them to actively shape the future of AI governance in the region.