State of play of major global AI Governance processes
29 May 2024 14:30h - 15:15h
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Session report
Full session report
Global AI Governance Takes Centre Stage as International Experts Convene for Inclusive Framework Development
At a significant panel discussion on the state of play of major global AI governance processes, Dr. Ebtesam Almazrouei, a renowned AI expert, moderated a session with prominent international figures. The panelists included His Excellency Hiroshi Yoshida from Japan, Thomas Schneider from Switzerland, His Excellency Shan Zhongde from China, His Excellency Dohyun Kang from Korea, Alan Davidson from the USA, and Juha Heikkilä from the European Commission.
Ambassador Thomas Schneider highlighted the importance of context-based AI regulation, emphasizing the need for a global understanding of risks and impacts rather than focusing solely on the technology. He proudly discussed the Council of Europe treaty, which aims to ensure that existing human rights and democratic principles are upheld in the context of AI use, inviting global participation in this initiative.
Juha Heikkilä from the European Commission elaborated on the EU AI Act, set to be the first comprehensive, legally binding regulation of AI. He detailed the phased implementation of the Act, designed to ensure a strong pre- and post-market enforcement system, and introduced the European AI Office, which will play a crucial role in coordinating and supervising AI regulation across the EU.
Alan Davidson from the United States discussed the US approach to AI governance, which includes voluntary commitments from AI companies, a comprehensive AI executive order, and the establishment of the US AI Safety Institute. He stressed the need for bipartisan legislation to further ensure AI safety and trust.
Shan Zhongde from China shared the country’s commitment to AI ethics and governance, outlining China’s efforts to implement a human-centered approach and practical measures to mitigate AI risks. He also extended an invitation to the World Conference on AI in Shanghai, emphasizing China’s focus on AI for good.
Hiroshi Yoshida from Japan discussed the country’s active role in international AI governance, including the Hiroshima AI process launched at the G7 in 2023. He stressed the importance of interoperable governance frameworks that allow for different implementation approaches while maintaining a common understanding of necessary actions.
Dohyun Kang from Korea provided an update on the AI Safety Summit, affirming the validity of the goals set during the UK Safety Summit and highlighting Korea’s commitment to developing specific AI safety standards and promoting inclusivity in AI governance.
The panelists agreed on the importance of collaboration and cooperation to develop inclusive AI governance frameworks aimed at harnessing AI for good. They shared a common vision of leveraging AI to benefit humanity and achieve the Sustainable Development Goals, each bringing unique perspectives and initiatives from their respective countries and regions. The consensus was on the need for interoperability, sharing of best practices, and a multi-stakeholder approach to ensure that AI serves the common good globally.
Session transcript
Introduction:
So, I would like to invite up the next panel now, which is the state of play of major global AI governance processes. And we have a moderator, Ebtesam Almazrouei. Dr. Almazrouei is the founder and CEO of AIE3 and is recognized for her pioneering Falcon AI models, including the Middle East’s first open-source LLM, Falcon 40B, and the world’s most powerful open AI model, Falcon 180B, in 2023. Please welcome her to the stage. I’ll quickly introduce the other panelists; please welcome them to the stage as I do so. His Excellency Hiroshi Yoshida has served as Vice Minister for Policy Coordination at the Ministry of Internal Affairs and Communications of Japan since 2022. Thomas Schneider is Ambassador and Director of International Affairs at the Swiss Federal Office of Communications in the Federal Department of the Environment, Transport, Energy and Communications. His Excellency Shan Zhongde serves as Vice Minister at the Ministry of Industry and Information Technology of the People’s Republic of China. His Excellency Dohyun Kang serves as Vice Minister and Head of the Office of ICT Policy at the Ministry of Science and ICT of the Republic of Korea. Alan Davidson is the Assistant Secretary of Commerce for Communications and Information and Administrator of the National Telecommunications and Information Administration. And lastly, Juha Heikkilä is Adviser for AI at the European Commission.
Ebtesam Almazrouei:
Your excellencies, respected guests, the Secretary-General of the ITU, thank you for organizing the first UN AI Governance Day. It is a crucial moment to gather everyone to discuss an important topic: AI governance. In our morning discussions today, we discussed how we can implement AI in the most secure, inclusive, and trustworthy way, and what exactly the landscape of AI is and how it will evolve. In today’s session, I would like to welcome your excellencies and respected guests from the Council of Europe, China, the United States of America, Korea, Japan, and the European Union to discuss the different AI activities and the government regulations and frameworks that your countries and governments have already put a lot of effort into shaping. So, first of all, I would like to start with Ambassador Thomas Schneider. You have successfully convened a group of countries to sign a treaty in a polarized world. What challenges did you encounter during this process? And could you also tell us which parts of the treaty you are most proud of?
Thomas Schneider:
Thank you, and thanks for convening this session. Before I go to the treaty itself, I would like to give a little bit of an outline of what role, in my view, the treaty should play in the bigger governance setting. Because there are many people that ask for one new or established institution to solve all the problems, or one law to be created that will solve all the problems; politicians and the media normally like this. But if you look, for instance, at how engines are regulated, we don’t have one UN convention on engines and everything, nor nationally. We have hundreds and thousands of technical, legal, and sociocultural norms that regulate mostly not the engine itself, but the vehicles and the machines that are using engines. We regulate them in different contexts. We regulate the people that are driving the engines. We regulate the infrastructure. We regulate or protect people affected. But all of it is context-based: it is not the engine, it’s the function of the engine, the effect of the tool. And there are different levels of harmonization. We allow people in the UK to drive on the other side of the road; we even allow them to drive here. It more or less works. In aviation, it’s probably difficult if planes land from different directions on the same airport. So there are different levels of harmonization in engine regulation. And I think the same logic should be applied to AI. It should be context-based wherever we can. It should be about risks and impacts, not the technology itself. And that is specific to every culture in many ways, to economic incentives and so on. At the same time, we need a common understanding about what we try to achieve. We need to have a global discussion about how we deal with risks, what the risks are, what we are trying to protect. So we need a coherent approach, but not necessarily one institution; rather, hundreds and thousands of pieces that work together. And the Council of Europe treaty was drafted not in a spirit to create new human rights, not to reinvent the wheel, but actually to make sure that the existing human rights and protections for democracy and the rule of law are applied in the contexts where AI is used. And it was set up not as a European process. The Council of Europe, by the way, is not the European Union. It’s an institution like the UN of Europe, with 46 member states. We had 57 states negotiating in the beginning. It’s open to anyone that cares about human rights, democracy, and rule of law. Every country can join. It’s trying to fulfill one particular piece in this clockwork of methods to make sure that human rights, democracy, and the rule of law are protected when using AI, and, at the same time, to allow for innovation. And this, of course, is not an easy thing. First of all, to bridge institutional differences between these different countries and cultures and regions of the world that were cooperating and hopefully will be cooperating. That was one of the key challenges that we faced. And also how to make sure that we protect existing rights in a dynamic way, not in a bureaucratic way, so that this instrument is fit for purpose now, but also in the future. And I think this is why it’s so helpful to have discussions like this. Because we need dialogue. We need dialogue here at the Internet Governance Forum to understand what the challenges are. How do they differ in different contexts, in different regions? How do we agree on a shared vision?
How can we create a mechanism like a Swiss clockwork, with different tools that feed into each other, so that it shows the right time, not too fast, not too slow, and it doesn’t break down? Thank you.
Ebtesam Almazrouei:
Thank you, Ambassador. My question now goes to Juha. With the implementation of the AI Act, and with the announcement two weeks ago that it will come into effect in the coming month, could you share how this directive will be translated into practice? And how can you measure the success of the European AI Act and whether it is really achieving its goals?
Juha Heikkila:
Thank you very much, and thank you very much indeed for the invitation to be on this panel. So indeed the European Union AI Act is the first comprehensive, horizontal, and legally binding regulation globally, and it will apply to both public and private providers of AI applications equally, which makes it very different from other governments’ attempts and efforts elsewhere. Technically speaking, it’s a regulation, for those legal eagles who are interested in EU law. That means that it applies equally in all the 27 member states of the European Union. I would like to first point out that it becomes applicable in stages. It will enter into force in about a month’s time; it’s just missing signatures and publication, and then it will take 20 days. So about a month from now it will enter into force, and then the first provisions will become applicable six months after this. Those provisions concern the prohibitions. Then the rules for general-purpose AI models, so large language models, generative AI, will become applicable after 12 months. And for the so-called high-risk systems, the rules will become applicable either after 24 or 36 months. This is to give providers time to adapt, to prepare for the applicability of this new legislation. It’s a risk-based piece of legislation, so it intervenes where necessary. It doesn’t regulate technology, but it regulates certain uses of technology, so the context is important, the use is important. The implementation itself is based on a strong pre- and post-market system of enforcement and supervision. So it’s a decentralized system of national notified bodies checking compliance with the AI Act requirements before high-risk systems can be used, can be placed on the EU market. These are the member states’ national notified bodies, and there are then market surveillance authorities ensuring the post-market monitoring. So there are the pre-market checks and then, of course, the post-market monitoring. This system of implementation is based on a well-established and functioning system that we have in place in the European Union on product safety; it applies similar principles. I would like to highlight the importance of one body here, which is the European AI Office. We have set up the European AI Office, and that AI Office will coordinate the work of the national notified bodies and national authorities which are involved in this. So it has coordination and monitoring tasks, to some extent, to ensure uniform application, uniform implementation. But it will also have a special role in the supervision of general-purpose AI models. The AI Office will have special powers in this regard: for example, it can do evaluations, it can request measures, particularly on general-purpose AI models which carry a systemic risk. Providers of such models have certain additional obligations rather than just transparency. So the AI Office has roles which go beyond the safety institutes which have been set up in some countries, because it has a much broader scope: the AI Act deals not just with safety, but also with the protection of health and fundamental rights. So it has quite a different profile, but it includes the safety aspect. Any cooperation that we have with safety institutes elsewhere that are being set up or have been set up will be with the AI Office.
So it will have an important role in the implementation. I should also add that it will deal with research, innovation, and deployment aspects, and also with international engagement. So it has a very broad, comprehensive set of roles. As for indicators of success, I should maybe first add that we also have a couple of other bodies: a scientific panel, which will support the implementation; an advisory forum with stakeholders; and member states’ representatives in an AI Board. On indicators, well, there could be technical, technocratic indicators of its success: for example, the number of systems that undergo conformity assessment and get the so-called CE marking, which then enables them to be put on the market, or how many are registered in the database, et cetera. However, in a way, the most important indicator is something that is hard to measure, because we think that this will increase trust in AI systems. The AI Act and its provisions, the safeguards and guardrails it provides, will increase trust in AI systems. Why is trust so important? Trust is important because trust is the sine qua non for uptake, and uptake is the sine qua non for benefits to materialize. So we need trust to have uptake, and we need uptake to have benefits, so that we can actually enjoy this technology and the potential and positive aspects it has. Thank you.
Ebtesam Almazrouei:
Thank you, Juha. While we all agree on the importance of measuring the effectiveness of the AI Act, we would also like to hear more from Alan, the Assistant Secretary from the United States of America, about the executive order, particularly given that the United States has already adopted a voluntary commitment approach with the private sector and has already issued an executive order. What are the next steps for the United States, and how can you measure their success domestically and internationally?
Alan Davidson:
Well, thank you, Dr. Almazrouei. And a quick thank you and congratulations to the ITU and to Secretary-General Doreen Bogdan-Martin for convening all of us and for hosting a very successful day already, and to all of you for joining us to discuss how we can leverage AI to help achieve our common collective goals and the Sustainable Development Goals. You know, the starting point for us has been that responsible AI innovation, with emphasis on responsible, can bring enormous benefits to people. It’s going to transform every corner of our economy. But we will only realize the promise of AI, as others have said, if we also address the serious risks that it raises today. And we’ve heard a lot about those. They include concerns about safety, security, privacy, discrimination, bias, and risks around disinformation, as we heard so eloquently on the previous panel. We also face a risk of exacerbating the inequities that already exist in our world if we don’t ensure that these advances are available to everyone. To talk about how we’ve been approaching this: domestically, the entire US government has really moved with urgency, I would say, to seize on the tremendous promise and potential risk of this moment. As you noted, last summer President Biden secured voluntary commitments, which was our starting point, from the leading AI companies to help make sure that AI systems are safe before they’re released. And these commitments helped us, and the world, I hope, get ahead of the rapid pace of development that we started to see in these frontier models. Developer commitments were just the first step. The U.S. government issued an AI executive order last fall, which is, we think, one of the most significant government actions to date on AI safety, security, and trust, and it brings the full capabilities of the U.S. government to bear in promoting innovation and trust in AI. It also lays out a very broad work program, from research to tooling to policy, to address the risks of AI and to use the authorities that already exist in law to bring to bear on these issues. Just as an example of a few of the big initiatives folks may be aware of: we have stood up the U.S. AI Safety Institute to do the technical work around security science required to address the full spectrum of AI-related risks, and just last week, with the Secretary of Commerce, we released a vision paper for that safety institute. We’re also pursuing a broad range of other initiatives. One that I’m particularly interested in, and that we’re leading going forward, is the question of open model weights and the openness of frontier models and dual-use foundation models. That domestic work, and it’s far-ranging, I think also gives us a sound basis for our international approach. You know, as we’ve noted, the international community has been working on AI governance and principles for years, and that is something we should build upon. Much of our work is around thinking about how we can leverage the tremendous potential of AI for our collective goals. Just as an example, I’ll talk about the Sustainable Development Goals, which, as many of you know, we’re on track to achieve just 12% of. On these benchmarks, we’ve plateaued on many of them; on some of them, we’re actually regressing. But studies suggest that AI could accelerate progress on 80 percent of the SDGs, in part by automating work and improving decision-making.
AI can help map soil and yield better crops. It can help us predict earthquakes, as we’ve seen in studies. All of these are the kinds of things that, optimistically, we should be harnessing, and we should be making sure that these tools are available widely to everyone. We’re working to build global momentum around that idea of harnessing AI for good. In March, the U.S. led the passage of the first-ever standalone resolution on AI in the U.N. General Assembly, and we believe that gives us a framework for leveraging AI for economic and social progress while respecting human rights and leaving no one behind. Last week’s summit in Seoul of the AI Safety Institutes, congratulations to our colleagues from Korea, was another important building block in multi-stakeholder collaboration. And the list goes on; there are many initiatives underway. I think what you’re hearing is a sense of urgency from governments to address the issues of the moment, and a recognition that if we work together, and we are committed to working together, we can capitalize on this energy and this moment of public attention, and ensure that the AI revolution is a revolution that works for everyone. Thank you.
Ebtesam Almazrouei:
Thank you, Alan. I couldn’t agree more. AI for good should be our canvas, and how we can harness the power of AI across all the 17 SDGs is a crucial step that we should all agree on as governments, industry leaders, NGOs, and academic institutes, along with how we can foster global collaboration and cooperation toward achieving the AI for good goals. Now, moving to China. Your Excellency, Mr. Shan Zhongde, and I apologize if I pronounce your name incorrectly; you can correct me if it is not the right way. China is already committed to the concept of AI ethics first and AI for good. Could you share more of the practical experience of AI governance in China, what exactly you have been doing, and what regulations and AI frameworks you have already put into practice?
SHAN Zhongde:
Thank you very much. This is a very important initiative worldwide, and we are going to promote development, so the event that we will hold will be very important. Last year, we promoted the initiative AI for good and proposed several methods, and we base ourselves on a human-centered approach so that AI will work for good. We are currently working on a consensus so that governance is at the center, and we are moving from words to reality, putting them into practice. In China, we would like to explain how we reflect the practices of a human-centered approach and how AI can be used for good. Firstly, we insist on this theory. We want to ensure that we prevent and analyze risks, so we have many warning systems; we are creating policies to do so, working with sector players to establish initiatives for open and transparent work, and focusing on open data. Some companies are already working on this as a priority. The practice of having very strong theories is very important in China. Secondly, we have a number of very positive examples. In China, we are drafting and creating different strategies and methods, and through research we are progressing. We are working on different algorithms, and in that way we are engaging in in-depth work with AI to adopt a number of policies and regulations. We are also testing technology and dividing products into different categories (finance, health, and transport systems), and AI is used in all these fields. We have specific standards for this technology. In April, China adopted a first set of projects for AI, with principles and regulations aimed at avoiding and compensating for risk. We are also working to ensure fair and equitable trade. We wish to have a human-centered approach and to avoid discrimination against users, and for this we have created and standardized different sectors. We have data notification systems that allow us to have a multi-stakeholder and multi-sectoral assessment. We have also participated in the action plan on well-being through AI. We are working actively in this field, and we are working actively with ASEAN at the global level. To conclude, I would like to say that regarding 1G and 2G, we have adopted a number of regulations, and we are focusing on technical capacities to increase and enhance our working methods; our innovative technologies have allowed us to make our mark in this field. We have also worked on a public platform for modern technologies. We insist that AI should be for good. We use digital content to create simulations, and we would like to insist on the fact that AI should be for good. Thank you.
Ebtesam Almazrouei:
Thank you, Your Excellency, for speaking about the examples that you provided and have already implemented in China: the best practices and implementation of AI, especially AI for good. Now, moving to Your Excellency, Mr. Hiroshi Yoshida. As the chair of the G20 in 2019, and with the recent launch of the Hiroshima AI Process at the G7 in 2023, Japan is actively working on international AI governance and collaboration. What outcomes do you anticipate from these efforts? And could you provide us with any updates about these collaboration efforts, please?
Hiroshi Yoshida:
Thank you. First of all, I would like to congratulate Thomas on the adoption of the Framework Convention of the Council of Europe. And yes, as already discussed in this panel and the previous one, we know that there are many risks in AI. Many aspects have been pointed out: bias in learning data, disinformation, cyber threats. The Bangladesh Minister pointed out at the previous session the problem of deepfakes in disasters, and in our country we have also experienced deepfakes during disasters caused by typhoons and other events. Deepfakes in elections have also been pointed out recently. But on the other hand, we all know that AI has big potential. What is important is that we should not hesitate to make the best use of AI because of the risks; instead, we should mitigate those risks and make the best use of AI. And that is why we started discussing AI in international forums. The first discussion started in the OECD, and then under the G20, at the 2019 Osaka Summit, we agreed on the G20 AI Principles. These are a kind of common understanding of what we should think about when building AI policy. Of course, in these four years everything has changed, and generative AI came up in the last two years, from 2022. To cope with it, we launched the Hiroshima AI Process. The concept of the Hiroshima AI Process is that we need some kind of governance framework, but it should be interoperable. Interoperable means that not every country has to take the same action: what actions should be taken is set at the same level, but how to implement them is up to each country. We know that the AI Act has been adopted in Europe, but on the other hand we have another approach; of course, voluntary commitments can be an option, but what actions should be taken needs to be interoperable. For example, the outcome of the Hiroshima AI Process asks for evaluating risks in advance of putting products onto the market. Those actions should be interoperable, so that AI developers, AI service providers, and AI users know what to do. We discussed this last year, and in December last year we agreed on a comprehensive policy framework which includes guiding principles for all AI actors, not only AI developers but also AI service providers and AI users, as well as a code of conduct for AI developers. This year we are continuing to discuss how to implement it, and a kind of monitoring mechanism is being discussed under the Italian presidency. We also launched the Hiroshima AI Process Friends Group at the beginning of this month, and now 50 countries and regions have joined it. In our country, we also established an AI Safety Institute this February, and the important thing here is also interoperability. In that institute we are now working on how to assess AI safety, and we cannot do this in our AI Safety Institute alone: it should coordinate with AI safety institutes in other countries, and such assessments should also be interoperable. Thank you very much.
Ebtesam Almazrouei:
Thank you, Your Excellency. One of the things I noticed from the morning discussions, and also this afternoon’s discussion, is that most countries have started to establish their own AI Safety Institutes, such as in Korea, Japan, and the USA. I would also like to emphasize the role of, and the important steps involved in, each country setting up its own institute or sandbox to test the best AI frameworks that can be embedded and put into place for their societies and for their governmental work. Now, going to Korea. Your Excellency Dohyun, regarding the goals set during the UK Safety Summit held in November 2023, where I contributed along with my colleagues, AI leaders, and government representatives, and we discussed many themes: last week you hosted the second AI Safety Summit in Korea. What has been implemented so far? Do you see the goals set by the UK Safety Summit as still valid, or has the Korean government perhaps taken a different approach?
Dohyun Kang:
Thank you very much for introducing me, and thank you again to the Secretary-General of the ITU for the leadership of this wonderfully organized conference. I am also glad to introduce the Seoul AI Summit and its results. If the question is whether the goals of the Bletchley Summit are still valid, then of course they are valid. The Seoul Summit is the second version, developed over the six months since Bletchley. The developed version has several points. The first is that the values and topics dealt with at the summit are diverse: one is safety, the second is innovation, and the third, which several guests have already discussed, is inclusivity. The second point is more detailed: it is almost an action plan with respect to AI safety. At the Seoul Summit, we strongly recommended networking between the AI safety institutes and emphasized global cooperation between them; these are the more detailed things. The third point relates to government action. For example, on AI ethics in the Korean case, as far as I remember, the OECD first announced its AI ethics principles in 2019, and the next year the G20 also announced AI ethics principles. After that, the Korean government established our own standard, our own ethics principles, and then we established detailed guidelines and checkpoints for developers and operators in each company. The Seoul AI Summit was composed of three parts: the leaders’ session, the ministers’ session, and the global forum. At the leaders’ session, the presidents, prime ministers, and the Secretary-General adopted the Seoul Declaration. In the Seoul Declaration, we emphasized the importance of testing and measuring AI safety; second, we addressed the various kinds of side effects of AI; and third, we recognized that international cooperation between each other must be much more enhanced, focusing on inclusivity: we also have to contribute to the digital south, or developing south, what could be called the AI south, and we have to take more detailed actions to solve this problem. The AI Safety Summit included all of these things in the Seoul Declaration. The next day, the global forum and the ministerial meeting were held. The ministerial meeting statement covered everything: the democratic environment, all the activities, culture, and the human brain, all the side effects of AI, and it especially included low-power semiconductors. That is a quick summary of the Seoul AI Summit. Thank you for the congratulatory comments on our summit. Next, it will be held in France, and from now the United Kingdom government and the Korean government will be discussing the topics for the next one; more details will be scheduled. Thanks again to our colleagues from the United Kingdom, one of our best partners, as co-organizer of this summit. Thank you very much.
Ebtesam Almazrouei:
Thank you, Your Excellency. I think now we will go to the next part of our session. We have heard from everyone: from the United States about the executive order and what they are doing in terms of AI governance and implementation, to Japan, Korea, China, and the European Union with the AI Act. Now that you have heard your colleagues and peers and what each of their governments and countries is trying to do, what is one key element you would like to incorporate from their practices into your own country’s or region’s approach to AI governance and frameworks?
Hiroshi Yoshida:
Yes, so, as I said in my previous remarks, interoperability is a very important factor for AI governance, and we want to know what other countries are doing. Knowing other countries’ policies would make our discussions of AI policy more effective, and we want to share more information on policy and have a multi-stakeholder discussion. Of course, not only governments can develop AI governance frameworks; we need multi-stakeholder discussion, and we need to know what other countries are doing. Thank you very much.
Thomas Schneider:
Thank you. Thank you. It is interesting and good to hear that all governments seem to want the same thing: they want to protect rights, they want to put people in the center, they want to allow innovation. But then, if you look at the tools, there are many ways to Rome, as the Italians say. Some give preference to voluntary commitments or regulations, others go for a horizontal law that tries to fulfill at least some of the purposes, and others have other incentives. So I think we should all cooperate together, and not just governments but all stakeholders, of course, to develop a global governance and cooperation framework that allows us to do the same in different ways that reflect our situations, our cultures, our needs. I think this is what is needed. At the same time, if we are honest, we also need tools that empower people, that create transparency and accountability, so that people can react in case governments or companies do not do what they say they do, i.e. support people, protect them, and so on. If a government or a company does damage to its own or other people, then we should have means to actually stand up and create incentives so that this does not happen, or is avoided or minimized. And therefore the Council of Europe Convention is one tool that unites all those that care about human rights, democracy, and rule of law, by agreeing on the same values but offering adaptable, agile, dynamic ways fit for every country to sign up to it, like the Cybercrime Convention did, where more than 100 countries are cooperating in a modular way, on substance, with additional protocols, but also at different levels of participation: you can sign and ratify, or you can just cooperate. So this is one of the big contributions. But on a global governance level, and this is where I want to end, fortunately we do not need to start from scratch. We have many actors that already perform important functions, like the ITU in its fields, like UNESCO, like the standards institutions, and so on. We just need to make them cooperate better and more coherently, and identify concrete gaps where additional functions or structures may be needed, so we can actually start not from zero, but from a reasonable amount of activity that we already have.
Ebtesam Almazrouei:
Thank you, Thomas.
SHAN Zhongde:
I am very glad to hear from everyone about their experiences and about AI governance over the past few days. I have heard the speeches from the ITU Secretariat many times, and I have the following thoughts. First, we should continue to strengthen norms and standards, and we should promote the formation of a global framework; this is what we should do first. Secondly, we should strengthen international exchange and make use of generative AI technology, so the sharing of experiences is very important. Thirdly, we should strengthen and deepen cooperation and jointly enhance the safety, reliability, controllability, and fairness of AI. And so, to use AI for empowerment and transition, and to jointly promote AI for the benefit of mankind and of the SDGs, I would like to take this opportunity to extend an invitation to all of you to attend the World Conference on AI, to be held in Shanghai this coming July. In particular, at this conference the ITU, my ministry, and the Shanghai Municipal Government will co-host a forum entitled AI for Good. I am ready to meet with everyone in Shanghai to jointly push for AI for good and for the benefit of people all over the world. Thank you.
Ebtesam Almazrouei:
Thank you, Your Excellency. Indeed, we look forward to joining you in Shanghai to foster our collaboration and to discuss how we can harness the power of AI across all the 17 SDGs. Going back to our main second-round discussion: how can you benefit from your peers’ experience in terms of regulating AI frameworks, and what are the best practices that you want to put into implementation in your government and in your country?
Dohyun Kang:
Yes. Actually, the Korean government wants to contribute to all of AI governance, but the most specific thing we want to do, first, is more specific AI safety standards. There are four national AI safety institutes up to now: in the United States, Canada, the United Kingdom, and Japan. Our government will also open our AI safety institute at the end of the year. Then the five AI safety institutes can work out how to do the testing, in the private sector or the public sector. In the short term, we want to focus on that. The second is the long-term approach, which is related to inclusivity. The Korean government has earned a high reputation in terms of inclusivity. We have worked on this for almost 30 years, so we know what is good and what is bad, where we failed, and where we sometimes succeeded. That is the know-how of our policymakers. We also understand the situations of other countries. As you know, Korea is in a very unique situation: we want to go to the global market, we want to be one of the best nations in terms of AI, and we also want to contribute to inclusivity with other countries, because we know its value. Next, we will launch another big project with ASEAN, a big project that came up after I met with another country’s minister. Those are our actions, and we want to reflect your opinions in our policies. We will try to take more steps and contribute more practical things to this discussion. That is the strategy of the Korean government. Thank you very much.
Ebtesam Almazrouei:
Thank you, Your Excellency. We are short on time, so I would really appreciate it if everyone could stick to one minute.
Alan Davidson:
I will be very quick. I will say, first of all, there are so many good ideas, and we have tried to incorporate many of them. Our approach, as you heard, has been voluntary commitments, a very comprehensive executive order, tooling, research, and ultimately governance activities with immediate effect. Probably the one thing that I see others have, and that we have not pursued yet but the President has called for, is legislation: bipartisan legislation in the U.S. to further harness the power of AI while keeping Americans safe. Congress will determine the exact approach for us, but the President has been clear that he wants to see legislation that incorporates principles of trust and safety and that gives us the tools we need to make sure we are regulating properly. I will say it is early days for this conversation, and we have a lot to learn. I do feel that together we can move forward on these big issues around AI with urgency and with wisdom. That is our goal. Thank you so much.
Juha Heikkila:
So, two things emerge from this discussion. First, there is a lot of attention on AI now, so we have to seize the moment and make AI work for good. I think this is something that we share here and something that we can certainly support; how we go about it in cooperation, of course, we will see where we can converge and work on that. Second, as the Japanese Vice Minister mentioned, the implementation details differ. Different countries and jurisdictions take different decisions. We in the European Union have decided to legislate; we have hard law. There are other countries which have followed suit and are introducing or preparing legislation, while yet others have decided to rely on other types of measures. This is, I think, an important point: each jurisdiction will choose whatever suits it best and what it feels is the most appropriate way of doing it, but the compatibility of these different governance approaches, and the international approach, is something that will be important. This was mentioned here, and it is also what we want to work towards; I already mentioned that the international aspects are part of our EU AI strategy. As the President of the European Commission stated in her State of the Union speech in September, one key cornerstone is that we work to guide innovation, to set guardrails on AI, and to work together with other jurisdictions on guardrails and on international governance of AI. Thank you.
Ebtesam Almazrouei:
Thank you, Juha. Well, we are concluding the second session. What has been discussed here is how we can all collaborate, and the importance of collaboration and cooperation to foster the development of inclusive AI governance for AI for good. Thank you for participating in this panel. I hope that your colleagues here can take these remarks, and that every country can start to build based on your best practices and experience. Thank you. Thank you.
Speakers
AD
Alan Davidson
Speech speed
161 words per minute
Speech length
1089 words
Speech time
407 secs
Report
The speaker commenced by expressing gratitude towards the ITU and Secretary-General Doreen Bogdan-Martin for orchestrating a successful event, as well as acknowledging participants’ dedication to exploring the potential of AI in achieving shared goals and the Sustainable Development Goals (SDGs).
The central belief is that AI, if innovated responsibly, can significantly revolutionise the economy and impart considerable benefits. However, it is crucial to develop strategies to counteract the risks associated with AI, including concerns over safety, security, privacy, discrimination, and the spread of bias and misinformation.
The speech addressed the threat of widening inequalities should AI’s distribution not be equitable. In response to this, the speaker detailed actions taken by the US, highlighting a sense of urgency to leverage the potential of AI whilst recognising its dangers.
A key move by President Biden involved securing voluntary commitments from top AI companies to ensure the safety of AI systems before their launch—marking a step towards active AI risk management. The US government has also instituted an AI Executive Order to foster trust and innovation, providing the foundation for a comprehensive programme encompassing research, development of tools, and policy reform to tackle AI’s risks.
The establishment of the US AI Safety Institute, dedicated to understanding AI’s diverse security risks, stands as a significant initiative in this regard. Moreover, the speaker emphasised the importance of transparency in AI models, particularly those with dual-use potential and announced the release of a vision paper for the safety institute by the Secretary of Commerce.
On the international stage, the speaker underscored the US’s dedication to building upon years of international work on AI governance, spotlighting AI’s role as a pivotal tool for advancing nearly all SDGs—even against the backdrop of some goals experiencing a reversal or stagnation in global progress.
AI is recognised as a key driver in sectors such as agriculture and disaster prediction, which are vital to international advancement. This commitment was evidenced by the US leading the charge in the adoption of a U.N. General Assembly resolution on AI, which offers a framework for utilising AI to promote economic and social improvement while upholding human rights.
The speaker communicated the urgency felt by the US government in tackling today’s AI challenges and their determination to harness public interest to ensure AI engenders widespread benefits. As for legislation, the speaker mentioned that despite a lack of specific AI laws in the US, President Biden has recommended bipartisan legislation to manage AI’s advantages safely.
In summing up, the speaker emphasised the collaborative and evolving dialogue surrounding AI, which is still in the nascent stages. There is a collective ambition to swiftly and wisely overcome the hurdles presented by AI, advocating unified efforts to make the AI revolution inclusive and advantageous for all.
DK
Dohyun Kang
Speech speed
123 words per minute
Speech length
996 words
Speech time
486 secs
Arguments
The UK Safety Summit goals are still valid
Supporting facts:
- Seoul Summit is a developed version of the Bletchley Summit
- Seoul Summit included diverse topics such as safety, innovation, and inclusivity
Topics: AI Safety, Seoul AI Summit, International cooperation
Seoul Summit emphasized the importance of testing and measuring AI safety
Supporting facts:
- Seoul declarations adopted at the leader’s section of the Seoul AI Summit
- Leaders emphasized importance of addressing AI side effects
Topics: AI Testing, AI Measurement, AI Safety
AI Safety Institutes are important for global cooperation
Supporting facts:
- Seoul Summit recommended networking between AI Safety Institutes
- Seoul declarations recognized international cooperation enhancement
Topics: AI Safety, Global Cooperation, AI Safety Institutes
Korean government established AI principles and guidelines
Supporting facts:
- Korea followed OECD AI principles and established its own
- Government released guidelines and checkpoints for developers and operators
Topics: AI Ethics, Government Policy, AI Regulation
International cooperation should focus on inclusivity and support for AI south
Supporting facts:
- Seoul Summit stressed contribution to digital south
- Ministerial meeting addressed democratic environment and cultural aspects
Topics: Inclusivity, International Aid, Digital Divide
Report
The Seoul AI Summit, widely recognised as an extension of the pioneering efforts initiated by the UK’s Bletchley Summit, has made significant strides in promoting an agenda that strongly emphasises AI safety and international cooperation. Its varied thematic scope encapsulated a range of topics and successfully echoed commitments seen in several Sustainable Development Goals (SDGs), notably Industry, Innovation, and Infrastructure (SDG 9); Partnerships for the Goals (SDG 17); Peace, Justice, and Strong Institutions (SDG 16); and Reduced Inequalities (SDG 10).
The predominant sentiment emanating from the summit proceedings is overwhelmingly positive, signifying a smooth transition of objectives from the UK Safety Summit and putting a spotlight on the need for comprehensive strategies for AI testing, measurement, and proactive mitigation of potential side effects.
Importantly, the Seoul declarations, born out of productive discussions at the summit, have reinforced earlier initiatives and signalled the requisite for a detailed and refined action plan focused on AI safety. The inclusion of AI side effects in the discourse not only demonstrates recognition of these issues but also a determined effort to tackle them head-on, with summit leaders emphasising the development of safety protocols and measures.
Integral to the summit’s achievements was the promotion of collaboration between AI Safety Institutes, underscoring the vision of SDG 17’s goals for global partnerships. This collaborative approach is designed to synergise global efforts and enhance a cooperative environment for the safe and ethical advancement of AI technologies.
The summit also made substantive progress in addressing the digital divide, investigating ways to foster inclusivity and provide support to less developed regions—a step towards a more equitable digital landscape that meets aspirations outlined in SDG 10. Deliberations in the ministerial meeting also broached the importance of maintaining a democratic environment and considering cultural diversity, reflecting a comprehensive perspective on AI’s societal role.
Regarding South Korea’s proactive engagement, the government showcased its dedication to the ethics of AI by embracing OECD AI principles and establishing corresponding national guidelines. These guidelines offer a framework for both developers and operators, promoting an ethos of responsibility and transparency in line with SDG 16.
In totality, the outcome of the summit is unanimously viewed as a significant positive development, reinforcing international determination to establish an AI ecosystem that champions innovation as well as anticipates and mitigates potential risks through collective expertise and a sophisticated strategy.
In summary, the Seoul AI Summit has undoubtedly received wide acclaim for its role in advancing the conversation on AI safety, placing greater emphasis on striking a balance between rapid technological innovation and the ethical considerations inherent to this rapidly evolving domain.
The summit has firmly laid down expectations for ongoing collaboration and strategic action in the field of AI, shaping the narrative for future global deliberations and safety regulations within the AI technology landscape.
EA
Ebtesam Almazrouei
Speech speed
136 words per minute
Speech length
1149 words
Speech time
506 secs
Arguments
Measuring the effectiveness of the AI Act is important
Supporting facts:
- AI Act provisions and safeguards increase trust
- Increased trust is essential for AI uptake and realizing benefits
Topics: AI Regulation, AI Adoption, AI Trust, AI Benefits
Interest in hearing about the next steps for the U.S. regarding the executive order on AI
Supporting facts:
- U.S. has a voluntary commitment approach by the private sector
- An executive order on AI has been issued
Topics: AI Regulation, AI Policy, United States AI Strategy
AI for good should be the primary goal
Supporting facts:
- Harnessing AI across all 17 SDGs is crucial
- Global collaboration is necessary for achieving AI for good
Topics: AI innovation, Sustainable development
China has implemented AI for good initiatives focusing on human-centered approaches
Supporting facts:
- Promoted the initiative ‘AI for good’
- Creating policies for open and transparent work with AI
- Adopted first set of AI projects with principles and regulations to avoid risk
Topics: AI governance, Human-centered AI, Risk prevention in AI
Japan is progressing in international AI governance and collaboration
Supporting facts:
- Japan chaired G20 in 2019
- Launched Hiroshima AI Process at the G7 in 2023
Topics: International AI Collaboration, Hiroshima AI Process
Countries have started their own AI Safety Institutes.
Supporting facts:
- Countries like Korea, Japan, and the USA have established AI Safety Institutes.
- AI Safety Institutes play a critical role in developing safe AI frameworks and practices for societies and governments.
Topics: AI Safety, National AI Strategies
Each country should establish a sandbox or institute to test AI frameworks.
Supporting facts:
- A sandbox allows for testing AI technologies in a controlled environment.
- AI frameworks need to be tailored to the societal and governmental needs of each country.
Topics: AI Governance, AI Testing Environments
Collaboration and cooperation are crucial for the development of inclusive AI governance aimed at AI for good.
Supporting facts:
- The need for collaboration was discussed to foster the development of inclusive AI governance.
- The importance of collaboration is to ensure AI is used for good.
Topics: AI Governance, International Cooperation
Report
The current discussions surrounding Artificial Intelligence (AI) governance exhibit a uniformly positive outlook, underlined by a general agreement regarding the importance of establishing trustworthy, secure, and internationally collaborative AI frameworks. Such a focus is essential not only to harness the wide-ranging advantages that AI has to offer but also to align with multiple Sustainable Development Goals (SDGs).
The AI Act of the European Union is often lauded as a pivotal piece of regulation aimed at instilling trust—a key element for the widespread acceptance and utilisation of AI technologies. With defined provisions and safeguards, the Act is recognised as critical in unleashing AI’s capabilities, in line with SDG 9, which champions industry, innovation, and infrastructure.
This optimism is further solidified by the consensus on the importance of assessing the AI Act’s effectiveness, pivotal for ensuring robust and just policy implementation, reflective of SDG 16 which promotes peace, justice, and strong institutions. In contrast, the sentiment towards AI regulation in the United States portrays a sense of neutrality, with interest centred on the upcoming developments post the implementation of an executive order on AI.
This suggests a more measured or speculative stand on the topic within the US, awaiting the forthcoming actions post-policy introduction. Global collaboration emerges as a recurring and prominent theme throughout the discussions, particularly in resonance with SDG 17 that stands for partnerships in achieving the broader goals.
The narrative strongly advocates for the leverage of AI congruent with all 17 SDGs, positing that the quest for ‘AI for good’ must be the predominant goal. This emphasises the significance of worldwide collaborative endeavours as being paramount to steering AI as a powerful catalyst for good.
China’s focus on AI, centred on humanity, is heralded through its ‘AI for good’ policies and projects that value transparency and mitigate risks, mirroring the objectives of SDGs 9 and 16. Japan’s forward strides in the realm of international AI governance also garner praise.
With the initiation of the Hiroshima AI Process at the G7 in 2023 and its role in the chairmanship of the G20 in 2019, the commitment to governance frameworks that nurture global cooperation is clearly signified, correlating with the innovative and sustainable tenets of SDGs 9 and 17.
The dialogue includes national initiatives like the establishment of AI Safety Institutes, with Korea, Japan, and the USA at the forefront. These institutions are pivotal for sculpting AI frameworks and practices that promote safety, corresponding with SDG 9. Ebtesam Almazrouei’s recognition of such endeavours intimates that adopting shared best practices is a prudent approach.
The creation of controlled testing environments for AI technologies, such as sandboxes, is also a point of emphasis. This enables tailored AI governance systems that accommodate the distinct societal and government requirements of various countries, addressing the infrastructure of innovation in SDG 9 as well as the pursuit of strong institutional frameworks in SDG 16.
To conclude, the sentiment permeating AI governance discussions is one of anticipation and envisions AI development that espouses trustworthiness, safety, and global coordination. The unifying sentiment suggests that through combined initiatives, transparency, and robust governance, AI can be directed to serve the global good, achieving sustainable development across diverse areas.
Exemplifying a cooperative spirit that transgresses boundaries, this discourse aligns with the imperative need for a globalised approach in the advancing AI era.
HY
Hiroshi Yoshida
Speech speed
107 words per minute
Speech length
716 words
Speech time
400 secs
Report
The acknowledgement of Thomas’s contributions to the adoption of the Framework Convention by the Council of Europe initiated a dialogue that swiftly transitioned towards examining the dual nature of Artificial Intelligence (AI). This encompassed AI’s vast potential tempered by considerable risks, such as biased learning data, disinformation propagation, cyber threats, and notably, the advent of deepfakes.
The latter has been particularly emphasised by the Bangladesh Minister in relation to disaster scenarios and their implications for electoral integrity. Notwithstanding the recognised risks, the discourse conveys a resolute confidence in AI’s transformative capabilities. The consensus is that efforts should pivot away from a defensive posture and towards comprehending and alleviating AI-associated risks to fully leverage its capabilities.
Reflecting on the evolution of international discussions on AI, the acknowledgment goes back to the G20 AI principles, collaboratively established at the 2019 Osaka Summit. These principles have laid the groundwork for constructing AI policies. The conversation then turns to present AI developments, acknowledging the technological strides made in the past four years, with particular focus on generative AI’s emergence in the last two years.
The Hiroshima AI process was introduced in 2023, marking the pursuit of an interoperable governance framework. Interoperability is underscored as a versatile approach, allowing nations to synchronise certain actions while respecting sovereignty in their implementation strategies. A key recent accomplishment is the formulation of a comprehensive policy framework from the Hiroshima AI process.
This framework provides guidance for the wider AI stakeholder community—developers, service providers, and users—as well as a bespoke code of conduct for AI developers. Under Italy’s current presidency, a dialogue on implementing this framework continues, with a particular focus on creating effective monitoring mechanisms.
Moreover, the recent inception of the Hiroshima AI Process Friends Group, with participants from over 50 countries, signifies a burgeoning global commitment to collaborative AI governance. Nationally, the establishment of an AI Safety Institute in February represents a measured step towards evaluating AI risks and enhancing coordination among analogous institutions globally.
This initiative highlights the necessity for harmonised safety assessments to ensure international compatibility and recognition. Looking to the future, the speaker champions the exchange of information and inclusivity in AI governance conversations. A multi-stakeholder approach is essential for developing AI governance frameworks and fostering comprehensive understanding and collaboration.
Emphasis is given to recognising and understanding various national efforts in AI policy, contributing to a globally coherent discourse on the future of AI governance.
I
Introduction
Speech speed
116 words per minute
Speech length
244 words
Speech time
126 secs
Report
The forthcoming panel discussion, titled “The State of Play of Major Global AI Governance Processes,” is set to explore the intricate nature and current state of global AI governance mechanisms. The conversation will be chaired by Dr. Ebtesam Almazrouei, who is revered for her considerable contributions to AI development, including the creation of the Middle East’s first open-source large language model, Falcon 40B, as well as the high-impact Falcon 180B model, both introduced in 2023.
The panel features a distinguished group of government officials from around the world, each offering significant expertise regarding the policy and regulatory frameworks for AI in their respective areas:
– His Excellency Hiroshi Yoshida, from Japan’s Ministry of Internal Affairs and Communications, presents a wealth of experience following his 2022 appointment.
– Ambassador Thomas Schneider, Director of International Affairs at Switzerland’s Federal Office of Communications, brings the Swiss perspective to the table.
– His Excellency Shan Zhongde of China’s Ministry of Industry and Information Technology will provide insights into China’s strategic approach to AI.
– His Excellency Do-Hyun Kan shares South Korea’s perspective through his role in the Ministry of Science and ICT, with a focus on policy coordination for emerging technologies.
– Alan Davidson, serving as the U.S. Assistant Secretary of Commerce for Communications and Information, adds an American angle to the conversation.
– Juha Heikkilä, the AI adviser within the European Commission, offers a comprehensive view of the AI regulatory landscape in Europe.
The discussion is set to be a thorough examination of the varying strategies, challenges, and collaborative efforts in AI governance.
Key topics will undoubtedly include ethical questions, data privacy, international cooperation, and the delicate balance between fostering innovation and ensuring public trust and security. This event is significant not only because of the high-profile nature of the participants but also because it highlights the shared understanding of the profound influence of AI.
With each region facing distinct policy obstacles, the exchange of ideas will emphasise the urgency and intricacy of creating coherent and adaptable global AI governance structures. The panel is expected to feature a spectrum of arguments reflecting both national experiences and overarching global needs.
It is likely to reach a consensus on the need to develop a common framework that meets the dual requirements of leveraging AI for economic and social advancement and controlling risks through responsible, inclusive, and clear governance. In summary, this panel is anticipated to offer a rich amalgamation of perspectives and to encourage collaborative efforts.
Once concluded, it will probably underscore the ongoing need for dialogue and cooperative action to effectively manage the international AI governance landscape.
JH
Juha Heikkila
Speech speed
165 words per minute
Speech length
1184 words
Speech time
430 secs
Report
The European Union’s AI Act is a pioneering regulatory framework for overseeing artificial intelligence, marking the first comprehensive and binding set of rules for AI applications that is consistent across the EU’s 27 member states. It applies to both public and private entities that are engaged with AI, setting it apart from less encompassing frameworks seen internationally.
In terms of implementation, the Act will adopt a phased approach to ensure compliance. Once the Act enters into force, a month after its signing and publication, its prohibitions must be complied with within six months. This is followed by provisions targeting general-purpose AI models, which take effect after twelve months, while high-risk systems are afforded a longer transition period of either 24 or 36 months, allowing providers sufficient time to adapt to the new requirements.
The AI Act employs a risk-based approach, prioritising the context and use-cases of AI, thus concentrating on the potentially harmful applications of technology instead of the technology itself. By doing so, the Act aims to mitigate risks more effectively without imposing sweeping regulations on all AI indiscriminately.
Enforcement of the AI Act will utilise a proven dual system within the EU, involving both pre-market conformity checks and post-market monitoring by national bodies. High-risk AI systems will be checked for compliance by notified bodies before they can enter the EU market, and national market surveillance authorities will be responsible for their continuous supervision.
This enforcement draws upon the EU’s successful models for product safety. Central to the oversight structure is the European AI Office, tasked with coordinating and supervising the uniform application of the regulation. It will play a particularly important role in overseeing general-purpose AI models and can assess and mandate measures to address systemic risks.
This body is also expected to address wider implications of AI on health, fundamental rights, and international relations. For assessing its own effectiveness, the Act considers a variety of indicators, including technical measures such as the volume of systems undergoing assessments and the registration of approved systems in an EU-wide database.
Importantly, the Act also targets the less tangible goal of cultivating public trust in AI systems, with trust regarded as essential for the broad adoption of AI and the optimisation of its societal benefits. While the EU has taken a legislative path to AI governance, it is acknowledged that other regions may prefer different approaches.
A key goal is international collaboration, ensuring that the global management of AI technology and innovation operates harmoniously across various regulatory environments.
SZ
SHAN Zhongde
Speech speed
95 words per minute
Speech length
766 words
Speech time
481 secs
Report
The expanded summary underscores the concerted international endeavour to harness artificial intelligence (AI) as a force for societal benefit, emphasising the necessity of a human-centred approach to its progression. The AI for Good initiative, which took up this mission the previous year, has remained dedicated to prioritising ethical considerations and social welfare in AI development.
The initiative stresses the requirement for risk prevention systems and policies that underscore transparency, advocating for the adoption of open data protocols to support this transparency. China’s strategic and theoretical engagement with AI development was notably highlighted. It has laid down a rigorous theoretical foundation considered crucial for shaping AI’s future.
In accordance with this, extensive research and algorithmic advancements have been integrated into public policy-making and regulatory frameworks. Practical implementations of AI in critical public service sectors such as finance, healthcare, and transportation are being closely monitored, with established standards to guide the deployment of AI technologies.
In April, China achieved a significant advancement by launching AI projects to establish principles aimed at risk mitigation, promoting fairness in commerce, and preventing discrimination against users through standardisation efforts. The implementation of these strategies facilitated the establishment of data notification systems that enable comprehensive assessments by stakeholders across various sectors.
China’s active role in the global AI dialogue, including its involvement in the AI action plan for well-being and engagement with the Association of Southeast Asian Nations (ASEAN), further cements its position as a significant collaborator in AI governance. Foreign Minister Wang Yi addressed the global conversation on AI governance by providing three strategic directives.
He proposed consolidating norms and standards to foster a unified global AI framework, advocated for the dissemination of generative AI technology via international exchanges, and called for deepened international cooperation in developing AI systems that are safe, reliable, fair, and controllable.
The invitation to the World Conference on AI in Shanghai symbolised the collaborative spirit driving these efforts, presenting an opportunity for ongoing dialogue and progress aligned with the AI for Good initiative’s objectives. The assertion ‘AI should be for good’ resonated throughout the conversations, reinforcing the narrative that AI development must prioritise human welfare and contribute to global societal improvement.
The summary depicts a shared commitment to position AI as a constructive tool for humanity, with the ITU Secretariat’s insights marking the direction and achievements of the AI for Good initiative.
TS
Thomas Schneider
Speech speed
187 words per minute
Speech length
1133 words
Speech time
364 secs
Report
The speaker advocates for a nuanced perspective on AI governance, drawing a parallel with the multifaceted regulation of engines. The argument is made against seeking a singular, overarching institution or law to tackle the complex challenges AI presents. Instead, the approach mirrors the regulation of engine technology – which is not regulated in isolation but in the context of the vehicles and machinery that utilise it, addressing operation, infrastructure and the safety of individuals affected.
These regulations are context-sensitive and harmonised to varying degrees as needed; national traffic regulations, such as the UK’s, differ from one country to the next, whereas aerial navigation rules are stringently harmonised worldwide. Highlighting risk and impact as priorities for AI governance, the speaker recommends a context-based approach that accounts for cultural and economic factors.
Global discussions are emphasised as crucial for understanding the risks AI poses and the values in need of protection. The Council of Europe treaty is presented as a prime example of a framework ensuring that human rights, democracy, and the rule of law prevail in the age of AI while fostering innovation.
The broad membership of 46 states is noted, along with the council’s distinction from the European Union and its openness to any nation committed to human rights, democracy, and the rule of law. The treaty, negotiated by 57 states, aims to safeguard the relevance of human rights in the context of AI.
Its flexibility and adaptability are praised for bridging institutional, cultural, and regional practices. A cooperative global governance model encompassing both governments and stakeholders is advocated, one that caters to diverse cultural and situational needs. The speaker cites the Council of Europe Convention as a tool for unification based on shared values while allowing each country to adapt it to their context, similar to the modular cooperation of the Cybercrime Convention.
The convention is promoted as an empowering mechanism, increasing transparency and accountability. It ensures that, where governments or companies fall short in protecting individuals, mechanisms are in place to enforce rights and deter harm. In conclusion, the speaker acknowledges the complex array of institutions governing AI matters, urging for enhanced cooperation and coherence among bodies like the ITU and UNESCO.
The goal is to identify governance gaps and strengthen existing frameworks rather than constructing new ones from scratch. The speaker implicitly understands that governance must protect fundamental values and rights amidst rapid AI advancements while respecting global diversity and promoting the synergy of innovation with legal and ethical standards.