Global cyber capacity building efforts

31 May 2024 09:00h - 09:45h

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Full session report

Expert panel addresses the societal impact and future of AI governance

An expert panel convened to discuss the societal impact of Artificial Intelligence (AI), exploring its transformative potential and associated challenges. Daniele Gerundino traced AI’s evolution from its conceptualisation in 1956 to recent breakthroughs in machine learning and deep learning. He underscored AI’s integral role in Industry 4.0 and its transformative potential for developing countries, particularly through innovative technologies like additive manufacturing.

The panel recognised the substantial risks posed by AI, including job displacement, biases in training datasets, and a lack of accountability in AI decision-making. Gerundino highlighted the urgent need for governments to establish comprehensive policy frameworks to mitigate these risks, citing legislative initiatives such as the EU’s AI Act, the US’s executive orders, and China’s evolving regulations. He also referenced the G7’s AI Hiroshima Process, which aims to establish international guiding principles and a code of conduct for AI.

The importance of international standards and collaboration in making AI principles operational and promoting regulatory harmonisation was emphasised. Gerundino advocated for the creation of a research organisation for general intelligence to mitigate the oligopolistic control over AI development and to encourage international collaboration.

Jon France from ISC2 discussed the cybersecurity workforce’s capacity building, noting AI’s potential to alleviate the skills gap by automating tasks and enhancing job functions. He also highlighted AI’s appeal as a field that could attract new talent to cybersecurity. France raised concerns about the risks created by AI, such as deepfakes and misinformation campaigns, and called for sensible AI regulation, with cybersecurity professionals involved in setting such regulations.

Moctar Yedaly addressed the unique challenges and opportunities AI presents for Africa, emphasising the need for job reclassification and skills development as AI transforms the job market. He stressed the importance of including African perspectives and indigenous knowledge in AI development to ensure that AI solutions are adapted to the continent’s context and problem-solving approaches.

Katharina Frey Bossoni from the Swiss government discussed AI’s dual nature as a tool that can enhance cyber resilience and be used to conduct more effective cyber operations. She mentioned the upcoming Global Conference on Cyber Capacity Building (GC3B) in Geneva as a forum to discuss and learn from experiences in enhancing cyber resilience training and to anticipate future technological advancements.

Audience participant Jason Keller highlighted the importance of organisational culture, collaboration, and basic cyber education for policymakers in effectively managing cybersecurity and AI challenges. He argued that foundational elements are crucial for addressing the complex issues posed by AI.

The panel concluded that AI offers significant opportunities for societal advancement but also poses serious risks that require a whole-society approach for adoption. This approach involves not only technologists and security practitioners but also policymakers and the general public. The GC3B in Geneva was recognised as an important event to further these discussions and work towards a common understanding and action plan for AI governance and capacity building.

Noteworthy observations from the discussion included the recognition that AI is not a new technology but is currently in the public consciousness due to developments like ChatGPT and large language models. The panel also acknowledged that while AI could be used to enhance cyber attacks, it could equally empower defenders. There was a consensus that AI should be regulated, but regulation should be sensible and involve cybersecurity professionals in the process. The discussion also highlighted the need for multidisciplinary perspectives and multi-stakeholder engagement in developing AI standards, reflecting the complexity and far-reaching impact of AI on society.

Session transcript

Maarten Botterman:
We have very good speakers, and the intent is to speak briefly and allow you to interact with us as well. First of all, Daniele Gerundino, a research associate at the University of Geneva's Institut de Gouvernance de l'Environnement et de Développement Territorial. He's done work… sorry, the microphone next to you, you can disable it; preventing double microphones. Okay. We also have Jon France online. Jon, can you hear me now? Super. That's excellent. That's good news. Jon, from ISC2, is a security and risk management expert and also very much aware of what it means in terms of skills development and awareness. Moctar Yedaly, the Africa Regional Director for the GFCE, but also the former Minister of Digital Transformation of Mauritania, and very well placed to also see the regional impact of new technology developments. And last but not least, Katharina Frey Bossoni, the Deputy Head of the Digitalisation Division at the State Secretariat for Foreign Affairs. She also works closely with the GFCE, including in preparation of next year's conference, which will actually take place in Geneva; more about that later. So with that, the focus is really on emerging technologies. They come, no matter what. But they can also be benefited from, and how do we make sure that we benefit from them for the good? That focus is what we are beginning to explore. Of course, AI is not the only emerging technology: quantum computing is coming up, and IoT deployment is becoming wider and wider, also in interaction with each other. But the focus today will be on AI. And I'm very happy to give the word first to Daniele, who has been working on recommendations to the G7 on how to deal with this. Daniele, can I give you the floor?

Daniele Gerundino:
Thank you very much. It's really a pleasure to be here, in a place I know very well because I've worked closely with ITU for many, many years in the past; I've been for 20 years Assistant Secretary-General at ISO. Okay, but in the interest of time, we move immediately. Oh, sorry, I need to share the screen in Zoom to get the presentation. Okay, I hope you can see it. Great, great, okay, thank you. In the interest of time, I know we have very limited time and also want a debate at the end, so I'll go quickly.

So, first, I think many people know the origins of artificial intelligence. I like to remind people that this goes back to 1956, when a group of incredible people, early computer science and information theory experts, got together in a summer workshop at Dartmouth College, with the idea that any aspect of learning, and indeed any activity of the human mind, could be described so precisely that a machine could be developed to emulate it. But then it took many, many years to get to where we are: it took the exponential development of digital technologies, the growth of the internet and the explosion of data available over the internet to arrive at today's predominant form of AI, based on artificial neural networks, machine learning and deep learning. The recent breakthroughs actually happened in 2012 and then in 2018, and those shaped the artificial intelligence systems we have around us today. This is a fascinating story, but we do not have time to talk about it. The latest breakthrough was in 2018 with the emergence of generative AI and the GPTs, the generative pre-trained transformers, which OpenAI made widely available with the third GPT.

Now, let me spend one word on the nature of these technologies, because it is very important to consider when we talk about safety, security and trustworthiness: the difficulty of making these technologies safe is deeply related to their very nature. I just want to underline and remind you of that. They are demonstrating unprecedented capabilities in a broad variety of areas, and are reshaping ways of working, business processes, professions, and research and development. Their transformational potential can be very, very beneficial to society. For example, AI is a key ingredient, a key component, of Industry 4.0. There are many other technologies, and here I make a very short digression: some of these technologies are also of enormous interest and potential, if applied well, for developing countries. Just think, for example, of additive manufacturing, or 3D printing; this can be a game changer in a lot of areas. This is a publication I've written for UNIDO, and I recommend you read it if you have time, because it brings together many elements, particularly from the developing countries' perspective. But coming back to AI: here in Switzerland, at EPFL, a robot was developed some months ago essentially by ChatGPT; all the creative elements were actually suggested by ChatGPT. Of course, there was a team of researchers working on that, but it is amazing, as are many other things you have heard about. At the same time, these technologies pose a substantial threat to our society.
We have a list here; I'm not going through it, as I guess many of you realize these are very, very serious and, to a certain extent, even existential threats to humanity. Some examples: the horrors of the use of AI, particularly in war, for autonomous weapons; deepfakes; the impact on workers and jobs; disinformation; and so on. The other scary thing is that the development of the core technologies is going at unprecedented speed, and it is in the hands of just a bunch of players who, honestly speaking, are driven by the quest for power and profit. I'm not comfortable with that, but that's the way it is today.

Urgent actions by governments are required, and this means comprehensive policy frameworks that can be supported and complemented by other initiatives to manage these types of risks. There are legislative initiatives in key jurisdictions. In the EU, you have the AI Act, which has been approved and is going to enter into force; it is the first comprehensive AI law, with a lot of interesting things. In the US, you have the executive orders that direct the federal agencies to do certain things. In China, things are evolving. But here I just mention what is being done in the framework of the G7. Last year the G7 launched the so-called AI Hiroshima Process, and at the end of the year they agreed on a general policy framework, publishing the Hiroshima Process International Guiding Principles and Code of Conduct. These are important documents, but still at a very general level of indication of what should be done. It is written in these documents that regulations and legislative initiatives from governments must follow. And, as I will say in a moment in my concluding remarks, standards matter here too: voluntary standards developed by standards developing organizations such as ISO, IEC, ITU and many others are very important to give flesh and bones to these principles, to make them operational, and also to provide a layer at the international level for the harmonization of regulation.

This is being taken forward under the Italian presidency of the G7, progressing with the implementation of the Hiroshima Process. And this is the policy brief, which you can find on the internet among the documents of the G7, that I've written with a group of other authors from different backgrounds, representing different types of stakeholders. We focused attention on four recommendations to the G7: strategically promote standardization, scientific collaboration, education, and the empowerment of citizens and the protection of personal data. It is not only these things that need to be done, but these are four important areas of development that we recommended to the G7. In particular, and this is my bread and butter, the standardization part: there is a specific reference in the guiding principles, guiding principle 10, which is to promote the development and use of international standards. Standardization is very important because it allows you to go into details. You have principles saying you should do this, this and that; for example, risk management approaches and systems need to be developed.
Now, the point is that there are a lot of things that need to be specified in order to make these principles and the code of conduct operationally applicable. In this sense, we said that the G7 countries, as a core, as a starting point (this should of course be extended at least to the G20 and then to all countries of the world), can provide an invaluable contribution to standardization by shaping strategic directions and underlining priorities that need to be addressed. This is something that is very difficult in the standardization world, because it is mostly a bottom-up process, and you need to take into account a lot of interests and issues, especially in a field like this one. So it is very important to have strategic directions and priorities that are aligned with the government perspective, which hopefully will lead to AI for good. The other thing I'd like to underline is the suggestion of actually creating, developing in some way, a research organization for general intelligence, bringing together research institutes, academic knowledge and so on from all over the world, in order to provide some mitigation of the oligopolistic framework you have today, and also to share and make concrete elements that can be distributed. You can find the details there. There are also recommendations regarding education and personal data protection.

One thing I mentioned happened last week, and I think it is a good sign: at the AI Seoul Summit, world leaders agreed to launch a network of safety institutes. This is very important. These are the agencies being developed by various governments to manage and supervise AI activities, and the idea here was to link these agencies and to start an important process of global collaboration on these issues. They also stated that they need enhanced collaboration to develop human-centric and trustworthy AI. From this perspective, the last thing I would say is that the engagement of all countries, with a multi-stakeholder and multidisciplinary perspective, is essential. Don't think of standards as just technical standards. There are a lot of important technical issues to be addressed to make this technology move forward, but you also need to take into account concerns and needs from a multidisciplinary perspective. When you talk about human-centred values, that's a big, big issue: yes, you need to embed them into systems, but there is a lot of discussion about that, and many, many other things that really require a multidisciplinary perspective and the engagement of stakeholders from the global South; without that, it won't be good for everybody. So thank you very much for your attention. I'm happy to discuss these themes with you later.

Maarten Botterman:
Thank you very much, Daniele, for an excellent introduction. And indeed, also on the last aspect you mentioned: standards may mean different things to different stakeholder groups, and you need all stakeholder groups. The danger of working with multiple stakeholder groups is that they hear the same concept but think something different, so there needs to be a standard for that as well. And no doubt Jon France is very much aware of that too. For those who don't know ISC2: it is the world's leading collaboration of cybersecurity experts, and what they do is also standardize what a good cybersecurity expert looks like, for instance through certifications, teaching programmes and so on. So Jon, I'm really looking forward to hearing, from a cybersecurity professional's perspective, how we're looking at what's coming at us and how to deal with it, and what ISC2 is already seeing and doing about that. Please.

Jon France:
Thank you, and thank you for the invite to come and speak. I'll share a few viewpoints; I'm sure we'll get into discussion down the line, but four quick viewpoints from ISC2.

First, opportunities to create and improve capacity for secure use of AI. A lot of the data I'm going to quote comes from a study we did of a little over 1,200 of our members and experts in the field, and some of the data was kind of interesting. On capacity building: we know we need good, skilled practitioners in this space for safe, secure use of AI, and we tend to look at it through three lenses. One is the security of AI systems themselves, so things like data poisoning, model inversion, and what good development practices look like. The next lens is how we use AI in our profession, how it helps us do our jobs and make systems more secure. And then there is AI adopted and used in business, and how that's done in a way that not only benefits business but doesn't threaten it; you could call that the risk management piece of AI and the use of AI in the wider context.

If we come back to AI in the use of our profession: we start to see that things like task automation from AI systems are really going to help us. We know we have a workforce skills gap, a fairly large one; we have a study on that that you can go and freely download, but it runs to the millions of practitioners needed. Some of what AI promises is that we can get more efficient at our jobs. It'll take some of the more mundane tasks out and increase the signal-to-noise ratio that we deal with; we drown in lots of data, and it'll help sharpen that up for security purposes. It will also help improve the scale of our defences. So if the first one's an efficiency play, helping us do our jobs better, the next one is a capability play: it's going to do some things in the security context that as humans we might find difficult or time-consuming. Alleviating some of the workforce pressures is a mixture of the two, efficiency and capability. We can also see that emerging technologies like AI (and the first speaker nailed it absolutely right: it's not a new technology, it's just very much in the public consciousness because of ChatGPT and large language models) can attract talent as well. It's an interesting facet to work with, so it does have an attractiveness that can help in the security context. And then it can help redefine some responsibilities: we can rethink traditional roles and responsibilities in cybersecurity, coming back to that efficiency or capability. 56% of our respondents thought AI would make some job functions obsolete; that's fine, it's not going to remove the whole job. But 82% are optimistic that AI will enhance job efficiency, freeing them up for higher-value tasks.

Point number two: the risks created by AI and how we see them coming into the cybersecurity profession. We're in an election cycle in many jurisdictions and territories in the world, and deepfakes and misinformation campaigns were one core concern of our respondents, where AI could, let's just say, muddy the waters and present some challenges. In general, the respondents to our survey said it's probably the worst threat landscape we've seen in the last five years or so. It's complex, AI adds another dimension to it, and they are generally quite worried about what AI could be used to do from an attacker's point of view.
So the adversarial use of AI is a core concern. However, on the positive side, many of the respondents think we have a golden opportunity to utilize it for societal and safety good. There's a double-edged sword here: it could equip attackers, but it will equally equip the defenders as much, if not more. But we have to execute on that opportunity. The landscape is complex and AI is another component, but it is a system like any other system, so you can go and secure it with good cyber hygiene as well.

Point number three: current approaches to capacity building. We know there's a shortage, but we also know there's a deep interest in this topic, so it's not only a deficit but an attractor. And we're starting to see some clarion calls for good education and experiences in this. ISC2, along with other bodies, is launching workshops, courses and materials to educate on the benefits of AI, how to secure it, and how to use it in a security context. So we're quite buoyed by the fact that it is being used as an attractor to our profession.

There was another data point that was kind of interesting, around regulation: around three quarters of our respondents to the survey said AI should be regulated. It's one of those technologies that can be tricky and should be regulated. However, there was also a call for sensible regulation, and a very strong view that cybersecurity professionals should be involved in helping set that regulation. As the first speaker said, we're starting to see world jurisdictions move towards that, with the AI Act in the EU, the executive orders in the US, and other territories, and alongside that, emergent frameworks. We've seen ISO come out with several frameworks around risk management and use; I think it's the 42000 series and a couple of 23000-somethings there. We've seen other frameworks, the likes of OWASP on large language models. We've seen the cybersecurity framework incorporated into the AI risk management framework. So these things are being addressed from a framework and standards point of view.

My concluding comments: yes, it's opening up a threat, and it's complex, but we are rising to the challenge. We are excited to see how it's going to be used in a security context, and the cybersecurity profession is pleased to say we'll be in the vanguard of securing, developing and helping society adopt AI in a safe and secure way.

Maarten Botterman:
Thank you very much for that, Jon. A very quick question, because you were mentioning cyber hygiene, the awareness of, well, the capacity gap for cybersecurity professionals at the moment, also for AI building, which is a different one, and awareness of users, of course. We all remember the spam we got in the 90s, which was full of spelling mistakes to lure us; a deepfake is different. Now, are these emerging quickly as a service? Cyber threat capacities, AI-enabled, coming up as a service? Do you see that coming rapidly, or not?

Jon France:
So what I've seen, and what some of our practitioners have seen, is the use of AI on the adversarial side as a point solution: to write better phishing campaigns, to get better grammar, to produce more convincing things, rather than as systemic tooling that will perpetrate an attack automatically. There is a feeling that that will come, so adversarial use of AI will get deeper, but equally, on the defence side, it'll be used to detect, deter and interdict those kinds of attacks. So we could say that we're in a little bit of an arms race between the attackers and the defenders, but we are starting to see it being used less as a service and more to sharpen up existing methods.

Maarten Botterman:
Thank you very much. Yeah, very clear signs, both on the social engineering side and on the tech attack side, I hear you. Thanks for that. It's not only the countries with advanced capacities that are confronted with this; the whole world is confronted with it. And from that perspective, I'm very happy to have Moctar Yedaly with me, the regional director for Africa of the GFCE, but also someone with a deep internet and IT background in deployment in Africa. Do you see specific challenges coming in regions where catching up is also happening, and where the benefit from AI is so clear in applications like medicine, like the example given here, and even in agriculture? Please.

Moctar Yedaly:
Thank you, Maarten, and thank you to the previous speakers. As they say in America, that is a hard act to follow, but I'll try to contribute a little to it from the African perspective. Prior to that, allow me to thank you, Maarten, for accepting to moderate this GFCE session on AI and capacity building, to thank Chris and the whole team for having organized this on behalf of the GFCE, and to thank all of you for being here.

Now, the previous speakers have said it all; it's actually very difficult to say something that will not be repetitive. However, I will look at what has been presented from the African perspective, and the specificities that can be contributed here. As you said, AI presents very great opportunities for Africa to meet its agendas: Agenda 2063, the SDGs, the Digital Transformation Strategy, and the ongoing African Digital Compact. On the risks, as they have been mentioned, I associate myself with what has been said previously, be it on the cybersecurity side or the risks associated with AI. But from an African point of view, the concern will really be the bias due to the datasets, how data are really being used for training, and the potential discrimination embedded there; and moreover, the accountability of AI, in the sense that AI systems do not answer for or explain the decisions they are making. In implementing AI, it is actually critical and vital to embed security and safety by design right from the beginning, and that includes the specificities of each and every nation.

From an African point of view, one of the biggest risks will be linked to job displacement. Since some jobs will be disappearing anyway, this needs to be accompanied by what we call job reclassification or reorientation: if tasks were being performed by humans and are now delegated to AI, the jobs that are supposed to disappear need to be reoriented to something else, and hence people need to be re-skilled; the skills of people need to be adapted.

Africa's challenges are mainly that AI, at this point in time, seems not to be really a priority. Because of all the challenges Africans are facing, most of the time AI is still just at the inception stage. The African Union has just started drafting its own continental AI strategy; this is actually ongoing now. You see the gap between how fast AI has been adopted and is moving, and the fact that Africa's mother organization is only just starting to build that. The second challenge is that AI has not reached the cabinet or leadership level. It's mainly techie people, and AI initiatives are a bottom-up-driven kind of momentum rather than coming from the top; and it is understood that anything related to AI and digitalization needs the support and agreement of top-level people. Having said that, anything that will be built in AI, be it by those building it for Africa, by Africa building it for itself, or built globally, needs to take into consideration the specificities and indigenous knowledge of Africans. The way Africa solves problems is not the same way anyone else solves problems. AI is not just for engineering, and as we say, engineering is not only about calculation, it's about solving problems.
So problem-solving needs to be really adapted to the context and to the people who are using it. I'll stop here, and I'll be glad to answer any further questions.

Maarten Botterman:
Thank you very much, Moctar. Very clear. Like the internet, AI is something that affects the world and doesn't know borders that well, so the whole world needs to be prepared to grab the opportunities as well as deal with the threats. Thank you for that exploration. Katharina, can you expand on the Swiss government's position? We know the Swiss government has also been in the lead, for instance, in the development of the AI framework convention of the Council of Europe, and that you have also been active in the GFCE as a whole. What is your perspective, and what do you see as next steps? And maybe you can tell us a little bit about next year's conference.

Katharina Frey Bossoni:
Happy to do so, and thank you to my previous speakers, because it helps. I think you covered both the pros and the cons of this new, I mean, not new, but yet another technology for capacity building. It's been said that it carries certain risks; I think Jon elaborated on them specifically. It just came to my mind while listening to you that it can be used as a tool to enhance existing cyber operations. Yesterday I was at the AI for Good Summit, and Tristan Harris, whom you probably know from The Social Dilemma, the Netflix documentary, gave a very impressive speech on the risks of AI in general and how we should be aware of them. He showcased how you can destroy someone's reputation: he took quite a famous TV moderator in the US and showed how quickly, with deepfakes, because it's just so easy to do, you can prepare those tweets on X and elsewhere and then follow up with TV and newspapers. I think it's really about scale, because it's much easier and probably also cheaper than before to run such operations. And yes, we do also include mis- and disinformation as a cyber risk. So that's just something that came to my mind while listening to you on the challenges; I think most of the rest has been said before.

But it's also been elaborated before, and I think this is also a very important position for Switzerland, that AI offers great opportunities, including very practically for enhancing cyber resilience building. I think you mentioned before that it can help create, for instance, a tabletop exercise. We don't have a sufficient offer of capacity building at the moment, so it can be very helpful for enhancing the few offers we have, probably opening up more and more trainings and courses. So that's for sure a very positive side.

Looking at where that can be discussed: you mentioned it, and my colleague wrote it down because I have problems pronouncing it every time, the Global Conference on Cyber Capacity Building. See, thank you so much for the note that this is the GC3B. I'm just going to call it the Global Conference, because I think it's easier. It's a multi-stakeholder conference that took place for the first time in 2023 in Ghana, organized by the GFCE. I don't know if it was you or your colleagues,

Moctar Yedaly:
yes, my colleague and I.

Katharina Frey Bossoni:
Fantastic, thank you so much, because we believe that is exactly the kind of forum where you can discuss, assess and learn from each other's experiences on how to enhance training and cyber resilience. And Switzerland, as you mentioned before, is hosting the next edition in 2025, more or less in a year, in Geneva. On the one hand, we will obviously look back at the so-called Accra Call and how all the goals we set and the tasks we gave ourselves back then have been fulfilled. But we also think it is a very good place to look forward, because technology moves so fast; we probably don't even know now what will be possible within a year. I mean, you've probably seen Sora, how do you call that platform? It's kind of a development of ChatGPT, where you can now talk to it. So imagine, in a year's time, what other tools we will have, hopefully positive ones. And finally, on having it in Geneva, just a few words: you are here in Geneva, and probably some of you are based in Geneva, but we believe it offers a very good platform because there are many other actors here already working on cyber capacity building, like the Diplo Foundation and the CyberPeace Institute, and from the UN side also UNICC. So, hosting the conference in Geneva, we do hope that this broad ecosystem can be mutually reinforcing and engaging. Let me check my notes to see if I forgot anything. But yeah, obviously we will be very happy to welcome you next year at the GC3B, my goodness, the Global Conference. And I'm very happy to exchange on further questions. Thank you.

Maarten Botterman:
Thank you. And maybe if you split it into "the Global Conference" and "on cyber capacity building", it's easier to remember, because it is about that. And I also say to the people in the room that that will be the one to remember.

Katharina Frey Bossoni:
Can I add one thing that just came to my mind? I'm so sorry, I should have mentioned it before. You mentioned this for the African continent and other continents, but let me add something with my hat on as the host country here in Geneva, because that's also a very big priority of ours. Every single IO, okay, I can probably not say every, but many of the IOs here have been attacked. There was a very well-known, big attack on the ICRC two years ago, where the personal data of 500,000 refugees was stolen. So even from the host state perspective, it's a big challenge. So yeah, thank you.

Maarten Botterman:
Yes, I think I recognize that as part of the ICANN board: ICANN is also one of those organizations, running the global address book, basically the unique identifier system. And we keep it going, no matter what; this is because of the enormous resilience that has been built in worldwide with our partners. Something like this may be needed for AI as well. Now, back to the room, and happy to take any questions. We don't have a lot of time, but a little. And I see Jason Keller in the room, who's been involved in cyber capacity development, I believe.

Jason Keller:
Yes. So good morning, everyone. As a trained hacker and somebody who works in this space, I think it's good to note something. If I want to make you an ethical hacker, I cannot start you off by putting you in front of a system and saying "go" without teaching you the basics. Cyber capacity building is a governance exercise; we cannot skip over the basics. What are those things? Culture, for instance. If your government has an organizational culture in which people cannot openly share information about problems and address them, you cannot address cybersecurity issues. If you have people who are afraid to ask for help or to be honest about those things, again, you're not going to be able to solve these issues, whether they be in cybersecurity or artificial intelligence. A culture of collaboration in government: if I've seen anything around the world and in talking to my colleagues, it's that most ministries do not communicate with others openly, especially at the working-group level. If you cannot share cyber threat intelligence from one organization to another, you cannot respond to a cyber attack. So, again, these things are basic core functions of agile, nimble organizations, whether you are trying to address emergency management, cybersecurity, tech policy development, or a whole multitude of other multidisciplinary problems; these are things we cannot skip over. And I'd say lastly, having basic cyber and digital education at the policymaker level is, again, absolutely critical. If the folks writing the policies do not have the basic education to understand the technology, you're not going to have the outcomes that you desire. AI will be an incredible tool to empower people to make these gains, but we cannot skip over the ABCs, if you will, of improving government's ability to simply manage the situation, let alone make improvements in more advanced areas of practice.

Maarten Botterman:
Thank you very much, and thank you for reminding us that it's not only about training more cybersecurity experts to deal with this, but also about making sure, at other levels in society, that the awareness is there and that people know how to deal with it. Your standards may help in that. Chris Buckridge, please.

Chris Buckridge:
Sorry to jump in. There was one question from the chat in Zoom that I wanted to raise with the panel, because it's quite a specific question. It's about the Seoul AI Summit and the international network of institutions established there. The question was whether these are going to be physical institutions in certain countries or more virtual collaborations. And I think it's a bit of an open question what kind of institutions are going to be established and useful in this space.

Katharina Frey Bossoni:
I can say something, I'm happy to, and you can complement me very happily, because I also wanted to react to what you said. But first, on the safety summit: to my understanding, it was at the Bletchley Summit, which I attended as well, that the UK and the US announced their own AI safety institutes. To my understanding, each country deals with it a bit differently. Some countries actually have a proper institute as part of the government; those are the ones called AI safety institutes. Others, including Switzerland, and I've just been to Singapore, have something more on the academia side, like a centre dealing with these questions. So I think it differs a bit, as does the exchange on how it works. Probably you can say more, because I wasn't in Seoul and I wouldn't know, but I think one thing is the AI institutes themselves and the other is how they collaborate.

Allow me just to make a comment on his point, then I happily hand over to you. Thank you so much for pointing out those different needs around culture, including governance and government culture. I very much appreciate it, and it came to my mind because that's also something we work on with, for instance, HD, which is also an NGO in Geneva, or DCAF, where we actually try to do tabletop exercises with CERTs on the one hand, which in my experience are sometimes quite well connected amongst themselves. But very often the problem isn't between the CERTs, the techies, but between the CERTs and, for instance, the foreign ministry, the diplomats. That's where we put a big focus, and it would for sure also be an interesting topic to address at the conference. Thank you.

Maarten Botterman:
Thank you for that. Please, Daniel.

Daniele Gerundino:
Just a couple of things to add. Here we are still at the level of declarations, so we need to see what is going to happen concretely. But regarding the question: in certain jurisdictions, these institutes already are, or are going to be, regulatory agencies for AI. For example, in Europe, it is an element of the AI Act to establish such an institute, which in principle could become extremely powerful, as is the case, for example, for chemistry, where there is a European agency. So, in some jurisdictions these are going to be regulatory agencies, and what was very good in the Seoul Declaration is the idea that they need to share and exchange, because so far many of these institutes have been able, for example, to run tests or to detect problems, but they didn't have the teeth to, I would say, enforce actions on the problems that their research and testing detected. This is something that has to happen, and I think it is a very positive step that a number of governments and the European Union agreed to establish this type of network of collaboration. On the practical side, we need to see how these things happen, but the perception that this is essential is very important, and we hope this can move forward. One thing we have to take into account, which is also extremely important, is that a lot of the recent developments in these technologies also come out of universities and public research centres; it is not only, though it sometimes happens, that a private company captures this knowledge. If you have a way to create a network of these types of institutions and organizations and give them more, I would say, flesh and bones, that would be extremely useful. It's like what happened with the human genome. I don't know if you remember the story, but it was a matter of days before the human genome could have been, how to say, captured and patented by the private sector. Just to say, but anyway, this is another thing we need to take into account. There are positive signs, but everybody needs to be alert, because it is urgent to move; these things are moving very, very fast and with a lot of resources from the private sector. You remember that someone was here in Switzerland, at the WEF, asking for seven trillion dollars to fund the development of that stuff, so…

Maarten Botterman:
Yes, the speed of developments on one hand, and on the other the balance between the incentives to invest and the responsibility towards society, which is a fine balance. So I'll ask for a couple of final remarks, well, a final remark, because time is really running out. We'll take those takeaways and report back in a report. This is just one of the areas of emerging technologies we think is important to consider, building towards a more concrete conversation, and maybe even direction, by the time of the Global Conference on Cyber Capacity Building in Geneva next year. So Jon, can I ask you: you've been listening silently but attentively, any specific takeaways from your side?

Jon France:
Great that it's an active topic of conversation and that it promises so much. Happy to help where we can, especially on the cybersecurity side. I do take away that it's a whole-society approach; it's not just a technologist's or a security practitioner's approach. So I think the quid pro quo is: new and emerging technologies offer and promise so much, but it's a whole-society approach for adoption, for benefit.

Maarten Botterman:
Thank you so much. Mokhtar.

Moctar Yedaly:
Thank you very much. The GFCE has dedicated itself to building capacities globally. More specifically for Africa, our objective is really to have each and every country with its own CERT or CSIRT, its own cyber legislation and cyber strategies, and, specifically, good governance with respect to a good data infrastructure, which will really be the basis for AI. AI actually represents for Africa one of its greatest opportunities to leap into the 21st century; it could well be the African Renaissance, if Africa really uses AI to do something. But it requires decisions from Africa's leadership, and investment from government and the public, in partnership, in skills and education. That is extremely important. With that, Africa could really be contributing to the ethics and the development of AI, globally and specifically in Africa too.

Maarten Botterman:
Thank you so much. Big thanks to the speakers, and big thanks for your attendance. It's clear that AI is full of promise, and we need capacity to fulfil those promises. It's clear that it comes with threats, and it's clear we need to address those; standards will help with that, multifaceted, not only on the technical level but also on the policy level, the ethics to be ingrained in the development of AI, et cetera. We'll talk more about this and work towards a common understanding, because we all may have our own ideas in our heads right now. This was a first step in bringing those ideas closer together, and make no mistake, it's a longer way. I believe, maybe because of my long involvement in internet governance, that we can learn from the internet governance process, because this is also a borderless thing. And whereas internet governance has developed amazingly rapidly, AI is developing even more rapidly. So we need to be there, to make sure we don't get irreversible effects that we don't want; and we are at the table for that. Cyber capacity building is key in this. With that, I really thank you all and look forward to future conversations. Follow this space, follow the minutes of this meeting that you will find on the conference site, and go to thegfce.org for more information, also building up to the GC3B. Thank you all very much. The meeting is closed.

Speaker statistics

Chris Buckridge (CB): speech speed 195 words per minute; speech length 101 words; speech time 31 secs
Daniele Gerundino (DG): speech speed 151 words per minute; speech length 2325 words; speech time 926 secs
Jason Keller (JK): speech speed 144 words per minute; speech length 334 words; speech time 139 secs
Jon France (JF): speech speed 167 words per minute; speech length 1411 words; speech time 507 secs
Katharina Frey Bossoni (KF): speech speed 168 words per minute; speech length 1292 words; speech time 463 secs
Maarten Botterman (MB): speech speed 147 words per minute; speech length 1530 words; speech time 626 secs
Moctar Yedaly (MY): speech speed 160 words per minute; speech length 917 words; speech time 345 secs