Digital Governance 3.0

29 May 2024 10:00h - 10:45h

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Full session report

Expert panel addresses the future of digital governance and the challenges of rapid technological change

An expert panel convened to explore the rapid advancements and challenges in digital governance, with a focus on recent developments and the difficulty of staying abreast of the fast-paced changes in the field. The panel featured specialists from diverse sectors, including academia, the private sector, international organizations, civil society, and the open-source community, each offering unique insights.

Dr. Bruno Lanvin, a seasoned economist with a background in the UN system and the World Bank, initiated the discussion by underscoring the importance of data governance. He challenged the common misconceptions that openness undermines security and that data is a substitutable resource, likening data’s indispensability to that of air, rather than oil. Lanvin called for the rejection of these myths to prevent widening digital divides and to promote openness, including the embrace of open-source principles.

Brian Behlendorf from the Linux Foundation highlighted the ubiquity of open-source software in technology and introduced the Open Wallet Foundation, which concentrates on digital identity technologies. He emphasized the significance of interoperability and shared standards, particularly in light of the European Union’s eIDAS regulation, which requires member states to provide national ID wallets by the end of 2025. Behlendorf advocated for open-source collaboration to achieve more efficient and compatible solutions across industries.

Deniz Susar, representing the UN Department of Economic and Social Affairs, shed light on the UN’s efforts in digital governance, such as the Global Digital Compact and the inaugural UN resolution on AI. These initiatives seek to foster global cooperation, uphold human rights in AI development, and bridge the digital divide. Susar also discussed the UN’s e-government work, which evaluates online services in various countries and cities, and the potential of AI and smart city technologies to enhance service delivery.

Jim Caravalla, CEO of Offworld, spoke about the era of augmented sentience marked by the democratization of AI tools, leading to profound economic and societal shifts. He stressed the necessity for governance and standardization in mission-critical systems, particularly as humanity’s reliance on autonomous systems grows with expansion into space.

The panelists collectively recognized the need for innovative and adaptive regulation that can keep pace with technological advancements and involve a broad spectrum of stakeholders. They emphasized the importance of addressing the digital divide, advocating for language inclusivity and youth engagement in governance discussions. The concept of interoperability and standardization was highlighted as crucial across nations and sectors, with a focus on the unique challenges posed by AI and autonomy.

Audience contributions included concerns about linguistic inclusivity in AI services and the call for multi-sector, multi-stakeholder partnerships to tackle digital governance challenges. The discussion also pointed to the emerging divide between those who can and cannot utilize AI tools, underlining the need for high-quality, openly licensable datasets to ensure societal inclusion.

In conclusion, the session delved into the complexities of digital governance, the pivotal role of data and AI, the necessity for inclusive and adaptive regulation, and the imperative of global cooperation and ethical considerations in shaping digital policy. The panelists concurred that while the pace of change is daunting, it presents an opportunity for collaborative innovation and the development of robust governance frameworks to support sustainable and equitable progress.

Session transcript

Danil Kerimi:
So, very well, welcome to the session where we will be diving into digital governance developments, particularly this year, but looking back and forward as well. And the reason why we’re looking at that is because you probably, in capitals around the world, in boardrooms around the world, wherever you’re based, have had a hard time keeping up with everything that has been happening in digital governance in the past couple of years. This space has really picked up and there’s no way for us to slow it down. There’s a summit every week. Some are more important than others; this week’s gathering, obviously, is extremely important on the international calendar, but there are so many others. It’s hard for anyone to keep track of all the legislation coming up at the national, sub-national, municipal, regional or even international level, anything from cybersecurity to AI, blockchain and digital finance, you name it. So today, the idea is to really take a bit of stock of what’s going on, what we should be paying attention to, and perhaps answer some questions. We have fantastic speakers who come from academia, the private sector, international organizations, civil society and the open-source community, and they will present their views on what’s happening in the broader global digital governance debate, telling us what’s happened in the past 12 months and projecting perhaps 12 months into the future. I will ask them to introduce themselves for about a minute just before they speak. We will start with Bruno. And the reason why I want to start with Bruno is because before we actually dive into the what, we need to answer the why: why does digital policy really matter? Do you think anyone pays attention to what we’re discussing here today? Do any of the developers who are sitting and actually building our digital future really care about where the regulations and legislation are going? With this, Bruno, over to you.

Dr. Bruno Lanvin:
Thank you, Danil. Hello and good morning, everybody. So my name is Bruno Lanvin, and I’m a French economist, having spent 30 years in the UN system and the World Bank, and focusing on measuring the unmeasurable. That’s on my business card. The idea is that there are a number of things that we can only change if we can measure them properly. And many of the things we want to measure, we don’t know how to measure, very often because we have not been able to define them properly. And this is very true when we look at data governance, and I shall explain why. So I mentioned that I’m French, so you can count on the French to bring complexity to the level of total confusion. I shall try to stay away from that. And as far as economists are concerned, of course, we all know who was the first economist ever, and that was Christopher Columbus, because when he left, he had no idea where he was going. When he arrived, he was not at all where he thought he was, and most of that on public funds. So economists should stay away from being too predictive about data governance. And yet there are a number of elements that have to do with the why. Why is it that we should be concerned today with data governance? And this is what we are going to discuss, so I don’t want to get into too much of what this discussion is about, but I will just flag two myths that I think we should try to dispel right away. The first myth is that openness is the enemy of security, and of course other people on this panel are more qualified than I am to address this, but I think it is something very important when we try to take a global view. If some parts of the world are better equipped in terms of security equipment, security software, and security attitudes, we may create additional digital divides by pushing security for the sake of security. And for me, openness, including open source, is a critical element in that. So that’s the first myth worth dispelling when we address governance. A second myth, which I’ve heard again and again, is that the reason why we should be concerned about data governance is because data is the new oil. I’m sure many of you have heard that before: data is the new oil. It’s not. Data is not the new oil. Data is the new air. You can do away with oil. You can go to new kinds of energies whenever you need them. That may take time, it may take money, it may take faith, but you cannot do without air. The air we breathe, the air we move around in, the air that plants and the planet use, is a fundamental component of our environment. And this is what data is. It is not some kind of resource that we can substitute with another resource. It is something that is part of our lives and is becoming an increasingly important part of our lives. And if we don’t define it properly, if we don’t measure it properly, and most importantly, if we don’t govern it collectively and properly, we’re going to fail. And failure does not mean the collapse of our civilization, although it might. It means we’re going to fail at identifying the right opportunities and using them to make this planet a better and more equal place. So is that enough of an introduction?

Danil Kerimi:
Thanks very much, Bruno. That was perfect. Actually, you made my job as a moderator very easy because you mentioned two critical words right there, and with that I will pass to Brian, because you did mention open source and you did mention the development community. I want Brian to introduce himself, but obviously also to speak a little bit about both of his projects, and about how the community that he lives with and shares bread with every day actually thinks about all these global digital governance processes in their daily lives. Thank you.

Brian Behlendorf:
Thank you, Danil. And it’s funny, I thought I was just going to be attending the session, then he’s like, no, come on up. So I am actually very honored to be up here with all of you. So I’m Brian Behlendorf. I work for the Linux Foundation. I lead our initiatives on artificial intelligence, as well as an initiative focused on digital identity called the Open Wallet Foundation. Now, some of you on Monday morning might have heard an announcement in Doreen’s opening keynote of a complementary effort that we are now working on with the ITU called the Open Wallet Forum. The two are like two halves of one brain. And just to finish my background, I’ve been with the Linux Foundation for eight years. Prior to that, I’ve done a number of things. I was chief technology officer for the World Economic Forum. I worked in the Obama administration for its first two years, and I co-founded something called the Apache Software Foundation, which was really the first open source foundation focused on working with companies and, frankly, anybody to build common open source software to move further and move faster. Some of you might not realize this, but 70 to 90 percent of the software that sits in a phone, behind a website, or in your car is pre-existing open source code. Code that came from communities working together, very much like Wikipedia is built by people working together. But in this case, it’s often companies working together because they have a common interest in a shared platform. And this is a recurring pattern, not just in operating systems and low-level bits, but in financial services, in education, in all sorts of domains, and increasingly in government software. The Open Wallet Foundation, which we at the Linux Foundation started about a year and a half ago, has been focused on the technology pieces behind this emerging category of applications called digital wallets, right? You, of course, know them as Apple Pay and Google Wallet, but some of you may be tracking the eIDAS regulation in the European Union, which is now mandating that every member state ship a national ID wallet by the end of 2025. And this will be a holder for your national ID and for things and documents such as a digital driver’s license or a diploma from a national school. And the interesting thing is, now, post-COVID and after a lot of the work that we did around verifiable credentials for proof of vaccination status, there was a recognition in Europe and in other countries of the need for this technology to be privacy-preserving, right? To be able to present a credential to somebody without the issuer of that credential knowing that it was presented, right? And in many other ways, to have the same quality of privacy that you have today when you pull out your wallet and show your driver’s license to be able to buy alcohol. At least that’s how it works in the U.S., right? So there have been open standards, but now there is this push in Europe, along with pushes in other countries, and this recognition of digital identity as a key part of digital public goods, of digital public infrastructure. This project is not just a bunch of technologists, you know, nerds working together on small bits behind the scenes. Now this is a place where bodies like the European Commission, bodies like the UN, care very deeply about how this gets built. They’re starting to spend tens of millions of dollars in each country to build these kinds of applications. 
Our pitch is, if you do this independently, you’re going to waste a lot of money when you could be working together. And you’re likely to come up with solutions that are not compatible. Even if you try to implement the same standards, wallets are hard to build. And we should be building them together. And finally, we don’t have to have only one wallet on a phone. You’re likely to see wallets for health care information, separate from wallets for educational records and accomplishments. And all of those should share these underlying components. So at the Open Wallet Foundation, we’ve been driving adoption of these shared components, collaborating with companies like Google and Accenture and Visa, but also with a lot of small businesses, and increasingly with the member states that are deploying this technology. The partnership with the ITU has been about how we get formal recognition and formal engagement from the nations that are developing and adopting this technology, pushing it out and spending real money, so that we can all really work together as peers on the common technology and move faster and further around this shared thing. And that’s a pattern that I think we could see applied to the development of e-government technologies and to the development of artificial intelligence, which is another big domain for the Linux Foundation and something that I oversee our strategy around, and do this in a way that not only delivers great functionality and more of it, but also does it in an ethically correct way, in a way that reflects the needs of the citizens and an inherent degree of safety, whereas today, too often, Silicon Valley does move fast and break things. But if we work together, we can move further, faster, and in a safe way.

Danil Kerimi:
Thank you very much, Brian. You mentioned a couple of key themes there, I would say: the differences of approaches, the fact that somebody may be moving faster than the others. And the only universal organization in the world that we know today is the United Nations, which really provides the basic infrastructure for us to come together for common solutions for everybody on this planet. With this, I would like to turn to Deniz, who can introduce the work that UNDESA is doing in this field. Thank you. Good morning, everyone.

Deniz Susar:
And thank you, Danil, for inviting the UN Department of Economic and Social Affairs here. So as you said, there are many things happening at the UN in the area of digital transformation and AI. And our UN Secretary-General, as maybe some of you know, is an engineer, so all these issues are very important to him. I will divide my intervention into two parts. One is about digital governance and internet governance. The second one is on e-government, like Brian mentioned. So about digital governance: we are here, and everyone is talking about the Global Digital Compact, which will be adopted at the Summit of the Future as part of a Pact for the Future. This pact will guide the international community on global cooperation in the fields of sustainable development, peace and security, UN system reform, and the Global Digital Compact, with objectives, principles, commitments and actions, which will be adopted in September. You can look at the first version online, which is being discussed by governments and with input from all stakeholders. It is trying to close the digital divide, expand the benefits of the digital economy to all, foster an inclusive, open, free, and safe digital space respecting human rights, advance responsible and equitable data governance, and strengthen international governance of emerging technologies, including artificial intelligence. Speaking of artificial intelligence, on 21 March 2024, this year, there was the first UN resolution on AI, co-sponsored, in other words backed, by more than 120 member states, and it was adopted by consensus without a vote. This was a big success for the General Assembly, for all 193 member states to agree on a draft AI resolution. I also encourage you to look at that. It recognizes AI’s potential for achieving the sustainable development goals, but also calls for the respect, protection, and promotion of human rights in the design, development, deployment, and use of AI. And these principles are there. The SG even received a question: OK, they are all there, but what will you do to get all those profit-oriented private sector companies to respect them? And his response was raising awareness. He also noted that there are many countries that don’t actually follow human rights, not all the time; all we can do in this area is do everything we can as stakeholders. And the last thing in the internet governance area, still within the last year: last June, the UN also published a policy brief on information integrity on digital platforms. As you know, according to several studies, fake news spreads six times faster than real news, and the brief calls on all stakeholders to refrain from disinformation and hate speech on digital platforms. There is a policy brief from the UN on that. My second part, very briefly, is about our work on e-government. As the digital government branch at the UN Department of Economic and Social Affairs, we look at the e-government development of all 193 UN member states. We recruit two volunteers speaking the native language in each country, and they assess the online services for us. We have around 160 features where we look at life events such as getting an online birth certificate, a driving license, or a passport, and we publish the United Nations E-Government Survey every other year. The next edition will be launched during the Summit of the Future in September. If you search for the UN E-Government Knowledge Base you can see the data. 
Obviously, the digital divides that we see also exist in e-government. Many countries in Africa and in Oceania, except New Zealand and Australia, are lagging behind. There is lots of room for improvement. As part of this we also look at the most populous city in each country and assess the city portal. Again, we recruit two people living in that city and we have around 100 features that we ask them to assess. We publish this as what we call the Local Online Service Index, and obviously larger cities perform better and cities in the north perform better. But this is an area where we collaborate with any government or non-government entity. We sign a memorandum of understanding and we share our methodology. We give our platform to this entity so that it can recruit more people, for example from a university or academia in that country, so that we can apply the methodology in several cities in a single country, because there is a demand to see why we only included the most populous city, why not Ankara, for example (I’m from Turkey), why only Istanbul. So we sign this agreement, and it’s open; if anyone is interested you can find me after the session. In this way we can apply the methodology in many cities in a country, and we did this in Brazil, in Palestine, in Jordan, in India, Uzbekistan, Greece, and a few other studies are going on right now. Back to you.

Danil Kerimi:
I’m just noticing there’s a gentleman standing at the door. Sir, if you pick one of the first three tables, these are massage chairs; if you feel like Steve Beck, please do come forward, the chance will disappear in a minute. But Deniz, you mentioned something very important: the very important development of the first UN AI resolution, the first AI resolution passed by unanimous consensus at the UN General Assembly. But again, I’m coming back to my point that there’s just so much stuff happening that it’s hard for any policymaker or any staffer in any ministry, or even the policy team on the company side, to keep track of everything. So turning to Jim, I just want to ask him, why is it important for us to set the governance structure for this emerging tech here on Earth? And please do provide an answer for those desperate souls burning the midnight oil. Thank you, Jim. Hello.

Jim Caravalla:
Thank you, Danil. It’s a pleasure to be here. I’m Jim Caravalla, CEO of Offworld. We’re building swarms of machine intelligent robots and infrastructure to build civilization out into the solar system and to the next stars before the end of the century. Since the last quarter of 2022, I would say that we have entered the age of augmented sentience. This is a new era that is unknown territory. It is the time when the machine intelligence tools that practitioners have developed over the last decade or two, particularly based around deep reinforcement learning, have now become so accessible to the wider population through generative AI and large language models that we are seeing an acceleration of human capability and a democratization of these tools to such an extent that we are in the greatest change of civilization since the establishment of the Gutenberg press in the 1400s. This is an extraordinary time, and every week and every month the changes are becoming ever more significant. For the kinds of systems that we’re building, for example, that will operate here on earth terrestrially and in space celestially, machine intelligence, autonomy and the ability to communicate intelligently with human operators will become ever more significantly important. But the most significant difference, I think, from a societal perspective is that I could take an informed, intelligent, general person from the street, bring him or her into our laboratories, and within a week that person would have an effective capability to control and operate many of these systems with these new large language models. The acceleration of capability, the liberation of opportunity and the democratization of these tools is going to be such an economic change. I don’t know, to your point, I don’t know how we keep up in terms of governance on earth, let alone in space, where we are really facing multiple new frontiers. So I think we have to look very carefully at how these tools are developing, how the use cases are developing, where the opportunities are. For example, we’re deploying a lot of our robots across the African continent, where we want to take people out of these dangerous mines, and the opportunities for local communities who have centered their lives around mining, for example, now to get into autonomy, robotics, machine intelligence, fourth industrial revolution technologies, are right there at their fingertips with a handheld device and a power unit. I think we are in for extraordinary changes and we are going to notice the difference, and we’ll be able to measure the differences month by month, quarter by quarter as we proceed.

Danil Kerimi:
Brilliant. Now that we have established that it is indeed important, perhaps I would just turn around: everybody has their vantage points, their pet projects and their exciting opportunities ahead of them, but for the global community, is there any area of broader digital governance that you think is largely underserved, tremendously overhyped, or something that we are not even paying attention to? Because sometimes we can only see what we can see, but at the same time there’s a whole universe of issues that can and should be addressed. Perhaps Brian, from your perch in Silicon Valley, is there something coming that is gonna overtake everything that is being discussed here? Or, on the contrary, do we just have to hunker down and make sure that we get AI right, we get digital finance right, we get open source right? Anything that you want to share with the audience?

Brian Behlendorf:
I would clarify that while physically I sit somewhere in the Silicon Valley, Bay Area kind of setting, we as the Linux Foundation operate globally, and I sometimes list my home address as seat 16A. But I think, look, technology continues to advance on all fronts. Even in the non-AI space, it’s getting cheaper to store data, it’s getting cheaper to transmit it. What we’ve seen really open up opportunities globally has been the availability of this technology as building blocks, right? The access to it, the fact that a lot of software is written in programming languages that are easy to learn, much easier than they used to be. C and assembly language were very challenging. Today, Python, which is the predominant language that AI systems are built in, is something they can teach in grade school. So, you know, a lot of investment has gone into capacity building out there in terms of usage of internet technologies, but it’s been good to see the development of local technologists around the world who are capable of building, of not only receiving these systems that might be built by Silicon Valley or by well-intentioned charities, but also continuing to maintain them, to evolve them, to bring a degree of digital sovereignty that isn’t just about where the software runs, but is also about the ability to evolve it, to maintain it, to build micro-enterprises around it that are much more resilient than leaving it all up to a big technology company here or there. Specifically on the AI front, I think we are still in the midst of rapid evolution, and part of that evolution is that it’s getting cheaper and cheaper to build higher and higher quality models, cheaper in terms of less computation time and less energy consumption, and cheaper and cheaper to do inference, which is the way you query these models. There are still orders of magnitude of improvement ahead. We’re also reaching the point where it’s less about more and more data, and more about higher quality, better tagged data. So high quality data like Wikipedia is much more interesting and builds much better models than low quality data like Reddit comments, right? And so the shift that I hope we will see, and I see hints towards it out there in the world of AI governance, is how we get organizations to work together to build high quality, openly licensable data sets that can then be used to build models that can sit everywhere, not just a model controlled by Sam Altman and different models controlled by Sergey Brin, you know, or Satya, but instead models that all of you could run in your countries, even on your phones, in your national data centers and everywhere. And I think that level of investment, in both the underlying data and the tools to build these systems, and finally in the education and capacity building, should be a priority for all of you.

Danil Kerimi:
Brilliant. Bruno, let me turn to you. You have seen quite a few hype cycles and the evolution of governance approaches over the course of your career with the World Bank and the United Nations. What is different this time around? What should we be paying particular attention to? Any lessons that you would urge us all to learn from previous failures, or perhaps successes?

Dr. Bruno Lanvin:
Thank you, Danil. The main difference is the acceleration of time. That is, we have less time to consider governance issues than we’ve had for any of the previous waves of technological innovation, or innovation generally. I’d like to relate that question to the previous one: are we overlooking anything important in our approach to regulation? Because the two are interconnected. And the answer is yes, we are overlooking a number of very important things, but I cannot tell you which. There’s some similarity between the world of data governance and the world of talent production. We have to produce the talents that will be required for jobs that we cannot even define today. The jobs that will be offered to young people five years from now, for a large number of them, cannot be defined today. We have the same kind of issue for regulation: what are the problems we should be tackling? And what we need is innovative regulators. Now, innovative regulator sounds like an oxymoron, like risk-taking central banker, or even something illegal, like creative accountant. There’s some danger that may be felt about trusting people who may not be specialists to become regulators, but we don’t know where the specialization will go. We know that if we want to address societal issues related to data regulation, we have to involve not only engineers and scientists; we should also involve government people, but we need to involve ordinary citizens, we need to involve artists, we need to involve neurosurgeons, we need to involve a number of people whose capacities and talents will combine in defining a future-proof kind of regulation. And that’s a danger: how can you have something that is reliable, that people will respect, while saying at the same time, you know, this regulation may change over time, because circumstances will change? So trust will be an important mechanism in generating that apparatus.

Danil Kerimi:
Thank you, Bruno. Deniz, I want to turn to you now to ask you a very simple question. Very often, national regulators look to the United Nations to provide them with guidance, assistance, best practices, or simply to tell them what else is going on out there as they prepare to become more agile in their approaches to digital governance and regulation. Is there anything you advise the ministers that you meet, the agencies, or officials at the city level, when they approach you?

Deniz Susar:
Thank you, Danil. In addition to those discussions that I mentioned, the GDC or WSIS Plus 20, I see two things here. Because we are talking about AI: the use of AI in the public sector will become bigger and bigger, and most governments do not have enough capacity and are trying to catch up. So one thing we could focus on, and again we need support here from the private sector, is whether there is any way to use AI so that we can leapfrog. It is the same conversation that we had with mobile phones, which in some parts of the world were successful in leapfrogging. Can we also do it with AI, for better service delivery in public administration? That’s one thing. And the second one is this hype about smart cities. You know, cities, I think, are one of the biggest inventions of humankind. There are many things happening in cities, lots of progress happening, and I think the majority of development will happen in cities. Think about work and employment, but cities also have a lot of tools, you know, for the environment, pollution, etc. So I think all these smart city initiatives, which are based on AI and data, can also help us accelerate progress. I think we see those two main areas: the use of AI at the national level, and also how we can use these technologies, which we call smart city, though we don’t necessarily agree with that term because there is no smart city; a city will always be catching up with the latest technological developments. But how can we use this to deliver services in cities? I think these two will become more and more important in the area of e-government.

Danil Kerimi:
Thanks very much, Deniz. Before I turn to Jim to talk a little bit about what he sees in terms of what needs to happen, because this is the technology that will take us away from this planet, right? This is what will help us expand to different parts of the solar system and the universe. But before I turn to him to talk about the potentially good and bad scenarios, oh, I forgot to mention that we’ll be running a little prize competition here. Right after his intervention, whoever asks the first question will get a prize, and whoever asks the best question will also get a prize. So you may want to start preparing those questions now to make sure that you are the first ones, at the very least. Jim, so what is the dream scenario for our tech, digital governance and AI regulation here on Earth, and what is a nightmare scenario that would prevent us from actually expanding into the universe?

Jim Caravalla:
So clearly the use of machine intelligence, or artificial intelligence generally, for industrial processes and for expanding productivity and the economy is absolutely key. And in every industrial process that we have across the economic fields, there’s always standardization so that we can have compatible operations and can maintain standards and interoperability. This is extremely important in all of the arenas, whether we’re working with autonomy or not. If we don’t undertake the governance of that standardization of processes and the interoperability of our artificial intelligence systems, in terms of human use, autonomy, robotics, and other application areas, the risk that the smallest learning variations that occur in some of these machine learning frameworks then exacerbate and become magnified is pretty critical, especially when we consider that mission-critical activities requiring 100% veracity in operations, in data interpretation, and in machine decision-making are going to be extremely important. Now on Earth, we tend to get away with having less integrity in mission-critical systems because we’ve always got human operators that are monitoring and keeping an eye on things. And obviously, we strive to get that mission-critical capability. As we move out into space, or work in more extreme environments, or with more mundane repeated operations, that necessity for mission criticality becomes key. Yet the economic drivers for the use of autonomy add pressure to increase the autonomy applications within those mission-critical systems. So that is exactly where we get into the blurry territory of governance and standardization over machine learning autonomy that we don’t actually know how to define. We can’t look under the hood at some of the outputs of the algorithms. It’s going to be an extraordinary challenge, especially when we don’t have a human presence in those environments and we are relying on our autonomous systems for mission-critical operations. So I think that’s going to be one example of a key area where governance is going to be required.

Danil Kerimi:
Thank you so very much. Now, as I mentioned, the combined cognitive ability of the panelists far outstrips any model that is currently on the market, be it Bernie, Gemini, Claude, or any other. So I would urge you to ask the questions that you would want to, perhaps, ask some of your large language models, or any other question, really. And as I mentioned, there will be a prize for whoever asks the first question.

Audience:
Are we allowed to hallucinate the answer?

Danil Kerimi:
The winner, please introduce yourself and ask the question.

Audience:
My name is Professor Selma Bassi from eWorldwide Group. I’m a professor in artificial intelligence, innovation and social entrepreneurship. Fantastic, I really enjoyed the conversation, and I’m scared about what you’ve been saying, especially Jim and Bruno. I’ve got two things: one I want to suggest and one I’m wary of. I live in Dubai but I’m from the UK. I came here, I’ve just come from Germany and from Strasbourg, and guess what? I brought my European adapter. In the olden days I worked for AMD designing ASIC chips, in 1980. The compatibility and interoperability of our electronic components is still not there; Switzerland has its own, and that’s a simple mechanical example. So when you talk about interoperability, Jim, I think we are a long way off, I don’t think we’re there yet, and I’d love to know how we can get there. The risks are there, as you said, with data not being able to be extracted from different sources to then make the same decision, or an informed decision. The second thing is, yesterday in my policy statement, I talked about the need for multi-sector, multi-stakeholder partnerships, which you emphasised beautifully, Bruno. What I am suggesting, in addition to your lovely point about neuroscientists, is also intergenerational discussions. Looking at the average age in this room, it’s really, really important that we bring in the young. They are the people who are actually going to think outside the box about a future that we haven’t anticipated and the jobs they will need, and I want to understand how we engage such platforms and bring in kids from Africa, from Asia and of course from Europe and the rest of the world. Thank you so much.

Danil Kerimi:
Jim, Bruno, you want to start first and then anyone who wants to?

Jim Caravalla:
I’ll start briefly on interoperability. A very good point, and well made. I don’t know, but I’m making a hopeful assumption that the age of legacy infrastructure, where infrastructure emerged in nation states that were generally isolated in terms of interoperable standards for most of the 20th century, is different from the globalized age we’re in now, where AI deployments are global for the most part. And so I think the interoperability will be global. The challenge will be more about whether there will be interoperability across use cases, and I think sectors and applications may take the place that nation states held in the last century. So I think you’re very prescient in flagging up that there are still interoperability challenges, but I think this time they will be across sectors rather than nations.

Dr. Bruno Lanvin:
Just on the same question of interoperability, I remember working in the late 1990s on e-government services in a certain number of countries around the world, and one of these countries was Korea. And I worked with two institutions at the time called KISD and KAIST, with different views but very good ideas. And both of them underlined that the first cost item was to spend money on getting rid of legacy systems. In other words, they realized that trying to build and reconcile different data systems within government and other administrations was going to be more costly and less efficient than getting rid of everything that was there and build something that was from the start interoperable. Very few governments realized that at the time. I think the world as a whole has learned since then, and the challenges we have to face now require very early agreement. on compatibility, on interoperability, and a number of elements. And I would think that the point about getting the young involved in governance is largely an issue of interoperability, but we should do it.

Danil Kerimi:
Thank you very much. Before I turn to the other speakers, I think there are a couple more questions. We’ll take them together. So first the gentleman here, then the gentleman here.

Audience:
Thank you very much. My name is Kwaku, from Ghana. In the era of digital governance, vis-a-vis the era of artificial intelligence, there is a key missing component that we must fight to bridge. How do we bridge the linguistic gap if we have to reach out and give services to those in far-flung areas? As we were speaking, I tried a language from Ghana called Akan. I asked the question, where is the WSIS being held, in that local dialect, and I had no answer. It said, oh, can you correct me? Can you now come in a language I can understand? So I had to put it in English, and it readily said: in Geneva, Switzerland. There is a major issue we should address if we want digital governance, as governance 2.0 empowerment, to reach the far-flung areas. And this, I think, we must build together. And I’m very happy to talk about bridging the youth gap in bringing about this capacity development. Thank you very much.

Danil Kerimi:
Just for information, I’m 18. It’s just that I look this old because of all the laws and regulations I have to follow.

Audience:
Hi, Lee McKnight, Syracuse University. Really enjoyed this session. Sorry to get here late. First, I’m going to try to get a pledge from all of you to promise to stop saying one word, which is hallucination, which is not something any machine can do. They can either lie or make things up. So you can choose which of those terms you prefer when talking about these large language models, but they’re not hallucinating. One. Okay, so that’s just my observation. Second. All right, you can keep on saying whatever you want.

Danil Kerimi:
That was the champagne.

Dr. Bruno Lanvin:
That really happened.

Audience:
All right. Second. I’m gonna challenge things a little bit in terms of the level of where things are going. I’m just looking at my own children: one of them, you know, studied history, and now they’re running an AI startup. Really? Yeah, that’s happening now. So I agree that this is really spreading out further. The level of education and diffusion is already sort of underway at the grassroots level. Kids are just picking up and playing with these tools. They’re using them in schools already. We older folks have to catch up with the kids. The final observation or question is: why is there no such thing as a certified ethical AI practitioner? Thank you.

Danil Kerimi:
Thank you so very much for your questions. I will turn to anyone who wants to pick up on any or all of those questions, but please also include, at the very end of your answers, 30 seconds of concluding thoughts before we finish this session. Anything that you want to leave us with.

Jim Caravalla:
I’ll just give one brief 20-second comment. I recommend checking out a Microsoft report that just came out: the biggest divide that is emerging will not be between, excuse me, AI and people. It’s going to be between those people who can utilize AI tools and those people who can’t. That’s the next dividing front. And I think it’s very difficult to figure out how to determine qualified practitioners, because the field is just changing before anyone knows how it’s changing.

Danil Kerimi:
Thank you. Brian?

Brian Behlendorf:
I could give 10-minute answers to each of the questions asked. The one I did want to focus on was the question about international representation and access for underrepresented languages and cultures. The number one thing I think nations and development agencies can do to help prepare low-resource countries, but everyone frankly, for inclusion in this AI future is to build high quality data sets of content in the languages to be preserved, as well as capturing cultural artifacts and stories from those cultures. And to do that in text: go ahead and scan whatever paper archives you have that represent your national libraries, your national culture or your ethnic culture, and capture it in audio and capture it in video. So that as these AI models become multimodal and start generating video, audio and other content, they are able to do that in a way that reflects the broad sweep of society and not just those cultures that have invested in it.

Danil Kerimi:
Deniz? Thank you.

Deniz Susar:
I mean, I share the concerns. I think it’s a big concern. All these language models are trained mostly in English, so I think we need to do some work there, and I think the idea here is good, but we need much more. And about ethics, I just want to highlight that UNESCO produced the first ever global standard on AI ethics, the Recommendation on the Ethics of AI, which, as I’m sure you are aware, was launched in 2021. And this is also one thing that’s still being discussed in the GDC, which will be adopted in September.

Danil Kerimi:
Bruno, you kicked us off. You will bring us back home.

Dr. Bruno Lanvin:
Thank you, Danil. Yeah, I’d like to pick up the question about languages because I think it is fundamental. At WSIS, the first edition of WSIS, I was then heading the World Bank delegation. We had several sessions with somebody many of you may remember, Adama Sarkozy. Adama was the chairman of the Foundation for African Languages for a couple of decades, and he made a very eloquent plea to maintain the diversity of African languages and use digital technologies to do it. In other words, not to see the two as competing, but to see digital technologies as adding to our ability to indeed enjoy and continue protecting the diversity of our languages. This will be essential in terms of interoperability, mobilizing talents, and bridging some of the other divides that have been identified, like intergenerational divides. There’s one word that we have not pronounced, and I think it may be due to the composition of this panel, an all-male one, which is gender. I think that what we are seeing in AI, in a number of other areas, and in digital governance generally is that the gender dimension is taking new avenues, unprecedented ways in which biases are being identified, but also contributions can be identified. If I can merge the two, language and gender, I cannot refrain from being very biased and saying that the best language in the world to address gender issues is probably French, because this is a language in which the problem is masculine and the solution is feminine.

Danil Kerimi:
Thank you very much, Bruno, and thank you very much to our prize winners. Your prizes have already been dispatched; they should arrive at your home. If they’re not there on time, all complaints to Swiss Post, please. With that, a big, big, big thank you for being such a wonderful audience. The next panel in this room will be on space and will be as exciting as, if not more exciting than, what you have just heard. Thank you very much.

Speakers’ statistics

Audience (A): speech speed 177 words per minute; speech length 768 words; speech time 260 secs

Brian Behlendorf (BB): speech speed 199 words per minute; speech length 1809 words; speech time 546 secs

Danil Kerimi (DK): speech speed 188 words per minute; speech length 1646 words; speech time 525 secs

Deniz Susar (DS): speech speed 158 words per minute; speech length 1390 words; speech time 529 secs

Dr. Bruno Lanvin (DB): speech speed 149 words per minute; speech length 1500 words; speech time 604 secs

Jim Caravalla (JC): speech speed 135 words per minute; speech length 1099 words; speech time 487 secs