Main Topic 3 – Keynote
19 Jun 2024 10:00h - 10:30h
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Session report
EuroDIG 2024: Leaders call for ethical AI and international cooperation to bridge the digital divide
At the EuroDIG 2024 forum, a pivotal session on artificial intelligence (AI) unfolded, featuring keynote speeches from Tomas Lamanauskas, Deputy Secretary General of the International Telecommunication Union, and Marija Pejčinović Burić, Secretary General of the Council of Europe. The session delved into the multifaceted impact of AI on digital policy and society at large.
Lamanauskas opened his address by acknowledging the potential and perils of AI. He highlighted the unsettling rise of AI-powered disinformation and deepfakes, particularly concerning in an election-heavy year. A study revealing gender bias in AI systems was cited, illustrating the need for AI to be closely tied to ethics and human rights.
The economic promise of AI was underscored, with generative AI projected to add a staggering $4.4 trillion to the global economy and digital solutions, including AI, expected to help meet 70% of sustainable development goal targets. These include advancements in health, education, and climate action. However, Lamanauskas warned of the “AI divide,” where less connected countries, particularly in the Global South, risk being left behind due to inadequate digital infrastructure and regulatory frameworks. He emphasised the importance of international solidarity and resource sharing to prevent AI from widening inequalities.
The Deputy Secretary General also shed light on the United Nations’ initiatives in AI governance, including the work of the UN Interagency Working Group on Artificial Intelligence and the AI for Good platform. He raised concerns about AI’s environmental footprint, noting the significant energy and water consumption associated with AI technologies. Lamanauskas called on the tech industry to commit to emission targets and transparently share greenhouse gas emissions data, positioning AI as a part of the solution to environmental challenges.
Pejčinović Burić’s keynote focused on the balance between fostering innovation and ensuring regulation in the AI sphere. She referenced the Council of Europe’s track record in creating treaties to address tech-related challenges, such as the Budapest Convention on Cybercrime and Convention 108 on data protection. She announced the adoption of the new Framework Convention on AI, the first international legally binding treaty in this domain, designed to protect individual rights and ensure AI systems adhere to high standards, based on principles such as human dignity, non-discrimination, privacy, accountability, and safe innovation.
The Secretary General highlighted the treaty’s global potential, as it is open to countries outside Europe, aiming to harmonise AI regulation worldwide. She also mentioned the need for sector-specific instruments to tackle challenges like AI-induced bias and the Council of Europe’s commitment to developing tools to evaluate the new convention’s implementation.
Both speakers called for international cooperation to harness AI’s power for prosperity, sustainability, and inclusion. They emphasised the need for robust legal frameworks and inclusive dialogue to mitigate AI’s risks and ensure it contributes positively to human rights, democracy, and sustainable development.
Key takeaways from the session include the recognition that AI innovation is outpacing the capacity to regulate it, the call for pragmatism in governance, and the consensus that the future of technology, including AI, is ours to shape responsibly. The session concluded with a sense of urgency and a call to action for all stakeholders to work together to ensure that the age of AI is characterised by advancement and inclusivity, rather than fear and division.
Session transcript
Moderator:
So, good morning, dear EuroDIG participants, good morning. It was the after-party yesterday, so not so much energy, but lots of coffee in your blood, I feel that, and I heard that. But it's really nice to see you all here, even if it's morning, you know, and you had nice concerts at night, yeah, a long concert, as I heard as well. But yeah, let's start the day, our last big session. And the main topic of it is artificial intelligence. As I mentioned on the zero day, or the first day, I've been reading an article by Thomas Schneider, where he compared engines, what we did 200 years ago, with artificial intelligence, what we are doing just now. And the similarities, differences, and challenges we face are quite similar. Am I right, Thomas? Understood correctly? Thank you for that. But yeah, it's not about me, and not about Thomas this time. I want to invite another Tomas, Tomas Lamanauskas, Deputy Secretary General of the International Telecommunication Union, to be our first keynote speaker.
Tomas Lamanauskas:
So thank you very much. I think I'm getting familiar with this podium really well, you know, over this conference, and I'm really glad to be here. So indeed, ladies and gentlemen, Secretary General Pejčinović Burić, and all colleagues here. It's great to address you today and talk about artificial intelligence, which is a topic that has become the hot topic, probably the hottest topic in digital policy, if not the whole of policy, over the last year and a half. And it's really been raising a lot of very complex questions. So indeed, AI-powered disinformation and deepfakes have become a daily reality. And it's very unsettling, especially in the year when around half of the world's population are going to the polls. Meanwhile, a global study found that about 45% of AI systems show gender bias. Also, fears of existential threat, something like, think of the Terminator, have been on the rise; whether they're real or not is still a matter of much discussion, but definitely a lot of people are fearing that. Indeed, we cannot overlook the increasing use of AI systems on the battlefield as well, which probably inspires those fears. These risks need to be recognized and need to be addressed. But we cannot focus solely on downsides, otherwise we risk missing the enormous benefits that artificial intelligence can bring as well. So generative AI alone can add an estimated 4.4 trillion US dollars to the global economy. Digital solutions, including AI, can help meet 70% of sustainable development goal targets, including improving people's health, education and livelihoods, building better energy grids as well as communities, protecting biodiversity, land and water resources, and strengthening climate action. Therefore, we need to harness the power of AI, but to do so in a safe and responsible manner. And of course, appropriate governance of AI is key to achieve that. I'm expecting to hear today quite a bit more about the Council of Europe's Framework Convention on AI, so I think the Secretary General will definitely cover that. But indeed, such frameworks as this convention are very important in this regard. I was told also that this convention was adopted on the 17th of May this year, which for us is a very special day, because it's our birthday. So we see that's very symbolic, as this year we celebrate the 159th one. And again, it was, I think, a great present for the world as well on this one. Of course, there are a number of other governance steps, including the European Union AI Act, which was finally recently fully adopted, the US Executive Order on AI, the G7 Hiroshima process, and the AI Safety Summit process, with the most recent one taking place in Seoul just around a month ago. As well, of course, AI regulatory measures in China and beyond. The conveners of those different processes gathered at ITU's AI Governance Day on the 29th of May in Geneva. This day was convened as part of the AI for Good Summit, where policymakers came together with private sector and academia leaders to discuss how to shift from principles to practice in governing these technologies. Indeed, we were able to welcome 70 ministers, regulators, and high-level policy leaders, 25 UN representatives, and over 100 representatives of industry and academia for that inaugural Governance Day, with more than half of them from developing countries.
And indeed, on that day, the Minister from Bangladesh spoke for many when noting their absence from many of the AI governance processes that were presented, and appreciated the opportunity for an inclusive dialogue on global governance, bringing everyone together. More generally, what did those policy, academia, and industry leaders say they need from AI and AI governance? They want to see a few things. First of all, responsible frameworks, tying AI closely to ethics and human rights. Second, they want to see interoperability, both of technology platforms, so they can work together, but also of regulatory frameworks, so that regulatory frameworks don't conflict but can work together around the world. They want to see international technical standards that underpin and enable implementation of these requirements. They also want to make sure that AI bridges digital divides rather than creating new ones, such as an AI divide. As well, they want to see global solidarity and resource sharing, to make sure that AI generally doesn't leave anyone behind. These are very valuable insights that we took note of in developing governance frameworks and our activities in this regard. One thing I would like to, as people say in the tech industry, double click on is the AI divide. Indeed, being left out of the AI revolution is one of the greatest risks for a lot of people and countries. Today, still, 2.6 billion people are unconnected, with the vast majority being in the Global South. Actually, about two-thirds of the population in least developed countries remain offline. The large majority, around 95%, live in low- and middle-income countries, where modern data infrastructure, like co-location data centers and access to cloud computing, is most lacking. Clearly, being left out of the digital world makes it impossible to be part of the AI revolution. The AI divide has many more faces, though. For example, concentration of innovation and patents: by some estimates, three countries in the world hold around half of the AI-related patents. Place in the value chain: some countries produce key components that enable AI, such as microchips and foundational models, and employ PhD-holding engineers, whereas others are sources of raw materials and data labelers. As well as the policy divide: our recent AI Readiness Survey among ITU member states demonstrated that AI regulatory frameworks are still in their infancy. Still, 85% of the countries lack AI regulatory frameworks, and more than half of respondents said that they don't even have an AI strategy. To paraphrase the words of United Nations Secretary-General António Guterres, who said at the ITU Council just a week ago, we must address these challenges so that AI never stands for advancing inequality. I kind of like that AI interpretation, and hopefully it will never be that way. What do we do to respond? And here I stand as ITU, but also I stand as a member of the United Nations system, which works together to address some of those challenges. The United Nations system has been working to support the world in harnessing AI in a safe, responsible manner, and we've been doing that for much longer than ChatGPT has been around. The recently launched UN system-wide paper on AI governance identifies over 50 instruments, either directly applicable to AI, around half of them, or applicable to very closely related areas, such as data and cybersecurity, that could apply to artificial intelligence.
Some are overarching normative frameworks, such as our colleagues at UNESCO's Recommendation on the Ethics of AI, with its implementation framework as well. Some of them are sector-specific. For example, the World Health Organization's recommendations on using AI in health applications, or the UN Interregional Crime and Justice Research Institute's recommendations on using AI in criminal justice. So the paper that I mentioned was developed by the UN Interagency Working Group on Artificial Intelligence, which since 2020 has brought the UN system together to coordinate AI-related activities, and I have the pleasure of chairing this group together with Gabriela Ramos, Assistant Director-General for Social and Human Sciences of UNESCO. This interagency group, an internal UN group, complements the work of the UN Secretary-General's High-Level Advisory Body on AI, which is an external experts group, where independent experts are analyzing the landscape and providing recommendations for the international governance of these technologies. We, as the UN system, have also been proactive in developing AI platforms and tools to support countries worldwide. Our latest report on UN activities on AI, which we launched again just a few weeks ago, details over 400 projects across the UN system, actually 47 agencies, that leverage AI for various aspects of sustainable development. As well, ITU and other UN agencies have worked to set technical standards for AI in a range of vertical sectors. For example, we've been working very closely with WHO and WIPO on AI and health, the Food and Agriculture Organization on how AI can boost agriculture, the World Meteorological Organization and the UN Environment Programme on AI solutions for emergency response and disaster management, and the UN Economic Commission for Europe, UNECE, on intelligent transport systems, just to name a few examples. Overall, we have around 200 technical standards on AI, which either have been published or are being developed now. Our work strives to level the playing field across developed and developing countries. ITU's challenges on AI and machine learning involve, for example, real-world and simulated data, technical webinars, mentoring, and hands-on sessions in which participating teams create, train, and deploy models mapped to AI standard specifications. These competitions have so far attracted more than 8,000 participants from all across the world. And to ensure global inclusion, ITU provides a free, state-of-the-art compute platform to participants who lack adequate access of their own. At the heart of our AI efforts stands the AI for Good platform, which is powered by the ITU and supported by 40 United Nations partners. It features our annual summit, as well as an extensive year-round program of online events. And you can always join them; I think we have around 650 webinars so far. And continuous community engagement through our Neural Network platform brings together more than 26,000 participants. Again, everyone is very welcome to join us as well. AI for Good started seven years ago as a solutions summit looking at how AI can help achieve the sustainable development goals and has now grown into a critical platform for discussing responsible AI development. This year's event, which we convened together with the Swiss Confederation, thanks to Thomas here in front, took place three weeks ago, as I said, alongside our WSIS+20 Forum High-Level Event.
And we attracted 8,000 participants, around 6,500 of them in person, and the remainder online. And this really was amazing, at least for the organizers, to see a queue circling around the CICG, the main conference center in Geneva, of people who really wanted to be part of the conversation. Several new initiatives launched this year aim to meet the AI challenges further. First of all, we launched the Unified Framework for AI Standards Development with our partners, the International Organization for Standardization, ISO, and the International Electrotechnical Commission, IEC, with whom we already work together under the banner of the World Standards Cooperation. And again, we agreed to work together and ensure the coordinated development of standards. As well, ITU, with a diverse range of organizations, including the Content Authenticity Initiative, the Coalition for Content Provenance and Authenticity, the IETF, IEC, ISO, and JPEG, agreed to set up a multi-stakeholder collaboration on global standards for AI watermarking, multimedia authenticity, and deepfake detection technologies, which is one of the biggest challenges today with AI. Our new AI for Good Impact Initiative will mobilize our diverse, active stakeholder community to share knowledge and assist developing countries. At the summit as well, UNDP and UNESCO have joined forces to support countries with AI readiness assessments. And I'm also thrilled about the new partnership with the United Nations University, UNU, to produce a flagship AI for Good report. It will help transform the knowledge and expertise within the AI for Good platform into a valuable resource for stakeholders. There is one more thing which I need to address before closing today, and it is AI's environmental and climate impact. May has actually become the 12th month in a row of beating temperature records. Not the records we want to beat. And you see, even in Lithuania, yesterday was probably a record heat day, you know? And of course today, you know, there's a really big contrast to that. As forest fires, droughts and floods rise in frequency and severity, the impact of greenhouse gas emissions is impossible to ignore. In this context, is AI part of the problem? Or will it be part of the solution? Importantly, digital technologies, such as AI, can help improve energy efficiency, optimize inventory management, enhance business operations and reduce emissions and e-waste for everyone. Research shows that AI can help mitigate around five to 10% of greenhouse gas emissions by 2030, roughly equivalent to the emissions of the European Union. AI technologies can also bolster eco-solutions and help protect biodiversity. For example, AI systems can detect and analyze subtle ecosystem changes and bolster conservation efforts. AI also provides startling insights on climate and weather patterns, helping us to understand the change we're facing and provide early warnings of disasters. But there are challenges too. The tech sector currently produces an estimated 1.7% of global greenhouse gas emissions, actually at least 1.7%. And AI takes a growing share in that, as well as a growing share in energy consumption from the data centers that are needed to power AI. Training a single model uses more electricity than 100 U.S.
homes consume in an entire year, and it is estimated that in two years, data centers supporting skyrocketing AI use could consume twice as much energy as Japan as a whole does today. AI is thirsty too, with as few as 10 prompts consuming as much as half a liter of water. So every time you send 10 prompts to ChatGPT, it's like drinking one small bottle of water. So last year, we mobilized partners worldwide in a call for Green Digital Action, asking tech companies worldwide to embrace emission targets consistent with the 1.5-degree limit for global warming. We're also asking the digital industry, including AI companies, to share their greenhouse gas emissions data openly. And Green Digital Action partners are working to ensure the sustainability standards are implemented in actual practice. We're encouraging the entire global tech industry to get on board, and encourage everyone to support Green Digital Action and help make AI part of the solution. So as the UN Secretary-General told the ITU Council last week, the pace of innovation is outpacing the capacity to regulate it. I believe, though, that in this race, pragmatism is key. The fast pace of tech development, compared to the relative slowness of developing international laws and institutions, in particular ones covering the whole world, with one notable exception being presented here today, only underscores the need to leverage existing instruments and governance structures, including those in the UN system. And I hope what has been presented today shows there is enough to leverage there. So ladies and gentlemen, let us work to harness the power of AI for all, to manage its risks, and to make sure that we do all of this together, all countries and all stakeholders, so that the age of AI becomes the age of prosperity, sustainability and inclusion, not the age of fear, anxiety and division. Thank you very much.
Moderator:
Thank you, Tomas. Now let me introduce, and please warmly welcome, our second keynote speaker, Mrs. Marija Pejčinović Burić, Secretary General of the Council of Europe. Marija, please come. Thank you.
Marija Pejčinović Burić:
Deputy Secretary General Lamanauskas, distinguished guests, ladies and gentlemen. It is a great pleasure to be back in Vilnius for EuroDIG 2024. The digital dimension of freedom is a priority of the Lithuanian Presidency of the Council of Europe's Committee of Ministers. So there is probably no better place to hold this event, or to explain the cutting-edge role that the Council of Europe is playing when it comes to public policy on internet governance. Our organization has always understood the importance of balancing innovation and regulation when it comes to new technologies. But in fact, it would be wrong to see these things as weighing against one another. Rather, they should move in tandem, ensuring that technology develops in a way that truly benefits our societies. This has been our approach with previous tech-related legal treaties. Our Budapest Convention on Cybercrime remains the international gold standard in its field, harmonizing national laws, improving investigations and increasing cross-border cooperation to tackle internet and computer-based crime. And our Convention 108 has ensured people's privacy and data protection for over four decades now. Both of these are open conventions, allowing countries from outside Europe to ratify them. And I know that our team taking part has explained how the recent upgrades of these treaties have ensured that they keep pace with ever-evolving technology, as well as outlining our pioneering work on addressing the human rights challenges brought by the metaverse, and sharing our guidance on how to counter disinformation on the internet. This is a subject of particular importance in a bumper election year, in which more than 4 billion people have the opportunity to vote worldwide, and where they should be able to cast their ballots based on accurate information. There is, however, a specific and positive development about which I also want to speak today. Last month, as was already mentioned, at our ministerial session in Strasbourg, the Council of Europe's foreign ministers adopted our new Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. This is the first international legally binding treaty in this area. It is designed specifically to allow AI technology to flourish, but to do so in a way that protects individuals' rights and does not undermine them. This means that AI systems should uphold the highest standards throughout their life cycles, applying principles that are technology neutral and therefore future proof, so that gaps created by rapid technological advances are closed and remain closed. These principles include human dignity and individual autonomy, equality and non-discrimination, protection of privacy and personal data, accountability and responsibility, transparency and oversight, and safe innovation and reliability. On top of this, the framework convention also sets out governments' obligations to provide accessible procedural safeguards and remedies, to help prevent AI systems, both in the private and the public sectors, from going off the rails and thereby breaching our common standards, and also to ensure justice where this does happen. This treaty has the potential to ensure safer, more secure artificial intelligence, not just in Europe, but around the world.
Because the framework convention, just like the Budapest Convention and Convention 108+, is an open convention, with the potential to help converge AI regulation not just throughout our continent, but among countries around the world that share our values and, of course, that want to be part of this process. Many of them are already well-versed in its content through our Committee on Artificial Intelligence, which did such good work in bringing this text together. And you have the conductor of this important work sitting in the first row: another Thomas, Thomas Schneider. I would like to pay special tribute and thank him for his excellent work. So, we had the input not only of our 46 member states, which is just normal for the Council of Europe, because they are member states, but also the insights of a diverse group of 11 observer states from around the world, plus the European Union. Our inclusive process has produced a strong text that also drew from the ideas and expertise of 68 non-state actors: respected academics, private businesses, and civil society organizations. This even-handed, big-tent, multi-stakeholder approach has delivered. Now, we move to the next stage. The Framework Convention will be open for signature on 5 September and, no surprise, here in Vilnius. And I hope that many countries will move swiftly to sign it, ratify it, and indeed bring it into force, so that as many citizens as possible gain from what it has to offer. But we are also aware that a transversal treaty like the Framework Convention alone is not enough. We need to ensure a comprehensive approach with binding and non-binding instruments that address sector-specific challenges, so that our common standards apply there too. We know, for example, that AI systems can include and even amplify bias. So we will undertake further urgent work on how to prevent these systems from entrenching structural discrimination, marginalization, and inequality. More than this, we should look for ways to ensure that AI does not merely avoid bias, but actively and positively promotes equality instead. So alongside the Framework Convention, we will develop new sectoral instruments designed to do just that. We will also shape tools to evaluate the implementation of the new convention, to be prepared by our Committee on Artificial Intelligence by the end of next year. Ladies and gentlemen, we know that artificial intelligence has the power to transform our societies. But while people often talk about this in terms of its impact on our personal and professional lives, it is equally true of our rights, as simple as that. At its best, innovation in AI can promote equality and uphold our standards of human rights, democracy, and the rule of law when these are challenged. The Council of Europe is determined to work with experts and others to ensure that this happens, so that AI innovates in the best possible way. Our new Framework Convention and future work plan reflect our determination to deliver on this from all angles, including internet governance. And finally, the future of technology remains ours to determine. Let us do so in the right way. Thank you very much.
Speakers
MP
Marija Pejčinović Burić
Speech speed
134 words per minute
Speech length
1241 words
Speech time
555 secs
Arguments
Regulation and innovation should move in tandem for the benefit of society.
Supporting facts:
- The Council of Europe balances innovation and regulation.
- Previous tech-related treaties like the Budapest Convention harmonized laws and improved international cooperation.
Topics: Technology Regulation, Innovation
The Council of Europe’s new framework convention on AI aims to protect individual rights.
Supporting facts:
- The convention ensures AI upholds standards such as human dignity, privacy, equality, and accountability.
- It is the first international legally binding treaty on AI.
Topics: Artificial Intelligence, Human Rights
International cooperation is crucial for effective internet governance and AI regulation.
Supporting facts:
- Input from member states, observer states, and non-state actors was included to create the AI framework.
- The convention is open for countries around the world to sign and ratify.
Topics: Internet Governance, International Cooperation
AI has the potential to transform societies and should promote equality and uphold human rights.
Supporting facts:
- AI innovation can actively promote equality.
- The Council of Europe is working to prevent AI systems from entrenching discrimination or inequality.
Topics: Artificial Intelligence, Equality, Human Rights
Report
The Council of Europe’s progressive framework convention on Artificial Intelligence epitomises the critical balance between fostering innovation and enforcing stringent regulation to safeguard core facets of human rights, such as human dignity, privacy, equality, and accountability. This convention is celebrated for its harmonious strategy that allows technological advancement to thrive in congruence with societal values and standards.
Capitalising on its legacy, the Council of Europe has enhanced international cooperation in technology regulation, drawing on lessons from established treaties like the influential Budapest Convention. This pioneering agreement has been crucial for harmonising laws and bolstering international cooperation, showcasing the positive interplay between legislative rigour and technological advancement.
The convention is aligned with key Sustainable Development Goals, specifically SDG9 (Industry, Innovation and Infrastructure), SDG10 (Reduced Inequalities), SDG16 (Peace, Justice and Strong Institutions), and SDG17 (Partnerships for the Goals). It is distinguished as the first legally binding international treaty to navigate the complexities of AI governance.
Its globally unified strategy underpins the Council’s vision of technology as a driver for socio-economic evolution, aiming to diminish disparities and fortify peace, justice, and institution integrity. By acknowledging AI’s transformative societal role, the convention stresses that such transformation should strive for equality and respect for human rights.
The Council of Europe actively seeks to prevent AI systems from exacerbating discrimination or inequality. The framework recognises that innovative pursuits in AI should be conducted in harmony with societal norms and ambitions, positioning AI as a catalyst for social innovation aimed at inclusivity and fairness.
The inclusive development of the convention is particularly noteworthy, incorporating input from member states, observer states, and non-state actors to craft a rounded and comprehensive treaty. The collaborative drafting process and the openness to international signatories demonstrate the crucial importance of global cooperation for effective internet governance and the perpetuation of AI regulation.
The Council of Europe’s approach establishes robust criteria for AI operation across its lifespan, committing to constructing sector-specific instruments to address the particularised challenges posed by AI. Setting such benchmarks, the convention paves the way for a future where technology acts not as a divisive force but as a cohesive agent promoting a more just, equitable society.
In summary, the Council of Europe’s framework convention reflects a far-sighted approach to managing the complex dynamics between AI and social construct. By intertwining innovation with human rights principles, the convention seeks to prepare the foundation for a future in which technology functions not just as a mere tool but as an elemental force shaping a world characterised by fairness, equality, and robust institutions.
M
Moderator
Speech speed
152 words per minute
Speech length
227 words
Speech time
90 secs
Arguments
Moderator expresses a jovial greeting and acknowledges possible tiredness from audience
Supporting facts:
- Mention of a party the previous night
- Mention of the morning coffee energy boost
Topics: EuroDIG participation, Event engagement
The main topic of the session is artificial intelligence
Topics: Artificial Intelligence, EuroDIG
Artificial Intelligence (AI) has become the hot topic in digital policy
Supporting facts:
- AI-powered disinformation, deepfakes have become a daily reality
- Global study found that about 45% of AI systems show gender bias
Topics: AI Governance, Digital Policy
AI presents risks and benefits
Supporting facts:
- AI can add an estimated 4.4 trillion USD to the global economy
- AI can help meet 70% of sustainable development goals targets
Topics: AI Risks, AI Benefits
Appropriate governance of AI is key
Supporting facts:
- Frameworks such as the Council of Europe’s Framework Convention on AI are important
- EU AI Act and other international regulatory measures signify steps toward appropriate governance
Topics: AI Governance, Policy Frameworks
AI divide needs to be addressed
Supporting facts:
- 2.6 billion people are unconnected
- Innovation and patents are concentrated in a few countries
- 85% of countries lack AI regulatory frameworks
Topics: AI Divide, Equity in Technology, Digital Inclusion
The UN system’s concerted efforts to harness AI responsibly
Supporting facts:
- UN Interagency Working Group on Artificial Intelligence was formed
- ITU’s challenges on AI and machine learning facilitate inclusive participation
- The AI for Good platform is a key initiative for responsible AI development
Topics: AI for Good, UN Initiatives
AI’s impact on the environment must be addressed
Supporting facts:
- AI could help mitigate up to 10% of greenhouse gas emissions by 2030
- The tech sector’s emissions are on the rise due to AI
Report
At the recent EuroDIG symposium, the discourse centred around the expansive impact of artificial intelligence (AI) on society, underscoring the need for robust governance to steer AI’s benefits towards achieving the Sustainable Development Goals (SDGs). The moderator initiated discussions with positive energy, playfully acknowledging the potential tiredness of attendees – possibly a result of the previous night’s festivities – and referenced the energising effect of morning coffee.
The comparison of AI’s transformative effects with historical technological breakthroughs, like the engine, underscored AI’s revolutionary potential. Digital policy deliberations highlighted AI’s omnipresence and the challenges it poses, such as the troubling proliferation of AI-powered disinformation and deepfakes, and inherent biases, evidenced by research indicating around 45% of AI systems exhibit gender bias.
Nonetheless, AI’s economic promise is vast, with the potential to add an estimated 4.4 trillion USD to the global economy and to advance 70% of the SDG targets. The ‘AI divide’ was a matter of concern, evidencing a technological disparity in which 2.6 billion people lack connectivity and innovation is concentrated in a few countries, prompting calls for enhanced digital inclusion and equitable technology development.
Furthermore, with 85% of countries deficient in AI regulations, the symposium emphasised the urgency for comprehensive digital policy frameworks. International governance initiatives like the EU AI Act and the Council of Europe’s Framework Convention on AI were highlighted at EuroDIG as exemplars for AI regulation.
Additionally, the UN’s efforts – notably the formation of the Interagency Working Group on Artificial Intelligence and the AI for Good platform – were celebrated for fostering responsible AI development and global cooperation. The symposium also discussed AI’s environmental implications, noting both its potential to reduce greenhouse gas emissions by up to 10% by 2030 and the tech sector’s growing carbon footprint due to AI.
This reflected the need for balanced AI integration in environmental management. The overarching consensus highlighted the aspiration for AI’s deployment to be foundational to prosperity, sustainability, and inclusion, contributing to advancements in industry, innovation, infrastructure, the reduction of inequalities, and the sustainability of cities and communities.
The expanded analysis from EuroDIG conveyed a narrative where AI holds significant positive potential, despite the associated challenges and risks. A recurring theme emphasised the imperative for global vigilance and concerted efforts to guide AI’s development in harmony with societal progress, advocating for an inclusive technological future.
TL
Tomas Lamanauskas
Speech speed
183 words per minute
Speech length
2818 words
Speech time
924 secs
Report
At a recent conference, the significance of artificial intelligence (AI) and its associated challenges, including the rise of AI-driven disinformation, the prevalence of deepfakes, and the apprehension surrounding its military applications, were thoroughly examined. These topics underscored the urgent need for strategic AI governance, especially considering the worrying gender bias within AI systems at a time of crucial global elections.
Despite these pressing concerns, the presenters encouraged a measured view, spotlighting AI’s potential to substantially enhance the global economy and advance a broad spectrum of sustainable development goals that encompass health, education, and environmental protection, with generative AI alone projected to contribute an estimated $4.4 trillion to the economy.
The discourse brought attention to the Council of Europe’s newly adopted AI Convention, adopted on the ITU’s anniversary, which, in concert with the EU AI Act, the US Executive Order on AI, and ongoing multinational collaborations, is poised to play an instrumental role in moulding future AI governance.
The conversation on worldwide governance emphasised the need for synchronised technological and regulatory frameworks and the development of ethical standards to prevent widening the ‘AI divide’. This disparity is exemplified by the 2.6 billion people globally who lack internet access—predominantly in developing countries—thereby excluding them from the AI revolution.
United Nations efforts in the arena of AI governance, notably the ITU’s AI for Good initiative, were explored in depth. This initiative reveals the broad capability of AI to back sustainable progress across UN agencies, highlighting the UN’s dedication to leveraging AI for the global good and stressing the importance of aiding developing countries.
The environmental impact of AI was also scrutinised. Although AI’s role in climate monitoring and disaster response illustrates its value, the symposium reflected on its contribution to increased greenhouse gas emissions and higher energy demands. International collaboration was shown to be essential for uniform action, with calls for transparent greenhouse gas emissions data and for the tech industry to commit to global warming objectives through renewable digital practices, representing a determined standpoint in addressing AI’s environmental footprint.
To sum up, the presenters made a plea for practicality in managing the swift advancement of AI innovations, which often outpace regulatory processes. By leveraging existing international instruments and frameworks, primarily within the UN system, the shared aim is to direct the AI revolution toward a reality marked by prosperity, sustainability, and inclusivity, rather than one overshadowed by trepidation, anxiety, and societal divides.
The summary of the discourse conveyed a strong consensus: AI’s potential must be navigated with caution to ensure it serves the public interest of all humanity fairly and equitably.