Hard power of AI

18 Jan 2024 17:30h - 18:15h

Event report

From diplomacy to defence, AI is markedly changing geopolitics. Shifts in data ownership and infrastructure will sideline some stakeholders while elevating others, reshaping sovereignty and influence.

How is the landscape evolving and what does it mean for the existing international architecture?

More info: WEF 2024.

Disclaimer: This is not an official record of the WEF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the WEF YouTube channel.

Full session report

Nick Clegg

The analysis comprises multiple arguments related to technology, politics, and AI. One argument suggests that the rapid pace of technological change surpasses the speed of political and regulatory debates. This creates a gap in understanding and effective governance, as technology advances far more rapidly than the discussions and legislation surrounding it.

On the other hand, the ongoing political, societal, and ethical discussions around generative AI are seen as a positive and healthy phenomenon. These debates are happening in parallel with the evolution of the technology, enabling stakeholders to actively engage in conversations about the implications and potential consequences of AI.

The analysis also emphasizes the importance of public access to AI technology. The exclusive control of AI by a few corporations is deemed unsustainable and impractical. Instead, AI access and control should be democratic and inclusive, allowing it to be available to all, including developing countries. This promotes a more equitable distribution of the benefits and opportunities offered by AI.

Furthermore, industry advocates support the enforcement of common standards on AI imaging. They argue that effective regulation in this area requires the ability to detect, respond to, and regulate AI-generated content. To achieve this, advocates propose common standards for identifying and watermarking images and videos produced by generative AI tools. These standards would enable effective regulation and accountability in AI imaging.

The analysis also cautions against the distortions caused by predictions about Artificial General Intelligence (AGI). The debate surrounding AGI often relies on hypothetical future dangers and lacks consensus on its precise definition. This highlights the need for a more grounded and realistic discussion about AGI, focusing on its current state and capabilities.

Additionally, it is argued that current AI models are not as advanced as widely assumed. These models are described as “stupid” and lacking advanced capabilities like reasoning, planning, and understanding the meaning of words they produce. This challenges the perception that AI has reached human-level intelligence and raises questions about the limitations of current AI technologies.

The concern about open-sourcing AI is argued to stem from a lack of understanding about AGI. The assumption of potential dangers associated with open-sourcing AI is based on uncertain and unpredictable future possibilities of AGI. This implies that a clearer understanding of AGI is necessary to dispel fears and foster responsible approaches to open-sourcing AI.

The argument is also reiterated that AI should not be exclusive to a few corporations but accessible to all, including the developing world. Democratized access and control, on this view, should not demand excessive resources, so that AI remains within reach of a wider range of individuals and organizations.

Furthermore, the analysis emphasizes the need for solutions to verify the authenticity of synthetic and hybrid content on the internet. Watermarking that identifies authentic and verified content has been discussed as a potential solution. This aims to address the increasing prevalence of synthetic and hybrid content and to ensure that its authenticity can be verified.

Moreover, AI is recognized as a highly effective tool in combating hate speech and harmful content. The use of AI has led to a significant reduction in hate speech on platforms like Facebook, with the prevalence now at 0.01%. This reduction is independently audited by EY, highlighting the reliability and effectiveness of AI in content moderation.

Nick Clegg, a key figure in the analysis, supports an advertising-financed business model that maintains equality and diversity of opinions online. It is argued that because platforms like Instagram, Facebook, and WhatsApp are free at the point of use, they are accessible to people of all economic backgrounds, promoting equality among users. Additionally, the commercial incentive for advertisers to avoid extreme or hateful content pushes platforms to keep such content in check.

Furthermore, Nick Clegg disagrees with the notion that the online world promotes a less diverse range of opinions compared to traditional media. He argues that fixed ideological viewpoints can be found in both cable news outlets and tabloid newspapers. In contrast, he suggests that the online world presents a broader selection of ideological and political input. Clegg refers to academic research indicating that polarization is often driven by old-fashioned, partisan media.

In conclusion, the analysis provides insights into the dynamic relationship between technology, politics, and AI. It highlights the disparity between the rapid pace of technological change and the relatively slower speed of political and regulatory debates. However, it also underscores the importance of ongoing discussions and debates surrounding AI, particularly in relation to generative AI, access to technology, AI imaging standards, and the ethics of AI development and use. The analysis raises important questions and considerations about the future of AI and the need for responsible and inclusive approaches to its governance and deployment.

Andrew R. Sorkin

The analysis explores different perspectives on the regulation of artificial intelligence (AI) and its potential impact. Andrew Sorkin questions the genuineness of the industry’s call for AI regulation, highlighting the unprecedented nature of this request. By contrast, Mustafa Suleyman supports regulation and emphasises the need for safeguards and measures to oversee its use.

The analysis also delves into concerns regarding deepfakes, which refer to digitally manipulated content that can misrepresent individuals. Sorkin expresses worry about how AI can give individuals the information-manipulation capabilities of nation states, and he cites examples of deepfake videos featuring politicians saying things they never said.

Furthermore, the analysis addresses the proliferation of fake media, particularly through the use of deepfakes in cryptocurrency and financial scams. Sorkin emphasises the importance of enabling the public to identify deepfakes and misinformation.

Another aspect discussed is the responsibility for educating the public about AI-related issues. The analysis asks who should carry that responsibility and raise awareness, given the potential risks of AI misuse.

The European Union (EU) is recognised for its leading role in AI regulation, having implemented clear rules. However, concerns are raised regarding the possible relationship between strict regulations and limited innovation. Sorkin questions whether stringent regulations hinder innovation in the technology sector.

The analysis also examines the responsibility for technology misuse, advocating for a clear distinction between technology creators and users. While Sorkin believes creators should incorporate safeguards into their creations, he argues that users should also be accountable, highlighting the need for clear delineation between the two.

Efforts are underway to establish safeguarding measures for AI, demonstrating recognition of the necessity for regulation and protection in the field. The disruptive potential of AI and the conflicts arising from differing views and opinions are also highlighted.

The analysis sheds light on the diminishing trust in elites, who are perceived, from platforms like Davos and Silicon Valley, to impose a specific worldview on the wider public.

Furthermore, the analysis observes that individuals tend to seek opinions that align with their preconceived notions despite exposure to diverse perspectives. This underscores the challenge of overcoming information bias and highlights people’s inclination to seek confirmation of their existing views.

Advertising is viewed positively as a means to democratise technology, providing access to individuals who may not be able to afford it otherwise. The business model of subsidising technology use through advertising is supported as a way to ensure inclusivity and broader accessibility.

To summarise, the analysis explores various viewpoints on AI regulation and its implications. It addresses concerns regarding deepfakes, fake media, responsibility for public education, the balance between regulation and innovation, safeguarding measures, declining trust in elites, information bias, and the democratization of technology through advertising. These insights contribute to a comprehensive understanding of the complex issues related to AI regulation.

Dmytro Kuleba

The impact of Artificial Intelligence (AI) on warfare is viewed negatively, as it has the potential to drastically influence the dynamics of conflict. The use of AI-powered drones, for example, can significantly increase artillery accuracy, thus amplifying the destructive capabilities of military operations. Additionally, the utilization of surveillance and striking drones has created an intensified battlefield, where AI plays a crucial role in gathering intelligence and carrying out attacks.

Moving on to global security concepts, AI is expected to bring about significant changes. It is anticipated that the advancement of AI technology could lead to a complete reset of diplomatic rules, potentially transforming the way nations interact with one another. Furthermore, AI has the ability to operate without the need for physical forces like fleets in distant regions, which could potentially revolutionize global security strategies and management.

The combined evolution of AI and quantum computing also raises concerns for global security. These emerging technologies have the potential to redefine security strategies and introduce new challenges. The integration of AI and quantum computing could enable unprecedented computational power, potentially disrupting traditional security systems and necessitating new approaches to managing potential threats.

Another significant observation is that AI may contribute to global polarization. As different nations adopt varying approaches to AI use, it is possible that two opposing camps may emerge, leading to increased tensions and conflicts in the international arena.

In terms of diplomacy, the impact of AI is expected to be transformative. With the introduction of AI, diplomacy could become either extremely boring or exciting, depending on its application. The use of AI-assisted diplomacy could streamline processes and improve efficiency, but it also raises concerns about the potential loss of human interaction and the personal touch often required in diplomatic negotiations.

Furthermore, the advent of new information technology, including AI, poses challenges akin to those faced during the invention of the printing press 500 years ago. The proliferation of information and opinions, despite increased accessibility, does not necessarily result in more informed choices. Individuals still tend to make unwise decisions despite having endless access to information.

AI’s potential to limit people’s exposure to different opinions also raises concerns about its impact on politics. With the advent of AI-driven assistants or chatbots, individuals may end up relying on a single AI-selected opinion, potentially reducing the diversity of perspectives and reinforcing echo chambers.

Another noteworthy observation is the prevalence of Russian propaganda on the internet, particularly concerning the conflict between Russia and Ukraine. Russia has been investing for decades in filling the internet with its propaganda, steadily saturating it with biased information. This presents a risk because the algorithms used by social media platforms may favor the majority viewpoint, swaying popular opinion towards the dominant narrative.

Finally, the negative sentiment towards AI and its impact on global security is partly rooted in concerns about the potential undermining of state sovereignty. Russia’s claim to Crimea, for instance, is seen as unjust and a violation of Ukraine’s sovereignty. The prevalence of Russian propaganda further exacerbates the situation, perpetuating false narratives and potentially undermining the legitimacy of other states.

In conclusion, AI’s impact on warfare, global security, diplomacy, information technology, and politics is multifaceted. While it brings advancements and efficiencies, it also presents challenges and potential risks. The negative sentiment surrounding AI stems from concerns about its potential for destructive military capabilities, its potential to polarize nations, its impact on diplomatic processes and decision-making, and its potential to distort online narratives and undermine state sovereignty. As AI continues to evolve, it is crucial to consider the ethical and societal implications alongside its technical advancements.

Jeremy Jurgens

The World Economic Forum is actively engaged in addressing global AI issues that transcend national boundaries. They acknowledge the wide-ranging impact of AI and are committed to managing its risks and leveraging its potential for the betterment of society. AI’s implications for industries, innovation, and infrastructure are considered, and the Forum recognizes the importance of industry collaboration and the need for effective governance to drive its responsible development.

In the context of AI governance, the World Economic Forum emphasizes the significance of looking beyond regulation alone. While regulatory frameworks play a crucial role, they believe that comprehensive governance should involve a broader perspective. This approach entails managing risks associated with AI while also identifying and taking advantage of the opportunities it presents. By adopting this holistic view, the Forum aims to ensure that AI governance is not limited to mere compliance but actively seeks to maximize the positive impact of AI on economies and societies worldwide.

Another essential focus area for the World Economic Forum is to foster equitable access to data and AI capabilities. They believe that inclusivity is integral to harnessing the full potential of AI for global advancement. Recognizing the existing disparities, the Forum is committed to working towards bridging the digital divide, especially in the Global South. Through their numerous national centers situated predominantly in the Global South, the Forum aims to enable citizen empowerment and improve lives worldwide by ensuring that AI benefits are accessible to all.

In summary, the World Economic Forum is dedicated to addressing global AI issues by acknowledging AI’s cross-border impact and actively engaging in initiatives that transcend national boundaries. They stress the need for comprehensive AI governance that goes beyond regulation, focusing on managing risks and unlocking opportunities. Additionally, the Forum is committed to promoting equitable and inclusive access to data and AI capabilities with the aim of improving the lives of citizens around the globe. Their efforts reflect a commitment to leveraging AI advancements for the collective benefit of society.

Leo Varadkar

Leo Varadkar expressed concerns about the increasing believability and prevalence of deepfakes, including AI-generated content using his image and voice to promote products and services. Varadkar emphasized the importance of effective detection of fake content, highlighting its transformative potential, and stressed the need for AI education and awareness within societies to adapt to this new technology.

Drawing on his personal experience as a doctor, Varadkar discussed the potential of AI to revolutionize healthcare, seeing AI as a powerful tool for advancements in the field. The discussion also touched on the balance between the benefits and dangers of AI, with Varadkar and others arguing that the advantages outweigh the risks. They compared the transformative potential of AI to that of the internet and the printing press.

The responsibility of social media platforms to remove fake content quickly was emphasized, as was the potential of AI to transform jobs rather than eliminate them. Lifelong education was seen as essential for adapting to changing job landscapes.

The discussion also highlighted the need for tools to address the misuse of AI, with the primary responsibility lying with the individuals misusing the technology. Trust restoration was a key concern, with a focus on the importance of trusted sources and the potential shift in political communication due to AI. The establishment of international treaties and agencies to control AI risks was also discussed, and the EU was recognized as a leader in AI regulation.

Overall, the discussions emphasized the multifaceted impact of AI on society and the importance of education, responsible implementation, and international cooperation in navigating the AI era.

Audience

In the analysis, the speakers address several important aspects related to artificial intelligence (AI) and raise a range of concerns regarding its regulation and potential risks.

One of the main concerns raised is the effective control of AI risks. The speakers question how these risks can be adequately managed and controlled, highlighting the need for robust regulations in place. This reflects a neutral sentiment towards the topic, suggesting a balanced viewpoint that recognizes the potential benefits of AI while also acknowledging the importance of addressing its risks.

Another key point is the need for global inclusion in AI discussions. The speakers emphasize the importance of involving both the Global North and South in conversations about AI and emerging technologies. They mention the United Nations panel as a potential platform for fostering inclusive dialogue. This aligns with the objective of reducing inequalities, as outlined in Sustainable Development Goal 10. The sentiment towards this issue is also neutral, indicating a recognition of the need for inclusivity.

The analysis also questions the impact of the EU AI Act in promoting European champions in AI. The speakers express uncertainty about whether this regulatory framework would be effective in fostering European leadership and competitiveness in the field of AI. The sentiment remains neutral, underscoring the speakers’ concerns and the need for further evaluation of the EU AI Act.

Furthermore, the analysis highlights the importance of considering AI capabilities as a metric. The speakers argue that assessing artificial intelligence based on its capabilities, rather than solely on its potential risks, should be a central factor in evaluating and regulating AI systems. Concrete examples are provided, such as the ability of AI agents to multiply investment in a short timeframe. This neutral sentiment suggests an objective assessment of AI capabilities and their relevance in decision-making processes.

Additionally, there is interest in understanding when AI could pass the proposed metric test. The speakers express curiosity about the timeline for AI to achieve benchmarks such as making a million dollars from $100,000 in just a few months. This sentiment reflects a neutral stance, indicating a desire for further exploration of AI’s progression and potential accomplishments.

Lastly, there is concern about the need for regulation for advanced AI systems. The speakers highlight the importance of considering the implications of such advanced AI and emphasize the necessity of regulatory measures to ensure ethical and responsible development. This negative sentiment suggests a sense of urgency and a call for proactive steps to address the challenges posed by advanced AI.

Overall, the analysis provides an insightful examination of the various concerns and perspectives related to the risks, regulation, and global inclusion in AI discussions. It presents a balanced viewpoint, acknowledging both the potential benefits and risks of AI, while also highlighting areas of uncertainty and the need for robust regulatory frameworks.

Karoline Edtstadler

The European Union (EU) has taken the lead in addressing the risks associated with Artificial Intelligence (AI) by categorising them through the AI Act. This historic move makes the EU the first institution to proactively regulate AI. The EU aims to strike a balance between regulating AI and ensuring human oversight without stifling innovation.

Karoline Edtstadler, a proponent for AI regulation, stresses the significance of educating people about the potential risks of AI misuse. She specifically highlights the danger of deepfake technology, which can manipulate public opinion by creating realistic but false videos. Edtstadler advocates for public awareness and the implementation of innovative techniques to filter deepfake content.

Moreover, Edtstadler emphasises the importance of diversifying opinions on the internet to avoid falling into echo chambers. Algorithms on social media platforms often reinforce users’ existing beliefs, creating an environment where alternative perspectives are not easily accessible. She encourages individuals to actively seek out diverse viewpoints and not underestimate the influence of these algorithms.

AI has the potential to revolutionise various sectors, particularly healthcare. By leveraging AI technologies, accurate and efficient disease diagnosis can be achieved, leading to improved patient outcomes. This demonstrates how AI can positively impact the field of healthcare and contribute to the goal of achieving good health and well-being.

However, it is important to recognise that internet connectivity remains a significant hurdle, particularly in the global South. Many individuals around the world still lack access to the internet, creating a digital divide. Efforts must be made to address this issue and bridge the gap to ensure equal access to opportunities and information.

In order to foster innovation and economic growth, the EU should focus on creating a more attractive environment for startups. This entails removing obstacles in the single market and embracing a trial-and-error approach to innovation. Success stories from the United States and Israel serve as evidence of the positive outcomes that can be achieved when a supportive ecosystem for startups is established.

In conclusion, the EU’s proactive stance in categorising the risks of AI through the AI Act reflects their commitment to beneficially regulate this powerful technology. Edtstadler’s advocacy for AI regulation and public education about potential risks highlights the need for responsible AI deployment. The significance of diverse opinions, the positive impact of AI in healthcare, the digital divide, and the importance of supporting startups also emerge as essential considerations in shaping a responsible and inclusive future.

Mustafa Suleyman

The tech industry is calling for the regulation of Artificial Intelligence (AI) because it sees a genuine need for it. Industry figures believe that the AI transformation will have significant impacts on culture, politics, and technology, since AI can process and distribute vast amounts of information at a far lower cost than traditional methods, with potentially destabilising effects.

AI also has the potential to create conflicts as differing views empowered by AI clash. However, the rise of Artificial General Intelligence (AGI) is not expected to happen immediately, making it hard to speculate on its consequences.

In Ukraine, open-source technology plays a major role in resistance movements, particularly for targeting, surveillance, and image classification. Open-source platforms enable widespread participation and collaboration.

General purpose technologies, as they become useful, tend to become cheaper, more accessible, and widely spread. This trend can be observed across various industries.

As technology becomes more available and affordable, the power to take action will be distributed more widely. However, this expansion must be managed carefully, as its implications can be positive or negative depending on intentions.

Mustafa Suleyman believes that his closed-source, commercial approach to building AI models is superior to relying on open-source models. He argues that his models are objectively better, emphasizing innovation and development.

There is an ongoing debate surrounding the definition of intelligence, with some considering it a “distraction” in discussions about AI. Instead, the focus should be on discussing capabilities rather than fixating on the vague concept of intelligence.

A risk-based approach specific to different sectors is considered vital for AI governance and regulation. This approach involves identifying and addressing potential risks associated with AI technologies in a measurable way.

The idea of autonomy in technology is concerning, as it can be dangerous and may require regulation for responsible and ethical use. Efforts are underway to develop and implement safeguards to address the misuse of AI technologies.

While laptops, cellphones, and other technologies have clear benefits, they can also be misused by bad actors. It is challenging to prevent individuals from misusing these technologies, as demonstrated by difficulties in combating cyber crimes and ensuring online safety.

To avoid serious misuse of AI, certain capabilities should be restricted. For example, preventing AI from being used to manufacture harmful items like bioweapons or bombs is a priority that should be addressed through regulation and policies.

The rise of chatbots and conversational AI marks the beginning of a new era of technology. Their development and increased capabilities have the potential to revolutionise human-computer interactions and services.

As AI becomes more integrated into society, it is expected to reflect the values and interests of various organizations and individuals. The impact of AI on trust formation and erosion is a complex phenomenon. While conversational AI can create trust by providing accurate and personalised content in real-time, it can also undermine trust if the underlying business model prioritises advertisers over user interests.

The trustworthiness of AI is closely tied to its business model. If the business model relies on selling ads, it may prioritize the interests of advertisers over users. On the other hand, if the business model is transparent and aligned with users’ interests, trust in the AI system is more likely to be established.

The traditional Turing test, established over 70 years ago, is deemed inadequate in evaluating modern AI capabilities. Mustafa Suleyman proposes a modern Turing test that assesses an AI’s entrepreneurial capabilities, such as project management, invention, and marketing. This approach aims to evaluate AI’s potential impact on the economy and its ability to drive innovation.

Predictions suggest that AI will possess entrepreneurial capabilities and become widely available at low costs, which will have radical implications for the economy. With AI’s increasing ability to pass the Turing test, it is expected that these capabilities will become widespread within the next five years or by the end of the decade.

Andrew R. Sorkin
Speech speed: 205 words per minute; speech length: 1882 words; speech time: 550 seconds

Audience
Speech speed: 186 words per minute; speech length: 369 words; speech time: 119 seconds

Dmytro Kuleba
Speech speed: 155 words per minute; speech length: 1066 words; speech time: 412 seconds

Jeremy Jurgens
Speech speed: 207 words per minute; speech length: 193 words; speech time: 56 seconds

Karoline Edtstadler
Speech speed: 202 words per minute; speech length: 1490 words; speech time: 443 seconds

Leo Varadkar
Speech speed: 196 words per minute; speech length: 1115 words; speech time: 342 seconds

Mustafa Suleyman
Speech speed: 211 words per minute; speech length: 1322 words; speech time: 376 seconds

Nick Clegg
Speech speed: 205 words per minute; speech length: 1526 words; speech time: 446 seconds