AI and Digital @ WEF 2024 in Davos
Session reports
Traditionally, AI and digital play a prominent role at the WEF Annual Meeting in Davos-Klosters. This year, WEF will take place on 15-19 January under the main theme ‘Rebuilding Trust’. WEF discussions will be centred around four themes:
- Achieving Security and Cooperation in a Fractured World
- Creating Growth and Jobs for a New Era
- Artificial Intelligence as a Driving Force for the Economy and Society
- A Long-Term Strategy for Climate, Nature, and Energy
Digital Watch and Diplo will report from all publicly broadcast sessions on AI and digital technologies.
For more information about the meeting, please visit the dedicated web page.
AI at WEF
The World Economic Forum’s annual meeting in Davos has, since 1971, been a melting pot of global discussions, covering topics ranging from diplomacy and geopolitics to technology. However, in 2024, one digital theme stood out like never before—AI. The rise of AI has captured the attention of world leaders, turning the Swiss Alpine town into a hub for discussions that, for many, appeared to revolve exclusively around AI.
At Diplo, with the assistance of our AI tool, we tracked 49 sessions related to AI and digital. Interestingly, our data showed that positive arguments overwhelmingly prevailed in the discussions, which also aligns with the conference’s overall theme, ‘Rebuilding Trust.’
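How such a tally is produced is not detailed here, but the core step, classifying each extracted argument as positive, negative, or neutral and counting the results per session, can be sketched in a few lines. The snippet below is a minimal illustration only: it assumes the arguments have already been extracted as plain text, and the zero-shot model and label set are generic placeholders rather than Diplo’s actual tooling.

```python
# Illustrative sketch only: classify extracted arguments as positive/negative/neutral
# and tally them per session. Model and labels are placeholders, not Diplo's tooling.
from collections import Counter
from transformers import pipeline  # assumes the 'transformers' package is installed

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
LABELS = ["positive", "negative", "neutral"]

def tally_session(arguments: list[str]) -> Counter:
    """Return counts of positive/negative/neutral stances for one session."""
    counts = Counter()
    for arg in arguments:
        result = classifier(arg, candidate_labels=LABELS)
        counts[result["labels"][0]] += 1  # top-scoring label wins
    return counts

# Toy usage with two hand-written arguments
print(tally_session([
    "Open-source models will broaden access to AI for the Global South.",
    "Generative AI will mostly help attackers in cyberspace.",
]))
```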
We distilled AI-related discussions into a cohesive form, encapsulating the essence through the lens of 7 key questions.
1. Can trust in AI be established?
Can AI be trusted with important tasks like driving, writing, or medical forms? Trust in AI is a major concern for many people. The growing sophistication of AI often results in a perceived ‘black box’ effect, where the inner workings are opaque, leading to scepticism and mistrust.
Building trust in AI is closely related to understanding how it works. An argument suggests that if people can comprehend the underlying mechanisms of AI, they may be more inclined to trust it.
Strengthening rules, certifications, and validations in the AI industry is essential for consumer safety and trust. AI systems should be designed with the understanding that flaws or harmful elements may exist, similar to the zero-trust approach in cybersecurity.
On the other hand, another viewpoint highlights two elements that can build trust in large language model (LLM) technology: the models’ factual accuracy (IQ) and the emotional connection (EQ) users form with them. The extent to which the models are factually correct can be formally measured, while emotional connection plays a significant role in decision-making.
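What ‘formally measured’ means in practice is usually scoring a model’s answers against a set of reference facts. The sketch below is a minimal, hypothetical example of such an accuracy check; the reference questions and the stand-in answer function are illustrative, and real evaluations rely on much larger benchmarks and more forgiving matching.

```python
# Minimal sketch of measuring an LLM's factual accuracy ("IQ") against references.
# The reference pairs and the answer_fn are hypothetical placeholders.
def factual_accuracy(answer_fn, references: list[tuple[str, str]]) -> float:
    """Fraction of questions whose answer contains the expected fact (case-insensitive)."""
    hits = 0
    for question, expected in references:
        answer = answer_fn(question)
        if expected.lower() in answer.lower():
            hits += 1
    return hits / len(references)

REFERENCES = [
    ("In which country is Davos located?", "Switzerland"),
    ("In which year did the WEF first meet in Davos?", "1971"),
]

# Plug in any model call here; a canned function stands in for an LLM.
print(factual_accuracy(lambda q: "Davos is in Switzerland, host of the WEF.", REFERENCES))
```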
Education and awareness are key components in cultivating trust in AI. Failure to educate citizens about AI hampers trust-building efforts. The greatest risk is doing nothing and assuming that the traditional education system is still relevant. Schools should modernise the curriculum to include relevant subjects such as computer science, AI, cybersecurity, and robotics. Educators can use AI technologies to teach advanced skills and content, speeding up the catch-up process.
2. Why is it important to open-source AI?
Today’s foundational tech architectures, including semiconductors and cloud-based technologies, are closed. Three major cloud computing companies continue to dominate the market, while Nvidia dominates cutting-edge AI chip design and TSMC the advanced fabs that manufacture those chips. This situation may disadvantage smaller countries and companies and impede the democratisation of the tech industry.
Access to and control of AI should not be exclusive to a few corporations but open to all, including the developing world. This is why it is important to emphasise open-source models: models built by particular companies may serve only some applications and may not reflect the full range of cultures, languages, values, and interests. Open-source models accommodate new ideas and data modalities, enabling the community to build upon a strong foundation. If open-source LLMs are made available, they can promote inclusivity and reduce inequalities between the Global South and the Global North in the AI domain.
However, there is a push towards regulatory proposals that could impose burdensome requirements on open source. The assumption of potential dangers associated with open-sourcing AI may stem from concerns about AGI’s uncertain and unpredictable future possibilities. This implies that a clearer understanding of AGI is necessary to dispel fears and foster responsible approaches to open-sourcing AI.
3. Is AI the great equaliser or the great divider?
In the Global West, where around 90% of the population has internet access, the main question is how to leverage AI’s full potential. But for the 2.6 billion people, primarily in the Global South, who lack an internet connection, the question is how to access AI at all.
Most countries in the Global South still need to build the foundations required to fully harness AI. Notable progress has been made in countries like Rwanda, India, and Bangladesh, but many others still lack those essentials. For instance, not a single African country ranks in the top 50 for AI research output.
Despite hopes that AI could be the great equaliser or a key player in achieving the SDGs, current trends suggest otherwise. AI on its own appears insufficient to significantly change the trajectory towards fulfilling these goals. Instead, the Global South needs to invest in human resources and the digital economy, and prioritise digital public infrastructure and education, as these offer more promising returns than focusing solely on AI.
4. Does AI create artificial truth?
The assumption that increased exposure to information and diverse opinions inevitably leads to more informed choices has been disproven. The rise of AI-driven assistants and chatbots introduces a potential downside, as users might lean towards relying on a single AI-selected opinion, possibly diminishing the diversity of perspectives and perpetuating echo chambers. This era is now characterised as the ‘era of artificial truth,’ in which AI significantly impacts and reshapes the information ecosystem.
A particular area of concern is the role of technology and AI in elections, where these advancements have facilitated the dissemination of misinformation and enabled targeted messaging to voters. Tech companies, including search engines and social media platforms, bear a crucial responsibility in ensuring fair elections by implementing content policies that curb the use of their platforms for mass political targeting campaigns. OpenAI has recently announced its policies on this matter. At the same time, imposing strict rules on AI to combat misinformation could have adverse effects, leading to increased censorship and tighter control over access to information.
In addressing the challenge of misinformation, academics play a critical role by studying the issue and developing effective interventions, including techniques like fact-checking and labelling. Recognising that bias exists even without AI, educating people to discern between facts and misinformation is imperative.
5. What’s the impact of AI on jobs?
The World Economic Forum found that AI and LLMs could impact around 40% of tasks across 800 jobs. While AI holds transformative potential, there is concern about job losses in finance and law. The focus is on fostering digital skills for jobs and envisioning a future where everyone can access AI capabilities, with an emphasis on decision-making and curation.
While acknowledging the disruptive potential, leveraging AI-driven infrastructure and collaboration through shared data emerge as essential strategies for addressing global business challenges. The adoption of technologies like AI and blockchain proves effective during crises. AI facilitates predictions through data analysis, while blockchain ensures secure and transparent transactions in trade finance.
Furthermore, the use of AI and machine learning in fraud prevention is a significant innovation. Visa’s use of AI for rapid decisions on transaction validity showcases the technology’s potential in combating fraud. Government regulation and commercial competition in managing data transparency in AI are imperative, as is the need for accuracy, verifiable data, and regulatory frameworks. Embracing digital technologies and automation is seen as a solution to drive productivity and mitigate labour shortages, particularly in manufacturing processes. AI’s role in accurate forecasting, material development, and innovation opens opportunities for advancement across various industries.
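Transaction screening of the kind described above typically scores each payment against patterns learned from past activity and flags outliers for review. The sketch below illustrates that general approach with a generic anomaly detector and made-up features; it is not Visa’s system, and the fields, data, and thresholds are purely illustrative.

```python
# Illustrative anomaly-based transaction screening; features and data are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [amount_usd, seconds_since_last_txn, merchant_risk_score]
history = np.array([
    [25.0,  3600, 0.1],
    [40.0,  7200, 0.2],
    [12.5,  1800, 0.1],
    [60.0, 10800, 0.3],
] * 50)  # repeat to mimic a larger transaction history

detector = IsolationForest(contamination=0.01, random_state=0).fit(history)

incoming = np.array([
    [30.0, 5400, 0.2],   # looks ordinary
    [9800.0, 5, 0.9],    # large amount, rapid-fire, risky merchant
])
# predict() returns 1 for inliers and -1 for anomalies
for txn, verdict in zip(incoming, detector.predict(incoming)):
    print(txn, "flag for review" if verdict == -1 else "approve")
```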
6. Does AI help cyberdefenders or cyberattackers?
The integration of AI in cybersecurity has become a double-edged sword for organisations worldwide.
Organisations are worried about AI being used to launch more cyberattacks. Criminals are leveraging AI to commit crimes at a larger scale, with greater sophistication, and at a faster speed. They offer malicious services such as denial of service attacks, phishing emails, and deepfakes. The availability of AI-as-a-service for criminals further amplifies these threats. This trend raises the spectre of AI fueling an arms race in cybersecurity, potentially tipping the advantage towards attackers in asymmetric cyberwarfare.
Yet the potential of AI to build cyber resilience was highlighted. AI can enhance efficiency and response times in security operations. Moreover, AI has the potential to make cybersecurity more accessible and cost-effective for governments and organisations alike. Leveraging AI technologies aids in identifying and apprehending cybercriminals, ensuring fair and just legal processes. Law enforcement agencies must adapt and equip themselves with AI technologies to combat cybercrime effectively.
However, fewer than one in 10 respondents believe that generative AI will give an advantage to defenders. This finding suggests that AI, in this context, is perceived as adding to the overall threats faced in cybersecurity rather than providing a solution.
7. Did AI governance discussions move beyond clichés?
The discussions on AI governance have been predictably filled with clichés such as those emphasising gaps in existing rules, the necessity for global regulation, and the importance of striking a balance between regulations and innovation.
Notably absent from the discussions was any critical examination of the root causes behind society’s unpreparedness for the so-called sudden rise of AI, another convenient cliché used perhaps to absolve responsibility under the camouflage of unforeseeability.
For now, one of the largest societal questions is defining values, defaults, and boundaries for AI. What was needed were concrete suggestions rather than calls on other nations to mimic Western regulatory approaches. People from the Global South should not only be consumers of the technology but should also have a say in the future trajectory of the regulations. Expanding on this, we should explore how to ensure that values from the Global South are part of these considerations.
‘When you look at the [EU’s] AI Act and the Chinese measures, you see on one side the voice of Aristotle, and on the other, the voice of Confucius—long different philosophical traditions that manifest themselves in how governments manage societies.’
A divergence in AI regulations could enable us to make well-informed decisions about what would be most suitable in the future.
WEF in numbers
Event statistics
- Total session reports: 49
- Unique speakers: 221
- Total speeches: 279
- Total time: 2,136.77 min (7 days, 2 hours, 17 minutes, 56 seconds)
- Total length: 377,842 words (0.64 ‘War and Peace’ books)
- Total arguments: 3,425
- Total positive arguments: 1,975
- Total negative arguments: 674
- Total neutral arguments: 776
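As a quick sanity check on the figures above, the argument counts add up to the reported total, and dividing total words by total time gives the average speaking pace across all sessions.

```python
# Sanity-check the dashboard figures reported above.
positive, negative, neutral = 1975, 674, 776
assert positive + negative + neutral == 3425  # matches the reported total

total_words = 377_842
total_minutes = 2_136.77
print(f"Average pace: {total_words / total_minutes:.1f} words per minute")  # about 176.8 wpm
```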
Prominent Sessions
Explore sessions that stand out as leaders in specific categories. Click on links to visit full session report pages.
Fastest speakers @ WEF2024
1. Chris Miller: 254.6 words per minute
2. Nicholas Thompson: 240.6 words per minute
3. Francine Lacqua: 236.9 words per minute
4. Andrew Ng: 236.05 words per minute
Longest time speakers
1. Jovan Kurbalija: 47 min 41 sec
2. Jose Anson: 42 min 42 sec
3. Katrin Kuhlmann: 36 min 51 sec
Most used prefixes during WEF
- AI: 1,410 mentions. Session with the most mentions: ‘AI: The Great Equaliser?’ (112 mentions)
- Digital: 502 mentions. Session with the most mentions: ‘Thinking Big on Digital Inclusion’ (39 mentions)
- Cyber: 268 mentions. Session with the most mentions: ‘Open Forum: Cracking the Code’ (100 mentions)
- Future: 226 mentions. Session with the most mentions: ‘Open Forum: Cracking the Code’ (29 mentions)
- Tech: 206 mentions. Session with the most mentions: