
IGF 2023 – Daily 4


IGF Daily Summary for

Wednesday, 11 October 2023

Dear reader, 

The third day always brings a peak in the IGF dynamics, as happened yesterday in Kyoto. The buzz in the corridors, bilateral meetings, and dozens of workshops brought into focus the small and large ‘elephants in the room’. One of these was the future of the IGF in the context of the fast-changing AI and digital governance landscape.

What will the future role of the IGF be? Can the IGF fulfil the demand for more robust AI governance? What will the position of the IGF be in the context of the architecture proposed by the Global Digital Compact, to be adopted in 2024?

These and other questions were addressed in two sessions yesterday. Formally speaking, decisions about the future of the IGF will most likely be made in 2025. The main policy dilemma will concern the role of the IGF in the context of the Global Digital Compact, which will be adopted in 2024.

While governance frameworks featured prominently in the debates, a few IGF discussions dived deeper into the specificities of AI governance. 

Yesterday’s sessions provided intriguing reflections and insights on cybersecurity, digital technologies and the environment, human rights online, disinformation, and much more, as you can read below.

You can also read how we did our reporting from IGF2023. Next week, Diplo’s AI and team of experts will provide an overall report with the gist of the debates and many useful (and interesting) statistics.

Stay tuned!

The Digital Watch team

A rapporteur writes a report on a laptop while observing a dynamic panel discussion in front of a projection screen.

Do you like what you’re reading? Bookmark us at https://wp.dig.watch/igf2023 and tweet us @DigWatchWorld

Have you heard something new during the discussions that we’ve missed? Send us your suggestions at digitalwatch@diplomacy.edu


Highlights from yesterday’s sessions
Kinkaku-ji Temple in Kyoto. Credit: Sasa VK

The day’s top picks

  • The future of the IGF
  • Ethical principles for the use of AI in cybersecurity
  • Inclusion (every kind of inclusion)

Digital Governance Processes

What is the future of the IGF? 

It may seem a counterintuitive question, given the success of IGF2023 in Kyoto. But continuous revisiting of the purpose of the IGF is built into its foundations. The next review of the future of the IGF will most likely happen in 2025, on the occasion of the 20th anniversary of the World Summit on the Information Society (WSIS), where the decision to establish the IGF was made.

In this context, over the last few days in Kyoto, the future of the IGF has featured highly in corridors, bilateral meetings, and yesterday’s sessions. One of the main questions has been what the future position of the IGF will be in the context of the Global Digital Compact (GDC), to be adopted during the Summit of the Future in September 2024. For instance, what will the role of the IGF be if the GDC establishes a Digital Cooperation Forum, as suggested in the UN Secretary-General’s policy brief?

Debates in Kyoto reflected the view that fast developments, especially in the realm of AI, require more robust AI and digital governance. Many in the IGF community argue for a prominent role for the IGF in the emerging governance architecture. For example, the IGF Leadership Panel believes that it is the IGF that should participate in overseeing the implementation of the GDC. Creating a new forum would incur significant costs in finances, time, and effort. There is also a view that the IGF should be refined, improved, and adapted to the rapidly changing landscape of AI and broader digital developments in order to, among other things, involve missing communities in current IGF debates. This view is supported by the IGF’s capacity to change and evolve, as it has done since its inception in 2006.

Banners announcing the 18th Annual Meeting of the Internet Governance Forum hang from the ceiling of a walkway at the IGF2023 venue.

The Digital Watch and Diplo will follow the debate on the future of the IGF in the context of the GDC negotiations and the WSIS+20 Review Process.


AI

AI and governance

AI will be a critical segment of the emerging digital governance architecture. In the Evolving AI, evolving governance: From principles to action session, we learned that we could benefit from two things. First, we need a balanced mix of voluntary standards and legal frameworks for AI. It’s not about just treating AI as a tool, but regulating it based on its real-world use. Second, we need a bottom-up approach to global AI governance, integrating input from diverse stakeholders and factoring in geopolitical contexts. IEEE and its 400,000 members were applauded for their bottom-up engagement with regulatory bodies to develop socio-technical standards beyond technology specifications. The UK’s Online Safety Bill, complemented by an IEEE standard on age-appropriate design, is one encouraging example.

The open forum discussed one international initiative specifically – the Global Partnership on Artificial Intelligence (GPAI). The GPAI operates via a multi-tiered governance structure, ensuring decisions are made collectively, through a spectrum of perspectives. It currently boasts 29 member states, and others like Peru and Slovenia are looking to join. At the end of the year, India will be taking over the GPAI chair from Japan and plans to focus on bridging the gender gap in AI. It’s all about inclusion, from gender and linguistic diversity to educational programmes to teach AI-related skills. 

AI and cybersecurity

AI could introduce more uncertainty into the security landscape. For instance, malicious actors might use AI to facilitate more convincing social engineering attacks, like spear-phishing, which can deceive even vigilant users. AI is also making it easier to develop bioweapons and to proliferate autonomous weapons, raising concerns about modern warfare. National security strategies might shift towards preemptive strikes, as commanders fear that failure to strike the right balance between ethical criteria and a swift military response could put them at a disadvantage in combat.

On the flip side, AI can play a role in safeguarding critical infrastructure and sensitive data. AI has proven to be a valuable tool in preventing, detecting, and responding to child safety issues, by assisting in age verification and disrupting suspicious behaviours and patterns that may indicate child exploitation. AI could be a game-changer in reducing harm to civilians during conflicts: It could reduce the likelihood of strikes hitting civilians by identifying targets and directing strikes more accurately, thus enhancing precision and protecting humanitarian elements in military operations. One of yesterday’s sessions, Ethical principles for the use of AI in cybersecurity, highlighted the need for robust ethical and regulatory guidelines in the development and deployment of AI systems in the cybersecurity domain. Transparency, safety, human control, privacy, and defence against cyberattacks were identified as key ethical principles in AI cybersecurity. The session also argued that existing national cybercrime legislation could cover attacks using AI without requiring AI-specific regulation.

Diplo’s Anastasiya Kazakova at the workshop: Ethical principles for the use of AI in cybersecurity.

The question going forward is: Do we need separate AI guidelines specifically designed for the military? The workshop on AI and Emerging and Disruptive Technologies in warfare called for the development of a comprehensive global ethical framework led by the UN. Currently, different nations have their own frameworks for the ethical use of AI in defence, but the need for a unified approach and compliance through intergovernmental processes persists.

The military context often presents unique challenges and ethical dilemmas, and the first attempts at guidelines for the military are those from the REAIM Summit and the UN Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons.


Cybersecurity

Global negotiations for a UN cybercrime convention

Instruments and tools to combat cybercrime were high on the agenda. The negotiations on a possible UN cybercrime convention in the Ad Hoc Committee (AHC) are nearing their end, yet many open issues remain. While the mandate is clearly limited to cybercrime (broader mandate proposals, like the regulation of ISPs, were removed from the text), there is a need to precisely define the scope of the treaty. There is no commonly agreed-upon definition of cybercrime yet, and a focus on well-defined crimes that are universally understood across jurisdictions might be needed.

There are calls to distinguish between cyber-dependent serious crimes (those that depend on a cyber element for their execution), like terrorist attacks using autonomous cyberweapons, and cyber-enabled actions (those traditionally carried out in other environments, but now also possible with the use of computers), like online speech that may harm human rights. The treaty should also address safe havens for cybercriminals, since certain countries turn a blind eye to cybercrime within their borders, whether due to their limited capacity to combat it or to political or other incentives to ignore it.

Another major stumbling block in the negotiations is how to introduce clear safeguards for human rights and privacy. Concerns are present over the potential misuse of the provision related to online content by authoritarian countries to prosecute activists, journalists, and political opponents. Yet the very decision-making process for adopting the convention – which requires unanimous consensus, or, alternatively, a two-thirds majority vote – makes it unlikely that any provision curtailing human rights will be included in the final text.

The current draft includes explicit references to human rights and thus goes far beyond existing crime treaties (e.g. UNTOC and UNCAC). A highly regarded example of an instrument that safeguards human rights is the Cybercrime Convention (known as the Budapest Convention) of the Council of Europe, which requires parties to uphold the principles of the rule of law and human rights; in practice, judicial authorities effectively oversee the work of the law enforcement authorities (LEA).

One possible safeguard to mitigate the risks of misuse of the convention is the principle of dual criminality, which is crucial for evidence sharing and cooperation in serious crimes. The requirement of dual criminality for electronic evidence sharing is still under discussion in the AHC.

Other concerns related to the negotiations on the new cybercrime convention include the information-sharing provisions (whether voluntary or compulsory), how chapters in the convention will interact with each other, and how the agreed text will manage to overcome jurisdictional challenges to avoid conflicting interpretations of the treaty. Discussions about the means and timing of information sharing about cybersecurity vulnerabilities, as well as reporting and disclosure, are ongoing.

By most accounts, a more robust capacity-building chapter and provisions for technical assistance are needed. Among other things, those provisions should enable collaborative capacities across jurisdictions and relationships with law enforcement agencies. The capacity-building initiative of the Council of Europe under the Budapest Convention can serve as an example (e.g. training in cybercrime for judges).

The process of drafting the convention benefited from the deep involvement of expert organisations like the United Nations Office on Drugs and Crime (UNODC), the private sector, and civil society. It is widely accepted that strong cooperation among stakeholders is needed to combat cybercrime. 

The current draft introduces certain challenges for the private sector. Takedown demands, as well as placing the responsibility for defining and enforcing rules on freedom of speech on companies, generate controversy and debate within the private sector: Putting companies in an undefined space confronts them with jurisdictional issues and conflicts of law. Inconsistencies in approaches across jurisdictions and broad expectations regarding data disclosure without clear safeguards pose particular challenges; clear limitations on data access obligations are also essential.

What comes next for the negotiations? The new draft of the convention is expected to be published in mid-November, and one final negotiation session is ahead in 2024. After deliberations and approval by the AHC (by consensus or two-thirds voting), the text of the convention will need to be adopted by the UN General Assembly and opened for ratification. For the treaty to be effective, accession by most, if not all, countries is necessary. 

The success or failure of the convention depends on the usefulness of the procedural provisions in the convention text (particularly those relating to investigation, which are currently well developed) and the number of states that ratify the treaty. Importantly, successful implementation is also conditional on the treaty not impeding existing functioning systems, such as the Budapest Convention, which has been ratified by 68 countries worldwide. A further effect of the treaty would be to support UN member states’ efforts against cybercrime through the passage of related national legislation.

Digital evidence for investigating war crimes

A related debate developed around cyber-enabled war crimes, due to the recent decision by the International Criminal Court (ICC) prosecutor to investigate such cases. The Budapest Convention applies to any crime involving digital evidence, including war crimes (in particular Article 14 on war crime investigations, Article 18 on the acquisition of evidence from any service provider, and Article 26 on the sharing of information among law enforcement authorities). 

Of particular relevance is the development of tools and support to capture digital evidence, which could aid in the investigation and prosecution of war crimes. Some tech companies have partnered with the ICC to create a platform that serves as an objective system for creating a digital chain of custody and a tamper-proof record of evidence, which is critical for ensuring neutrality and preserving the integrity of digital evidence. The private sector also plays a role in collecting evidence: There are reports from multiple technology companies providing evidence of malicious cyber activities during conflicts. The Second Additional Protocol to the Budapest Convention offers a legal basis for disclosing domain name registration information and direct cooperation with service providers. At the same time, Article 32 of the Budapest Convention addresses the issue of cross-border access to data, but this access is only available to state parties.
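
How might such a tamper-proof chain of custody work? Below is a minimal, hypothetical sketch in Python of a hash-chained, append-only custody log; the class and field names are ours for illustration and do not reflect the actual ICC platform.

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class CustodyRecord:
    """One custody event: who handled which item, when, and what it hashed to."""
    item_id: str
    handler: str
    action: str        # e.g. 'collected', 'copied', 'transferred'
    content_hash: str  # SHA-256 of the evidence file itself
    timestamp: float
    prev_hash: str     # hash of the previous record, chaining the log together

def record_hash(record: CustodyRecord) -> str:
    payload = json.dumps(asdict(record), sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

class CustodyLog:
    """Append-only log: altering or reordering any past record breaks the chain."""

    def __init__(self) -> None:
        self.records: list[CustodyRecord] = []
        self.hashes: list[str] = []

    def append(self, item_id: str, handler: str, action: str, content: bytes) -> None:
        record = CustodyRecord(
            item_id=item_id,
            handler=handler,
            action=action,
            content_hash=hashlib.sha256(content).hexdigest(),
            timestamp=time.time(),
            prev_hash=self.hashes[-1] if self.hashes else "genesis",
        )
        self.records.append(record)
        self.hashes.append(record_hash(record))

    def verify(self) -> bool:
        """Recompute the whole chain; tampering anywhere makes this fail."""
        prev = "genesis"
        for record, stored in zip(self.records, self.hashes):
            if record.prev_hash != prev or record_hash(record) != stored:
                return False
            prev = stored
        return True
```

In a real system, each record would also be digitally signed and anchored in independent or distributed storage; the sketch only shows why hash-chaining makes after-the-fact alterations detectable.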

Other significant sources of evidence are investigative journalism and open source intelligence (OSINT) – like the Bellingcat organisation – which uncover war crimes and gross human rights violations using new tools, such as the latest high-resolution satellite imagery. OSINT should be considered an integral part of the overall chain of evidence in criminal investigations, yet such sources should be integrated within a comprehensive legal framework. Article 32 of the Budapest Convention, for example, is already a powerful tool for member states to access OSINT from both public and private domains, with consent. Investigative journalism plays a role in combating disinformation and holding those responsible for war crimes accountable.

Yet, the credibility and authenticity of such sources’ evidence can be questioned. Technological advancements, such as AI, have enabled individuals, states, and regimes to easily manipulate electronic data and develop deepfakes and disinformation. When prosecuting cybercrime, it is imperative that evidence be reliable, authentic, complete, and believable. Related data must be preserved, securely guarded, protected, authenticated, verified, and available for review to ensure its admissibility in trials. The cooperation of state authorities could lead to the development of methodologies for verifying digital evidence (e.g. the work of the Global Legal Action Network).


Human rights

Uniting for human rights

‘As the kinetic physical world in which we exist recedes and the digital world in which we increasingly live and work takes up more space in our lives, we must begin thinking about how that digital existence should evolve.’ This quotation, published in a background paper to the session on Internet Human Rights: Mapping the UDHR to cyberspace, succinctly captures one of the central issues of our age. 

The world today is witnessing a concerning trend of increasing division and isolationism among nations. Ironically, global cooperation and governance, the very reasons for IGF 2023, are precisely what we need to promote and safeguard human rights. 

At the heart of yesterday’s main session on Upholding human rights in the digital age was the recognition that human rights should serve as an ethical compass in all aspects of internet governance and the design of digital technologies. But this won’t happen on its own: We need collective commitment to ensure that human rights are at the forefront of the evolving digital landscape, and we need to be deliberate and considerate in shaping the rules and norms that govern it.

The Global Digital Compact framework could promote human rights as an ethical compass by providing a structured and collaborative platform for stakeholders to align their efforts towards upholding human rights in the digital realm. 

The IGF also plays a crucial role in prioritising human rights in the digital age by providing a platform for diverse perspectives, grounding AI governance in human rights, addressing issues of digital inclusion, and actively engaging with challenges like censorship and internet resilience.

Capitalist surveillance

In an era dominated by technological advancements, the presence of surveillance in our daily lives is pervasive, particularly in public spaces. Driven by a need for heightened security measures, governments have increasingly deployed sophisticated technologies, such as facial recognition systems. 

As yesterday’s discussion on private surveillance showed, citizens also contribute to our intricate web of interconnected surveillance networks: Who can blame the neighbours if they want to monitor their street to keep it safe from criminal activity? After all, surveillance technologies are affordable and accessible. And that’s the thing: A parallel development that’s been quietly unfolding is the proliferation of private surveillance tools in public spaces. 

These developments require a critical examination of their impact on privacy and civil liberties, and on issues related to consent, data security, and the potential for misuse. Most of us are aware of these issues, but the involvement of private companies in surveillance introduces a new layer of complexity. 

Unlike government agencies, private companies are often not subject to the same regulations and transparency requirements. This can lead to a lack of oversight and transparency regarding how data is collected, stored, and used. 

Additionally, the potential for profit-driven motives may incentivise companies to push the boundaries of surveillance practices, potentially infringing on individuals’ privacy rights. It’s not like we haven’t seen this before.

A metal post with surveillance cameras aimed in three directions stands against a blue sky marked by clouds.

Ensuring ethical data practices

The exploitation of personal data without consent is ubiquitous. Experts in the session Decolonise digital rights: For a globally inclusive future drew parallels to colonial practices, highlighting how data is used to control and profit. This issue is not only a matter of privacy but also an issue of social justice and rights. 

When it comes to children, privacy is not just about keeping data secure and confidential, but also about questioning the need for collecting and storing their data in the first place. In this light, the best way to check whether a user accessing an online service is underage is to use pseudonymous credentials and pseudonymised data. Given the wave of new legislation requiring more stringent age verification measures, there’s no doubt that we will be discussing this issue much more in the coming weeks and months.
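
To illustrate the idea, here is a minimal, hypothetical sketch of pseudonymous age verification: a trusted verifier checks the user’s age out of band and vouches for a random pseudonym, so the online service never sees an identity or a birthdate. The function names and shared-key design are simplifications of ours; a real deployment would rely on verifier-signed credentials or zero-knowledge proofs.

```python
import hashlib
import hmac
import secrets

# Held only by the (hypothetical) age-verification service.
VERIFIER_KEY = secrets.token_bytes(32)

def issue_credential(over_threshold: bool) -> tuple[str, str] | None:
    """The verifier checks age out of band (it never forwards the identity),
    then vouches for a random, unlinkable pseudonym."""
    if not over_threshold:
        return None
    pseudonym = secrets.token_hex(16)  # random handle, no link to real identity
    tag = hmac.new(VERIFIER_KEY, pseudonym.encode(), hashlib.sha256).hexdigest()
    return pseudonym, tag

def service_accepts(pseudonym: str, tag: str) -> bool:
    """The service learns only 'this pseudonym was vouched for' - no name,
    no birthdate. (Simplified: verifying a signature against the verifier's
    public key would avoid sharing a secret key with the service.)"""
    expected = hmac.new(VERIFIER_KEY, pseudonym.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

# Usage: the verifier issues a credential; the service checks it.
credential = issue_credential(over_threshold=True)
if credential:
    pseudonym, tag = credential
    assert service_accepts(pseudonym, tag)
```

The point of the design is data minimisation: the service can enforce an age rule without ever collecting or storing the child’s personal data.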

Civil society is perhaps best placed to hold companies accountable for their data protection measures and to keep governments in check on their efforts to keep children safe. Yet, we sometimes forget to involve the children themselves in shaping policies related to data governance and their digital lives.

Hence, the suggestion of involving children in activities such as data subject access requests. This can help them comprehend the implications of data processing. It can also empower them to participate in decision-making processes and contribute to ensuring ethical and responsible data practices. After all, the experts argue, many children’s level of awareness and concern about their privacy is comparable to that of adults.


Development

Digital technologies and the environment

The pandemic clearly showed the intricate connection between digital technologies and the environment. Although lower use of gasoline-powered vehicles led to a decrease in CO2 emissions during lockdowns, isolation also triggered a substantial increase in internet use due to remote work and online activities, giving rise to concerns about heightened carbon emissions from increased online and digital activities.

Data doesn’t lie (statisticians do), and data has confirmed the dual impact of digital technologies: While these technologies contribute 1% to 5% of greenhouse gas emissions and consume 5% to 10% of global energy, they also have the potential to cut emissions by 20% by 2030.

To harness the potential benefits of digitalisation and minimise its environmental footprint, we need to raise awareness about what sustainable sources we have available and establish standards for their use.

While progress is being made, there’s a pressing need for consistent international standards that consider environmental factors for digital resources. Initiatives from organisations such as the Institute of Electrical and Electronics Engineers (IEEE) in setting technology standards and promoting ethical practices, particularly in relation to AI and its environmental impact, as well as collaborations between organisations like GIZ, the World Bank, and ITU in developing standards for green data centres, highlight how working together globally is crucial for sustainable practices. 

Currently, over 90% of global enterprises are small or medium-sized, contributing over 50% of the world’s GDP, yet they lack the necessary frameworks to measure their carbon footprint, which is a key step in enabling their participation in the carbon economy in a real and verifiable way. 

Inclusion of people with disabilities

There’s no one-size-fits-all solution when it comes to meeting the needs of people with disabilities (PWD) in the digital space. First of all, the enduring slogan ‘nothing about us without us’ must be respected. Accessibility-by-design standards, like the Web Content Accessibility Guidelines (WCAG) 2, are easily available through the W3C Accessibility Standards Overview. Although accessibility accommodations require tailored approaches to address the specific needs of both groups and individuals, standards offer a solid foundation to start with.

The inclusion of people with disabilities should extend beyond technical accessibility to include the content, functionality, and social aspects of digital platforms. The stigma PWD face in online spaces needs to be addressed by implementing policies that create a safe and inclusive online environment.

Importantly, we must take advantage of the internet governance ecosystem to ensure that

  • We support substantial representation from the disability community in internet governance discussions, beyond discussions on disabilities.
  • We stress the importance of making digital platforms accessible to everyone, no matter their abilities or disabilities, using technology and human empowerment.
  • We provide awareness-raising workshops for those unaware of the physical, mental, and cognitive challenges others might be facing, including those of us who suffer from one disability without understanding what others are facing.
  • We provide skills and training to effectively use available accommodations to overcome our challenges and disabilities.
  • We make available training and educational opportunities for persons with disabilities to be involved in the policymaking processes that involve us, making the internet and digital world better for everyone with the resulting improvements.
  • We support research to continue the valuable scientific improvements made possible by emerging technologies and digital opportunities.
A person in a black suit sits in a wheelchair in front of a computer desk. They are wearing a virtual reality headset and gesturing with their arms and hands.

Sociocultural

The public interest and the internet 

The internet is widely regarded as a public good with a multitude of benefits. Its potential to empower communities by enabling communication, information sharing, and access to valuable resources was appreciated. However, while community-driven innovation coexists with corporate platforms, the digital landscape is primarily dominated by private, for-profit giants like Meta and X. 

This dominance is concerning, particularly because it risks exacerbating pre-existing wealth and knowledge disparities, compromises privacy, and fosters the proliferation of misinformation.

This duality in the internet’s role demonstrates its ability to both facilitate globalisation and centralise control, possibly undermining its democratic essence. The challenge is even greater when considering that efforts to create a public good internet often lack inclusivity, limiting the diversity of voices and perspectives in shaping the internet. Furthermore, digital regulations tend to focus on big tech companies, often overlooking the diverse landscape of internet services. 

To foster a public good internet and democratise access, there is a need to prioritise sustainable models that serve the public interest. This requires a strong emphasis on co-creation and community engagement. This effort will necessitate not only tailoring rules for both big tech and small startup companies but also substantial investments in initiatives that address the digital divide and promote digital literacy, particularly among young men and women in grassroots communities, all while preserving cultural diversity. Additionally, communities should have agency in determining their level of interaction with the internet. This includes enabling certain communities to meaningfully use the internet according to their needs and preferences.

Disinformation and democratic processes 

In the realm of disinformation, we are witnessing new dynamics: an expanded cast of individual and group actors responsible for misleading the public, with the increasing involvement of politics and politicians.

Addressing misinformation in this fast-paced digital era is surely challenging, but not impossible. For instance, Switzerland’s resilient multi-party system was cited to illustrate how it can resist the sway of disinformation in elections. And while solutions can be found to limit the spread of mis- and disinformation online, they need to be put in place with due consideration for issues such as freedom of expression and proportionality. The Digital Services Act (DSA) – adopted in the EU – is taking this approach, although concerns were voiced about its complexity.

A UN Code of Conduct for information integrity on digital platforms could contribute to ensuring a more inclusive and safe digital space, contributing to the overall efforts against harmful online content. However, questions arose about its practical implementation and the potential impacts on freedom of expression and privacy due to the absence of shared definitions.

Recognising the complexity of entirely eradicating disinformation, some argued for a more pragmatic approach, focusing on curbing its dissemination and minimising the harm caused, rather than seeking complete elimination. A multifaceted approach that goes beyond digital platforms and involves fact-checking initiatives and nuanced regulations was recommended. Equally vital are efforts in education and media literacy, alongside the collection of empirical evidence on a global scale, to gain a deeper understanding of the issue.

Tiles with random letters surround five tiles lined up in a row to spell the word ‘FACTS’ on a pink background.

Infrastructure

Fragmented consensus

Yesterday’s discussions on internet fragmentation built on those of the previous days. Delving into diverse perspectives on how to prevent the fragmentation of the internet is inherently valuable. But when there’s an obvious lack of consensus on even the most fundamental principles, it underlines just how critical the debate is.

For instance, should we focus mostly on the technical aspects, or should we also consider content-related fragmentation – and which of these are the most pressing to address? If misguided political decisions pose an immediate threat, should policymakers take a backseat on matters directly impacting the internet’s infrastructure?

Pearls of wisdom shared by experts in both workshops – Scramble for internet: You snooze, you lose and Internet fragmentation: Perspectives & collaboration – offer a promising bridge to close this gap in strategy.

One of these insights emphasised the need to distinguish content limitations from internet fragmentation. Content restrictions, like parental controls or constraints on specific types of content, primarily pertain to the user experience rather than the actual fragmentation of the internet. Labelling content-level limitations as internet fragmentation could be misleading and potentially detrimental. Such a misinterpretation might catalyse a self-fulfilling prophecy of a genuinely fragmented internet.

Another revolved around the role of governments, in some ways overlapping with content concerns. There’s apprehension that politicians might opt to establish alternate namespaces or a second internet root, thereby eroding the internet’s singularity and coherence. If political interests start shaping the internet’s architecture, it could culminate in fragmentation and potentially impede global connectivity. And yet, governments have been (and still are) essential in establishing obligatory rules affecting online behaviour when voluntary measures have proved insufficient.

A third referred to the elusive nature of the concept of sovereignty. Although a state holds the right to establish its own rules, should this extend to something inherently global like the internet? The question of sovereignty in the digital age, especially in the context of internet fragmentation, prompts us to reevaluate our traditional understanding of state authority in a world where boundaries are increasingly blurred.


Economic

Tax rules and economic challenges for the Global South

Over the years, the growth of the digital economy – and the question of how to tax it – has led to major concerns over the adequacy of tax rules. In 2021, over 130 countries came together to support the OECD’s new two-pillar solution. In parallel, the UN Tax Committee revised its UN Model Convention to include a new article on taxing income from digital services.

Despite significant improvements in tax rules, developing countries feel that these measures alone are insufficient to ensure tax justice for the Global South. First, these models are based on the principle that taxes are paid where profits are generated. This principle does not consider the fact that many multinational corporations shift profits to low-tax jurisdictions, depriving countries in the Global South of their fair share of tax revenue. Second, the two frameworks do not address the issue of tax havens directly, which are often located in the Global North. Third, the OECD and UN models do not take into account the power dynamics between countries in the Global North (which has historically been in the lead in international tax policymaking) and the Global South. 

Yesterday’s workshop on Taxing Tech Titans: Policy options for the Global South discussed policy options accessible to developing countries. 

Countries in the Global South have adopted various strategies to tax digital services, including the introduction of digital services taxes (DSTs) that target income from digital services. That’s not to say that they’ve all been effective: Uganda’s experience with taxing digital services, for instance, had unintended negative consequences. In addition, unilateral measures without a global consensus-based solution can lead to trade conflicts.

So what would the experts advise their countries to do? Despite the OECD’s recent efforts to accommodate the interests of developing nations, experts from the Global South remain cautious: ‘Wait and see, and sign up later,’ a concluding remark suggested.

The word ‘tax’ on wooden cubes against a background of dollar banknotes.
Diplo/GIP at the IGF

Reporting from the IGF: AI and human expertise combined

We’ve been hard at work following the IGF and providing just-in-time reports and analyses. This year, we leveraged both human expertise and DiploAI in a hybrid approach that consists of several stages:

  1. Online real-time recording of IGF sessions. Initially, our recording team set up an online recording system that captured all sessions at the IGF. 
  2. Uploading recordings for transcription. Once these virtual sessions were recorded, they were uploaded to our transcribing application, serving as the raw material for our transcription team, which helped the AI application split transcripts by speaker. Identifying which speaker made which contribution is essential for analysing the multitude of perspectives presented at the forum – from government bodies to civil society organisations. This granularity enabled more nuanced interpretation during the analysis phase.
  3. AI-generated IGF reports. With the speaker-specific transcripts in hand (or on-screen), we utilised advanced AI algorithms to generate preliminary reports. These AI-driven reports identified key arguments, topics, and emerging trends in discussions. To provide a multi-dimensional view, we created comprehensive knowledge graphs for each session as well as for individual speakers. These graphical representations intricately mapped the connections between speakers’ arguments and the corresponding topics, serving as an invaluable tool for analysis (see the knowledge graph from Day 1 at IGF2023).
Line drawing of an intricate web of fine, coloured lines and nexuses.
  4. Writing dailies. To conclude the reporting process, our team of analysts used the AI-generated reports to craft comprehensive daily analyses. (For the technically curious, a sketch of the pipeline follows below.)
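
The stages above can be summarised in a short sketch. The function names and interfaces below are illustrative placeholders of ours, not DiploAI’s actual code.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    speaker: str  # e.g. a government delegate or a civil society representative
    text: str

def transcribe(recording_path: str) -> str:
    """Stage 2: speech-to-text over a recorded session (placeholder)."""
    ...

def split_by_speaker(transcript: str) -> list[Segment]:
    """Stage 2: attribute each contribution to a speaker, so arguments can be
    traced back to stakeholder groups (placeholder)."""
    ...

# Toy topic extraction by keyword matching; a real system would use NLP or LLMs.
TOPICS = ("AI", "cybersecurity", "human rights", "infrastructure")

def extract_topics(text: str) -> list[str]:
    return [t for t in TOPICS if t.lower() in text.lower()]

def build_knowledge_graph(segments: list[Segment]) -> dict[str, set[str]]:
    """Stage 3: map each speaker to the topics their arguments touch,
    i.e. the edges of a per-session (or per-speaker) knowledge graph."""
    graph: dict[str, set[str]] = {}
    for seg in segments:
        graph.setdefault(seg.speaker, set()).update(extract_topics(seg.text))
    return graph

def generate_report(segments: list[Segment]) -> str:
    """Stage 3: AI-generated draft of key arguments, topics, and trends
    (placeholder)."""
    ...

def daily_draft(recording_path: str) -> str:
    """Stages 1-3 end to end; in stage 4, human analysts edit the draft
    into the daily you are reading."""
    segments = split_by_speaker(transcribe(recording_path))
    return generate_report(segments)
```

The key design choice is speaker-level granularity: because transcripts are split by speaker before analysis, both the reports and the knowledge graphs can distinguish who argued what, rather than treating a session as one undifferentiated text.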

You can see the results of that approach on our dedicated page.

One part of Diplo’s Belgrade team at work. Does that clock say 2:30 a.m.? Yes, it does.
A photo collage of tourist sights around Tokyo frames a photo of the Diplo team eating at a restaurant, with the comment: ‘Diplo Crew at IGF2023’.
A part of our team attended the IGF in situ and participated in sessions as organisers, moderators and speakers. Here they are, on their last evening in Kyoto (above).