Global Internet Governance Academic Network Annual Symposium | Part 1 | IGF 2023 Day 0 Event #112

8 Oct 2023 03:00h - 05:30h UTC

Event report

Speakers:
  • Jamal Shahin, Giga-Net, Academic, WEOG
  • Raquel Gatto, NIC.br, Technical Community, GRULAC
  • Roxana Radu, Blavatnik School of Government Oxford University, Academic, WEOG
  • Berna Akcali Gur, Queen Mary University of London, Academic, APG
  • Yik Chan Chin, Beijing Normal University, Academic, APG
  • Vagisha Srivastava
  • Kimberley Anastasio
  • Danielle Flonk
  • Nanette Levinson
  • Jamie Stewart
  • Kazim Rizvi
  • Kamesh Shekar
Moderators:
  • Jamal Shahin, Giga-Net
  • Roxana Radu, Online Moderator

Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.

Session report

Yik Chan Chin

In discussions concerning China’s data policy and the right to data access, linked to Sustainable Development Goal 9 (Industry, Innovation, and Infrastructure) and Sustainable Development Goal 16 (Peace, Justice and Strong Institutions), China’s distinctive interpretation of data access became a focal point. According to the analysis, academic debate and national policy in China are driven primarily by an approach that treats data as a form of property. This perspective divides data-related rights into three fundamental components: access, processing, and exchange rights, and holds that these rights can be traded to generate value, as explicitly stated in the government’s policy documents.

However, this policy approach has drawn substantial critique for disregarding other significant aspects of data access. Chinese policies predominantly fail to recognise data’s character as a public good; both academia and government policy scarcely acknowledge it, undervaluing data’s potential contribution to societal advancement beyond commercial gains. Along the same lines, the rights and benefits of individual citizens are often overlooked in favour of enterprise-related interests.

The country’s data access policy is primarily designed to unlock commercial value, especially within enterprise data. Because the resulting power dynamics between individual users and corporations remain largely unaddressed in China’s data-related discussions and policy settings, the imbalance risks being detrimental to individuals.

Given these observations, the overall sentiment towards the Chinese data policy appears broadly negative. Acknowledging data’s nature as a public good and giving due weight to individual rights and power balances would be fundamental to a more favourable policy formulation and discourse. Including these elements would help ensure that data policy reflects the principles of SDG 9 and SDG 16, balancing enterprise development with individual rights.

Vagisha Srivastava

Web Public Key Infrastructure (WebPKI) is an integral component of internet security, underpinning digital signing, signature verification, and encryption, and above all the authentication of websites to web clients. Its significance is underlined by the incident involving the certificate authority DigiNotar, whose misissuance of more than 500 certificates compromised internet security and showed how much depends on trustworthy digital certificates.
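
To make the client side of this concrete, the following sketch (an illustration added here, not part of the presented paper) shows how a web client relies on WebPKI during a TLS handshake using Python’s standard ssl module; the hostname is a placeholder, and the connection fails if the server’s certificate does not chain to a trusted root.

```python
# Minimal sketch (illustration only): how a web client relies on WebPKI
# during a TLS handshake. The hostname is a placeholder; any HTTPS site works.
import socket
import ssl

hostname = "example.org"

# A default client context loads the system's trusted root store and enables
# certificate verification plus hostname checking.
context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as sock:
    # wrap_socket performs the TLS handshake and raises
    # ssl.SSLCertVerificationError if the server's certificate does not chain
    # to a trusted root -- the kind of check a misissued certificate abuses.
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("Issued to :", dict(pair[0] for pair in cert["subject"]))
        print("Issued by :", dict(pair[0] for pair in cert["issuer"]))
        print("Expires   :", cert["notAfter"])
```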

WebPKI governance intriguingly falls within the public goods paradigm. While governments traditionally deliver public goods and the commercial market handles private goods, in the case of WebPKI private entities play a prominent role, defying the conventional dynamics of producing both public and private goods. That said, government involvement is not entirely absent: the US Federal PKI and several Asian national Certification Authorities (CAs) actively participate.

The claim that private entities are spearheading WebPKI security governance raises certain concerns. Governments may find themselves hamstrung when attempting to represent the global public interest or generate global public goods in this complex context. As a result, platforms directly affected by an insecure web environment, such as browsers and operating systems, assume vital roles in security governance.

The Certificate Authority and Browser Forum (CA/Browser Forum), established in 2005, is crucial in coordinating WebPKI-related policies. The forum serves as a hub where root store operators coordinate policies and gather feedback directly from CAs. Since its inception, its influence has been such that it sets baseline requirements for CAs on issues like identity vetting and certificate content.

Regarding the internal functioning of such organisations, consensus is largely built before any vote takes place: the formal language put to a ballot has already been agreed upon, and the consensus mechanism is established in advance of voting. Notably, there is curiosity about how browsers, an integral part of internet infrastructure, respond to such voting processes.
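
As a purely illustrative sketch of how such a pre-negotiated ballot might be tallied, the code below assumes the commonly cited CA/Browser Forum rule that a ballot needs at least two thirds of voting CAs and more than half of voting browser members in favour; both the rule paraphrase and the vote counts are assumptions for illustration, not a statement of the forum’s bylaws.

```python
# Illustrative sketch only: a simplified tally of a CA/Browser Forum-style ballot.
# Assumes the commonly cited rule of >= 2/3 of voting CAs and > 1/2 of voting
# browser members ("certificate consumers") in favour; vote counts are invented.

def ballot_passes(ca_yes: int, ca_no: int, browser_yes: int, browser_no: int) -> bool:
    ca_votes = ca_yes + ca_no
    browser_votes = browser_yes + browser_no
    if ca_votes == 0 or browser_votes == 0:
        return False  # quorum handling is more involved in the real bylaws
    ca_ok = ca_yes / ca_votes >= 2 / 3
    browser_ok = browser_yes / browser_votes > 1 / 2
    return ca_ok and browser_ok

# Hypothetical example: 20 CAs for, 8 against; 4 browsers for, 1 against.
print(ballot_passes(ca_yes=20, ca_no=8, browser_yes=4, browser_no=1))  # True
```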

To conclude, the internet security and governance system operates within a complex realm driven by both private and public actors. Entities like the WebPKI and the Certificate Authority and Browser Forum play pivotal roles, and the power dynamics and responsibilities between these players shape the continued evolution of internet security policy.

Kamesh Shekar

The in-depth analysis underscores the urgent need for a comprehensive, 360-degree approach to the artificial intelligence (AI) lifecycle. This involves a principle-based ecosystem approach, ensuring nothing in the process is overlooked and emphasising coverage that is as unbiased and complete as possible. Engaging the various stakeholders at each stage of the AI lifecycle, from inception and development through to end-user application, is seen as pivotal to driving and maintaining the integrity of AI innovation.

The principles underpinning this ecosystem approach are derived from a range of globally respected frameworks, including guidelines from the Organisation for Economic Co-operation and Development (OECD), the United Nations (UN), the European Union (EU), and notably India’s G20 declaration. Building on these well-established and widely accepted frameworks strengthens the case for a thorough mapping of principles to the varied stakeholders in the AI arena.

The analysis also delves into the friction that can arise around the interpretation and application of these principles. Distinct differences are highlighted, for instance, around facets such as the ‘human in the loop’, illustrating the different approaches stakeholders adopt at various lifecycle stages. This underscores the importance of operationalising the principles at every step of the AI lifecycle, which requires a concrete approach to implementation, as sketched below.
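
One way to picture such operationalisation is a simple mapping from lifecycle stages to the principles and stakeholders relevant at each stage. The sketch below is an assumption-laden illustration: the stage names loosely follow the NIST/OECD framing mentioned later, and the principles and stakeholder labels are examples, not the paper’s actual taxonomy.

```python
# Illustrative sketch: mapping AI lifecycle stages to principles and stakeholders.
# Stage names, principles, and stakeholder labels are examples only and do not
# reproduce the taxonomy used in the presented paper.
from dataclasses import dataclass, field

@dataclass
class LifecycleStage:
    name: str
    principles: list[str] = field(default_factory=list)
    stakeholders: list[str] = field(default_factory=list)

AI_LIFECYCLE = [
    LifecycleStage("data collection & processing",
                   ["privacy", "data quality", "consent"],
                   ["developers", "data providers"]),
    LifecycleStage("model design & development",
                   ["fairness", "transparency", "human oversight"],
                   ["developers", "auditors"]),
    LifecycleStage("deployment",
                   ["accountability", "safety", "redress mechanisms"],
                   ["deployers", "regulators"]),
    LifecycleStage("use & monitoring",
                   ["responsible use", "feedback and reporting"],
                   ["end-users", "impacted populations", "deployers"]),
]

def principles_for(stakeholder: str) -> dict[str, list[str]]:
    """Return, per lifecycle stage, the principles a given stakeholder should apply."""
    return {s.name: s.principles for s in AI_LIFECYCLE if stakeholder in s.stakeholders}

print(principles_for("end-users"))
```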

A key observation in the analysis is the central role government plays in overseeing the implementation of the proposed framework. Whether examining domestic scenarios or international contexts, the study heavily emphasises the power and influence legislative bodies hold in putting the suggested framework into practice. This extends to recommending an international cooperation approach and recognising the potentially pivotal role India could play within the Global Partnership on Artificial Intelligence (GPAI).

Responsibility for using these systems well does not rest solely with the developers of AI technologies. End-users and impacted populations are also encouraged to act as responsible users, a sentiment heavily emphasised in the paper. In this vein, the principles and their operationalisation for responsible use are elucidated, urging a thoughtful and ethical application of AI technologies.

Another essential observation is that the lifecycle referred to in the analysis has been derived from and informed by both the National Institute of Standards and Technology (NIST) and the OECD, with a handful of additional aspects added and validated within the paper. This perspective incorporates substantial work already done in the domain whilst adding fresh insights and nuances.

As a concluding thought, the analysis recognises the depth and breadth of the topics covered and calls for further in-depth discussion, signalling an open stance towards continuous dialogue and the potential for further exploration and debate, possibly in more detailed offline conversations. As such, this comprehensive analysis offers a wealth of insights and provides excellent food for thought for any stakeholder in the AI ecosystem.

Kazim Rizvi

The Dialogue, a reputed tech policy think-tank, has authored a comprehensive paper on the subject of responsible Artificial Intelligence (AI) in India. The researchers vehemently advocate for the need to integrate specific principles beyond the deployment stages, encompassing all facets of AI. These principles, they assert, should be embedded within the design and development processes, especially during the data collection and processing stages. Furthermore, they argue for the inclusion of these principles in both the deployment and usage stages of AI by all stakeholders and consumers.

In their study, the researchers acknowledge both the benefits and challenges brought about by AI. Notably, they commend the myriad ways AI has enhanced daily life and professional tasks. Simultaneously, they draw attention to the intrinsic issues linked with AI, specifically around data collection, data authenticity, and potential risks tied to the design and usage of AI technology.

They dispute the notion of stringent regulation of AI at the outset. Instead, the researchers propose a joint effort in which civil society, industry, and academia work together to understand the nuances of deploying AI responsibly. This approach would lead to the identification of challenges and the creation of potential solutions appropriate for an array of bodies, including governments, scholars, development organisations, multilateral organisations, and tech companies.

The researchers acknowledge the potential risks that accompany the constant evolution of AI. While they recall that AI has been in existence for several decades, the study emphasises that emerging technologies always have accompanying risks. As the usage of AI expands, the researchers recommend a cautious, steady monitoring of potential harms.

The researchers also advise a global outlook for understanding AI regulation. They posit that a general sense of regulation already exists internationally. What’s more, they suggest that as AI continues to grow and evolve, its regulatory framework must do the same.

In conclusion, the research advocates for a multi-pronged approach that recognises both the assets and potential dangers of AI, whilst promoting ongoing research and the development of regulations as AI technology progresses. The researchers present a balanced and forward-thinking strategy that could create a framework for AI that is responsible, safe, and of maximum benefit to all users.

Nanette Levinson

The analysis highlights the growing uncertainty and the institutional changes expected to take centre stage within the sphere of cyber governance, based on several significant indicators of institutional change. These indicators include the noticeable absence of a concrete analogy or inconsistent isomorphic poles, a shift in the legitimacy attributed to an idea, and the emergence of fresh organisational arrangements, all of which signify the dynamic structures and attitudes within the sector.

In a pioneering cross-disciplinary approach, the analysis links these indicators of institutional change to an environment of heightened uncertainty and turbulence, as evidenced by a longitudinal study of the Open-Ended Working Group.

An unprecedented shift within the United Nations’ cybersecurity narrative was also discerned. An ‘idea galaxy’ encapsulating concepts such as human rights, gender, sustainable development, non-state actors, and capacity building was prevalent in the discourse from 2019 through to 2021. However, an oppositional idea galaxy unveiled by Russia, China, Belarus, and a handful of other nations during the Open-Ended Working Group’s final substantive session in 2022, highlighted their commitment towards novel cybersecurity norms. The emergence of these opposing ideals gave rise to duelling ‘idea galaxies’, signalling a divergence in shared ideologies.

This conflict between the two ‘idea galaxies’ was managed within the Open-Ended Working Group via ‘footnote diplomacy.’ Herein, the Chair acknowledged both clusters in separate footnotes, paving the way for future exploration and dialogue, whilst adequately managing the current conflict.

Of significant note is how these shifts, underpinned by tumultuous events like the war in Ukraine, are catalysing potential institutional changes in cyber governance. These challenging times, underscored by clashing ideologies and external conflict, seem to herald the potential cessation of long-standing trajectories of internet governance involving non-state actors.

In conclusion, there is growing uncertainty surrounding the future of multi-stakeholder internet governance due to the ongoing conflict within these duelling idea galaxies. The intricate and comprehensive analysis paints a picture of the interconnectivity between global events, institutional changes, and evolving ideologies in shaping the future course of cyber governance. These indicate a potential turning point in the journey of cyber governance.

Audience

This discussion scrutinised the purpose and necessity of government-led mega constellations in satellite communication. The principal argument displayed scepticism towards governments’ reasoning for setting up these constellations, with a primary focus on their significant role in internet fragmentation. Intriguingly, some governments have proposed limiting the distribution of signals from non-domestic satellites within their territories. The motives behind this proposal were questioned, specifically why a nation would require its own mega constellation if its field of interest and service were confined to its own territory.

Furthermore, the discourse touched on the subject of ethical implications within the domain of artificial intelligence (AI). It highlighted an often-overlooked aspect in the responsible use of AI—the end users. While developers and deployers frequently dominate this dialogue, the subtle yet pivotal role of end-users was underplayed. This is especially significant considering that generative AI is often steered by these very end-users.

Another facet of the AI argument was the lack of clarity and precision in articulating arguments. Participants underscored the use of ambiguous terminologies like ‘real-life harms’, ‘real-life decisions’, and ‘AI solutions’. The criticism delved into the intricacies of the AI lifecycle model, emphasising an unclear derivation and an inconsistent focus on AI deployers rather than a comprehensive approach including end-users. The model was deemed deficient in its considerations of the impacts on end-users in situations such as exclusion and false predictions.

However, the discussion was not solely encompassed by scepticism. An audience member provided a positive outlook, suggesting stringent regulations on emerging technologies like AI might stifle innovation and progress. Offering a historical analogy, they equated such regulations to those imposed on the printing press in 1452.

Throughout the discourse, themes consistently aligned with Sustainable Development Goal 9, thus underscoring the significance of industry, innovation, and infrastructure in our societies. This dialogue serves as a reflective examination, not just of these topics, but also of how they intertwine and impact one another. It accentuates the importance of addressing novel challenges and ethical considerations engendered by technological advances in satellite communication and AI.

Jamie Stewart

The rapid advancement of digital technologies and internet connectivity in Southeast Asia is driving the development of assorted regulatory instruments within the region, accompanied by extensive investment in surveillance capacities. This rapid expansion, however, is provoking ever-growing concerns over potential misuse against human rights defenders, giving the discussion a negative sentiment.

A report from the Office of the United Nations High Commissioner for Human Rights (OHCHR) on cybersecurity in Southeast Asia draws attention to the potential use of these laws against human rights defenders. Concerns are heightening around the wider consensus on combating cybercrime, and the General Assembly has expressed particular apprehension about misuse, especially of provisions relating to surveillance, search, and seizure.

What emerges starkly from the research is a disproportionate impact of cyber threats and online harassment on women. The power dynamics in cyberspace perpetuate those offline, leading to a targeted attack on female human rights defenders. This gender imbalance along with the augmented threat to cybersecurity raises concerns, aligning with Sustainable Development Goals (SDG) 5 (Gender Equality) and SDG 16 (Peace, Justice, and Strong Institutions).

The promotion of human-centric cybersecurity with a gendered perspective charts a more positive course. The aim is for people and human rights to be the core elements of cybersecurity. Recognition is thus given to the need for a gendered analysis, with the research bolstered by collaboration with the UN Women Regional Data Centre in the Asia Pacific.

An in-depth exploration of this matter further uncovers a widespread range of threats, both on a personal and organisational level. This elucidates the sentiment that a human-centric approach to cybersecurity is indispensable. Both state and non-state actors are found to be contributing to these threats, often in a coordinated manner, with surveillance software-related incidents being particularly traceable.

Additionally, the misuse of regulations and laws against human rights defenders and journalists is an escalating worry, prompting agreement that such misuse is indeed occurring. This concern is extended to anti-terrorism and cybercrime laws, which could potentially be manipulated against those speaking out, potentially curbing freedom of speech.

On the issue of cybersecurity policies, while their existence is acknowledged, concerns about their application are raised. Questions emerge as to whether these policies are being used in a manner protective of human rights, indicating a substantial negative sentiment towards the current state of cybersecurity. In conclusion, although the progression of digital technologies has brought widespread benefits, they also demand a rigorous protection of human rights within the digital sphere, with a marked emphasis on challenging gender inequalities.

Moderator

The GigaNet Academic Symposium, held at the Internet Governance Forum (IGF) since 2006, takes on a multitude of complex topics. This latest iteration featured four insightful presentations tackling diverse subjects ranging from digital rights and trust in the internet to challenges caused by internet fragmentation and environmental impacts. The discourse centred predominantly on Sustainable Development Goals (SDGs) 4 (Quality Education) and 9 (Industry, Innovation, and Infrastructure).

In maintaining high academic standards, the Symposium employs a stringent selection process for the numerous abstracts submitted. This cycle saw roughly 59 to 60 submissions, of which only a limited few were selected. While this guarantees quality control, it simultaneously restrains the number of presentations and hampers diversity.

Key to this Symposium was the debate on China’s access to data, specifically, the transformative influence the internet and social media platforms have exerted on the data economy. This has subsequently precipitated governance challenges primarily revolving around the role digital social media platforms play in managing data access and distribution. The proposed model for public data in China involves conditional fee access, with data analyses disseminated instead of the original datasets.

One recurring theme in these discussions related to the state-led debate in China that posits data as marketable property rights. Stemming from government policies and the broader economic development agenda, this perspective on data has dramatically influenced Chinese academia. However, this focus has led to a significant imbalance in the data rights dialogue, with the rights of data enterprises frequently superseding those of individuals.

Environmental facets of ICT standards also commanded attention, underscoring the political and environmental dimensions encompassed within these standards. Moreover, the complexity of measuring the environmental impact of ICTs, from carbon footprint and energy consumption through to disposal, confirms the necessity of addressing the materiality of ICTs. The discussion further emphasised that governance questions relating to certificate authorities are crucial to understanding the security and sustainability of low-Earth-orbit satellites, given the emerging conflicts and connections between these areas.

Concluding the Symposium was an appreciative acknowledgement of the participants’ contributions, from submitting and reviewing abstracts to adjusting sleep schedules to participate. Transitioning to a second panel without a break, the Symposium shifted its focus towards cyber threats against women, responsible AI, and broader global internet governance. Suggestions for improvements in future sessions included clarifying and defining theoretical concepts more comprehensively, focusing empirical scopes more effectively, and emphasising the significance of consumers and end-users in cybersecurity and AI discourse. The Symposium, thus, offered a well-rounded exploration of multifaceted topics contributing to a deeper understanding of internet governance.

Berna Akcali Gur

Mega-satellite constellations are reshaping global power structures, signalling significant strategic transitions. Many powerful nations regard these endeavours, such as the proposed launch of 42,000 satellites by Starlink, 13,000 by Guowang, and 648 by OneWeb, as opportunities to solidify their space presence and exert additional control over essential global internet infrastructure. These are deemed high-stakes strategic investments, marking a new frontier in the satellite industry.

Furthermore, the rise of these mega constellations is met with substantial enthusiasm because of their potential to bridge gaps in the global digital divide. The constellations offer broadband connectivity vital for social, economic, and governmental functions, and their low latency and high bandwidth can benefit applications such as the IoT, video conferencing, and online gaming.
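
The low-latency claim follows from simple propagation arithmetic: signal travel time is roughly distance divided by the speed of light, and a low-Earth-orbit satellite at around 550 km (a typical Starlink altitude) sits far closer than a geostationary satellite at about 35,786 km. The sketch below, added here as an illustration, computes the idealised minimum round-trip delay for both cases; real-world latency is higher once slant paths, routing, and processing are included.

```python
# Back-of-the-envelope propagation arithmetic: why LEO latency beats GEO latency.
# Idealised vertical paths only; real latency adds slant range, routing, processing.
C_KM_PER_S = 299_792.458  # speed of light

def min_round_trip_ms(altitude_km: float) -> float:
    # user -> satellite -> gateway and back: four traversals of the altitude.
    return 4 * altitude_km / C_KM_PER_S * 1000

print(f"LEO (~550 km, e.g. Starlink): {min_round_trip_ms(550):.1f} ms minimum")
print(f"GEO (35,786 km):              {min_round_trip_ms(35_786):.0f} ms minimum")
# Roughly 7 ms versus ~477 ms: the physics behind LEO's 'low latency' advantage.
```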

However, concerns have been raised over the sustainable use of increasingly congested orbital space. Resources in space are finite, and present traffic levels could trigger threats such as cascading collisions, a scenario that could render orbits unusable and deprive future generations of the opportunity to use this vital space.

The European Union’s stance on space policy, particularly on the necessity of owning a mega constellation, shows some contradictions. While an EU document maintains that owning a mega constellation is not essential for access, such ownership is nonetheless considered crucial from strategic and security perspectives, revealing a potentially contradictory standpoint within the Union.

Another issue is fragmentation in policy implementation due to diverging government positions, as demonstrated by the decoupling of 5G infrastructure, where groups of nations have decided against using each other’s technology over cybersecurity concerns. With the rise of the concept of cyber sovereignty, governments increasingly regard mega constellations as sovereign infrastructure vital to their cybersecurity.

Lastly, data governance is a significant concern for countries intending to utilise mega constellations. These countries may require that constellations maintain ground stations within their territories, thereby exercising control over cross-border data transfers, a key aspect in the digital era.

In conclusion, the growth of mega-satellite constellations presents a complex issue, encompassing facets of international politics, digital equity, environmental sustainability, policy diversification, cyber sovereignty, and data governance. As countries continue to navigate these evolving landscapes, conscious regulation and implementation strategies will be integral in harnessing the potentials of this technology.

Kimberley Anastasio

The intersection between Information and Communication Technologies (ICTs) and the environment is a pivotal issue that major global institutions have brought into focus. The Internet Governance Forum highlighted this interconnectedness for the first time by setting the environment as a main thematic track in 2020, evidencing growing international acknowledgement of the links between the two areas. This focus aligns with two key Sustainable Development Goals (SDGs): SDG 9, Industry, Innovation and Infrastructure; and SDG 13, Climate Action, signifying a global endeavour to foster innovative solutions whilst advocating sustainable practices.

In pursuit of a more sustainable digital arena, organisations worldwide are directing efforts towards developing ‘greener’ internet protocols. Within this landscape, the deep-rooted role of technology in the communication field has driven an elevated demand for advanced and sustainable communication systems. This paints a picture of a powerful transition towards creating harmony between digital innovation and environmental stewardship.

Within ICTs, standardisation is another topic with international resonance. This critical process promotes uniformity across the sector, regulates behaviours, and ensures interoperability, benefits that together contribute to a more sustainable economic ecosystem. The International Telecommunication Union, a renowned authority within the industry, has upheld these eco-friendly values with over 140 standards pertaining to environmental protection. Concurrently, ongoing environmental debates within the Internet Engineering Task Force suggest a broader trend towards heightened environmental consciousness in the ICT sector.

The materiality and quantification of ICTs are identified as crucial facets of environmental sustainability. Measuring the environmental impact of ICTs, although challenging, is highlighted as vital, underlining the physical presence of ICTs within the environment and their consequent impact. This focus aligns with the targets of the aforementioned SDGs 9 and 13, further emphasising the significance of ICTs within the global sustainability equation.

In parallel with these developments, a dedicated research project is being carried out on standardisation from an ICT perspective, involving a comprehensive content analysis of almost 200 standards from the International Telecommunication Union and the Internet Engineering Task Force. This methodology helps position the study within the wider spectrum of standardisation studies, moving beyond the confines of ICT-specific research and implying broader applications for standardisation.
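
As a rough illustration of what such a content analysis could involve (the project’s actual coding scheme is not described in this report), the sketch below counts occurrences of a few environment-related keywords across a folder of standards documents; the directory name, file format, and keyword list are placeholders.

```python
# Rough illustration of keyword-based content analysis over standards documents.
# The directory, file format, and keyword list are placeholders, not the
# project's actual coding scheme.
from collections import Counter
from pathlib import Path
import re

KEYWORDS = ["energy", "carbon", "e-waste", "recycling", "sustainability"]

def keyword_counts(doc_dir: str) -> dict[str, Counter]:
    """Count environment-related keywords in each plain-text standard under doc_dir."""
    results: dict[str, Counter] = {}
    for path in Path(doc_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        counts = Counter({kw: len(re.findall(re.escape(kw), text)) for kw in KEYWORDS})
        results[path.name] = counts
    return results

if __name__ == "__main__":
    for doc, counts in keyword_counts("standards_corpus").items():
        print(doc, dict(counts))
```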

Alongside this larger project, a smaller but related initiative is underway. Its objective is to understand the workings of these organisations within the extensive potential of the ICT standardisation sector. The ultimate goal is to develop a focused action framework derived from existing literature and real-world experiences, underlining an active approach to problem solving.

Collectively, these discussions and initiatives portray a comprehensive and positive path globally to achieve harmony between ICT and sustainability. Whilst there are inherent challenges to overcome in this journey, the combination of focused research, standardisation, and collaborative effort provides a potent recipe for success in the pursuit of sustainable innovation.
