Global Internet Governance Academic Network Annual Symposium | Part 2 | IGF 2023 Day 0 Event #112

8 Oct 2023 04:30h - 06:30h UTC


Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.

Session report

Robert Gorwa

The European Union (EU) is steadily establishing itself as the foremost regulator of the technology industry. This substantial shift is demonstrated in several regulations the EU has pursued, including the Digital Services Act and the AI Act. The EU’s regulatory strategy can be seen as an expanding toolkit with the potential to serve specific strategic objectives in the future. Notably, certain measures, such as mandated rapid takedown times, are now being integrated into the EU’s approach to content regulation.

Nonetheless, it is crucial to acknowledge considerable divergence within the EU’s overall digital policy. This divergence stems from contrasting interests within the European Commission itself, where distinct actors pursue differing objectives. It can be witnessed in discrepancies between the recently initiated European Media Freedom Act and the Commission’s publicly declared objectives to combat disinformation. Further, views on optimal institutional arrangements vary substantially across the Commission’s departments, contributing further to this divergence.

A renewed interest in European digital constitutionalism and the rise of digital capitalism have provided fresh theoretical perspectives for understanding the shifts in the EU’s digital policy. Viewing digital capitalism as a battleground between firms and political actors, alongside internal EU political conflicts, adds significant analytical value. Moreover, industrial policy perspectives shed light on geopolitical strategies tied to the reshoring of supply chains and digital sovereignty projects.

Despite these insightful perspectives, Robert Gorwa’s project, currently in its data collection phase, emphasises the importance of a deeper understanding of the European Commission’s actions. It therefore places greater weight on tangible empirical evidence, including internal communications and information obtained via freedom of information requests.

Distinctively, data regulation stands to gain from public procurement, particularly within municipal governance. The Barcelona model, which requires companies to share data with the local community, serves as a prime example. This model reinforces the concept of a localised social contract, with mutual data sharing at its core.

Furthermore, tech policy sees a variety of actors contesting the same jurisdictional space, culminating in regular clashes between market-driven, rights-driven, and security-driven visions. The central actors in these confrontations are the US, the EU, and China, each propounding its own vision. An explicit example is the influence of US actors on the controversial EU Child Sexual Abuse Material (CSAM) regulation, often discussed in relation to the ‘Brussels effect’. Whilst these measures are aimed at child protection, they have sparked debates over potential infringements on user rights, privacy, and end-to-end encryption.

Amidst this complexity, it is encouraging to observe the considerable changes that have transpired within a short period, most notably in the sphere of content regulation and transparency. These advances, coupled with the other developments outlined, collectively depict the evolving impact of the EU’s regulatory strides within the transnational tech sector.

Sophie Hoogenboom

Sophie Hoogenboom delves into the concept of a global social contract for the digital sphere, specifically the vast and ever-growing realm of data. A social contract is frequently mooted as a potential intervention to streamline and optimise social cooperation in the global digital context. Drawing on Alexander Fink’s theory, she maintains that the likelihood of a social contract arising increases in communities with shared preferences, common social norms, and smaller sizes.

However, Hoogenboom critiques the ambitious notion of a sweeping global social contract on data, attributing its potential challenges to the culturally diverse preferences and social norms of global communities. She posits that, given these varied contexts, notions of privacy and the definition of the ‘common good’ could vary dramatically between societies, and that the sheer size of the global community could inflate the costs associated with decision-making and monitoring.

Nevertheless, she proposes that a more manageable and immediate first step could be the creation of a social contract at community level, focused on using community data for societal betterment. Such a contract could help fulfil human rights and propel progress towards the objectives set out in the Sustainable Development Goals. Community data holds considerable potential in this respect, particularly in health and related sectors.

The current analogue social contract, she believes, leaves the potential of data untapped, thus stimulating debates about data decentralisation and its proposition as a global public good. She refrains from taking a hard stance on whether community data should be kept decentralised or placed under a global social contract, suggesting she is still formulating her view on this complex issue.

Hoogenboom advocates for a composite approach in data governance wherein community networks work simultaneously with global networks. She recognises the digital divide in certain parts of the world and underlines the need for inclusivity in the data governance framework.

Overall, she emphasises the importance of contextual understanding in these discussions, asserting that different communities might lay emphasis on distinct aspects of development, subject to their unique needs and challenges. This nuanced approach makes Hoogenboom’s analysis a significant contribution to the ongoing discourse about the need, purpose and potential form of a global social contract for data.

Audience

The analysis suggests a prevalent focus within academic literature on national Artificial Intelligence (AI) strategies, overshadowing the much-needed investigation of regional and global AI strategies. This skew raises concerns regarding the comprehensive understanding and development of AI strategies worldwide. Moreover, it has been noted that the conscious decision to use the acronym ‘AI’, rather than consistently referring to ‘Artificial Intelligence’, could inadvertently limit the scope of future studies, confining the research to the era when AI became a recognised and frequently utilised concept.

In the realm of governance, the concept of the social contract has come under significant scrutiny, specifically in association with AI. Questions about the necessity of a social contract have been raised, and there are suggestions of a possible departure from traditional state-centric constitutions within the AI sphere. This shift could potentially pave the way for the development of a novel, decentralised social contract within AI governance, thus reflecting the differing nature of governance in this innovative field.

Attention has been directed towards the consequences of the Digital Services Act (DSA) on countries in the Global South, such as Costa Rica and Chile, where it has influenced local legislative creation. However, the European Commission, instrumental in these processes, seemingly harbours internal political issues that require unravelling for a comprehensive understanding of its operations.

There are potential contradictions within the legal framework, specifically between the Disinformation Code of Conduct and the Media Freedom Act. The necessity for clarification in this regard is evident, as ambiguities can hamper the efficacy and effectiveness of these legislative instruments.

Concerns have emerged about AI regulation and hate speech management due to observable inconsistencies in the pertinent legislation. Consequently, there is an urgent call for more diligent digital leadership that guarantees consistency across AI regulations, and the importance of accurate and conscientious drafting of legislation is underscored.

On a more positive note, community networks have been recognised as potential facilitators of data governance. The analysis proposes these networks as instrumental infrastructures that could effectively embed data governance regimes, thereby fostering broader partnerships for sustainable development. Looking forward, this suggests a novel approach to managing data, leveraging the inherent strengths of community networks.

Radomir Bolgov

The analysis presents the domain of artificial intelligence (AI) policy as still in its fledgling stages. AI policy is a particularly nuanced field with numerous dimensions of complexity, which necessitate comprehensive definition and documentation to ensure clarity of policy objectives. A key part of the study comprised a descriptive analysis and the subsequent development of a framework via bibliometric analysis. This approach was used to map the principal topics within the field, an exercise that underscored its multidimensional character.

Scientific output pertaining to AI policies, although demonstrating an uptick over the years, has shown signs of stagnation in recent times: the past three years have witnessed a plateau in annual scientific production. This underscores a compelling need for increased research on the effects and implications of AI policies. The analysis unveiled a stark scarcity of studies focusing on either the positive or the negative outcomes of AI policies, and themes such as AI policy evaluation remain significantly underexplored, revealing a gap in research that needs addressing.
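To illustrate the kind of descriptive bibliometric workflow described above, the sketch below counts annual output among keyword-matched publication records and tallies the most frequent title terms. The records, keyword list, and stopwords here are entirely hypothetical; this is a minimal sketch of the general technique, not the authors’ actual pipeline or dataset.

```python
# Minimal, illustrative bibliometric sketch (hypothetical data, not the study's dataset).
from collections import Counter

# Hypothetical records: (year, title) pairs standing in for retrieved metadata.
records = [
    (2019, "National AI strategy and innovation policy"),
    (2020, "AI governance and regulation in the public sector"),
    (2021, "Evaluating AI policy outcomes"),
    (2021, "AI strategy, ethics and regulation"),
    (2022, "Regional approaches to AI regulation"),
]

# Keywords chosen up front, mirroring the study's keyword-based selection.
keywords = {"ai", "artificial intelligence"}

def matches(title: str) -> bool:
    """Whole-word match for single-word keywords; substring match for phrases."""
    lowered = title.lower()
    tokens = {word.strip(",.") for word in lowered.split()}
    return any((" " in k and k in lowered) or k in tokens for k in keywords)

selected = [(year, title) for year, title in records if matches(title)]

# Annual scientific production: number of matched publications per year.
per_year = Counter(year for year, _ in selected)
print("Annual output:", dict(sorted(per_year.items())))

# Crude topic mapping: most frequent non-trivial words across matched titles.
stopwords = {"and", "the", "of", "in", "to", "ai"}
terms = Counter(
    word.strip(",.")
    for _, title in selected
    for word in title.lower().split()
    if word.strip(",.") not in stopwords
)
print("Top topic terms:", terms.most_common(5))
```

Run on the toy records above, this surfaces ‘regulation’ and ‘policy’ as dominant terms and shows output per year, which is the level of description at which trends such as the recent plateau would become visible.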

AI policies have been studied considerably during periods of stability, but, as the analysis emphasises, there is an urgent need to analyse these policies in the context of present-day crises such as pandemics, conflicts and environmental emergencies. Such drastic changes warrant the reshaping and re-examination of AI policies, posing challenges for policy-makers in this field.

Nonetheless, the findings of this analysis were constrained by the choice of keywords and the sole reliance on Google Scholar as the research database. The keyword selection was based on the research team’s initial knowledge, which could confine the scope of the study. The exclusive use of Google Scholar further restricts the breadth and diversity of the research surveyed, as this database may not be as comprehensive as alternatives such as Scopus and Web of Science.

There is, accordingly, an urgent need for strategic approaches to keyword selection to mitigate these limitations. Findings pertaining to global AI strategies are markedly few, indicating a need for expanded research in this area. This is especially significant in the context of aligning AI policy development with Sustainable Development Goal 9: Industry, Innovation, and Infrastructure. These insights, thus, form a vital basis for setting the agenda for future research initiatives and policy developments within the field of AI.

Moderator

Andrea was unable to participate in the discussion due to an unforeseen absence. Despite this, she committed to remaining involved by providing comments on the pertinent papers and contributing to the ongoing discussion.

The forthcoming GigaNet Annual Symposium plans an informative afternoon session during which three papers will be presented. The papers cover the crucial topics of AI policies, EU platform regulation, and a new social contract for data, to be presented by Radomir Bolgov, Robert Gorwa, and Sophie Hoogenboom respectively. The selection of these presentations reflects a focus on the forward-looking areas of Industry, Innovation, and Infrastructure.

In the interest of optimising the panel discussion, there are plans to manage time strictly. The moderator intends to introduce all speakers at the start of proceedings, thereby establishing a clear and streamlined direction for the discussion.

The concept of building a new social contract was central to the discussion, with Sophie Hoogenboom taking the lead. Her view was that creating a fresh social contract requires a vacuum or void that does not presently exist, given our current social contracts. The moderator advised Sophie to engage thoroughly with and respond to relevant theories, such as that of Alexander Fink, which could provide beneficial insights into the intricacies of the proposed creation of a new social contract.

The EU’s intricate digital policy and its extensive implications were also discussed. Concerns were raised about the complexity of mapping a policy landscape replete with interconnected, dynamic interactions, such as the roles of the European Commission and the EU External Action Service in shaping digital policy-making.

Returning to the subject of AI, the moderator commended Radomir Bolgov’s ongoing research on AI policies and encouraged him to delve into related fields such as education and health, where AI’s application and impact are profound.

Questions were raised about the size and functionality of social contracts as per Alexander Fink’s theory. The moderator expressed scepticism towards Fink’s claim that smaller groups lead to more effective social contracts, suggesting this could sit in tension with the idea of global social contracts.

In the sphere of public procurement’s potential role in social contracts, examples were referenced from municipal governments around the world, focusing on data sharing as pivotal in shaping localised social contracts. The Barcelona model was cited as a commendable example where companies providing services like transportation platforms must share data with their communities.

Furthermore, the moderator endorsed the incorporation of social contract elements, notably data sharing, into public procurement, suggesting that such an approach could foster a more engaged and informed community, echoing the success of the Barcelona model.

To conclude, the discussion ranged across the influence of AI in various sectors, EU digital policies, norms of data sharing in public procurement, and emerging trends in social contracts. Different stances, opinions, and ideas for further exploration within these areas were then outlined.
