Policy Network on Artificial Intelligence | IGF 2023

Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.

Session report

Sarayu Natarajan

Generative AI, a powerful technology that makes content generation easy and cheap, has fuelled the widespread production and dissemination of misinformation and disinformation. False information can now be created and spread across the internet and digital platforms with little effort, with negative effects on society. The rule of law plays a crucial role in curbing this spread, and concrete legal protections are necessary to address the issue effectively.

Sarayu Natarajan advocates for a context-specific and rule of law approach in dealing with the issue of misinformation and disinformation. This suggests that addressing the problem requires understanding the specific context in which false information is generated and disseminated and implementing legal measures accordingly. This approach acknowledges the importance of tailored solutions based on a solid legal framework.

The labour-intensive task of AI labelling, crucial for the functioning of generative AI, is often outsourced to workers in the global south. These workers primarily label data based on categories defined by Western companies, which can introduce bias and reinforce existing power imbalances. This highlights the need for greater inclusivity and diversity in AI development processes to ensure fair representation and avoid perpetuating inequalities.

Efforts are being made to develop large language models in non-mainstream languages, allowing a wider range of communities to benefit from generative AI. Smaller organizations that work within specific communities are actively involved in creating these language models. This represents a positive step towards inclusivity and accessibility in the field of AI, particularly in underrepresented communities and non-mainstream languages.

Mutual understanding and engagement between AI technology and policy domains are crucial for effective governance. It is essential for these two disciplines to communicate with each other in a meaningful way. Creating forums that facilitate non-judgmental discussions and acknowledge the diverse empirical starting points is critical. This allows for a more integrated and collaborative approach towards harnessing the benefits of AI technology while addressing its ethical and societal implications.

While AI developments may lead to job losses, particularly in the global north, they also have the potential to generate new types of jobs. Careful observation of the impact of AI on employment is necessary to ensure just working conditions for workers worldwide. It is important to consider the potential benefits and challenges associated with AI technology and strive for humane conditions for workers in different parts of the world.

In conclusion, the advent of generative AI has made it easier and cheaper to produce and disseminate misinformation and disinformation, posing negative effects on society. However, the rule of law, through proper legal protections, plays a significant role in curbing the spread of false information. A context-specific and rule of law approach, advocated by Sarayu Natarajan, is key to effectively addressing this issue. Inclusivity, diversity, and mutual understanding between AI technology and policy domains are crucial considerations in the development and governance of AI. It is essential to closely monitor the impact of AI on job loss and ensure fair working conditions for all.

Shamira Ahmed

The analysis focuses on several key themes related to AI and its impact, including the environment, data governance, geopolitical power dynamics, and historical injustices. It begins by highlighting the importance of data governance at the intersection of AI and the environment, a broad area that requires sustained attention and effective management.

Moving on, the analysis advocates for a decolonial-informed approach to address power imbalances and historical injustices in AI. It emphasizes the need to acknowledge and rectify historical injustices that have shaped the global power dynamics related to AI. By adopting a decolonial approach, it is believed that these injustices can be addressed and a more equitable and just AI landscape can be achieved.

Furthermore, the analysis highlights the concept of a just green digital transition, which is essential for achieving a sustainable and equitable future. This transition leverages the power of AI to drive responsible practices for the environment while also promoting economic growth and social inclusion. It emphasizes the need for a balanced approach that takes into account the needs of the environment and all stakeholders involved.

In addition, the analysis underscores the importance of addressing historical injustices and promoting interoperable AI governance innovations. It emphasizes the significance of a representative multi-stakeholder process to ensure that the materiality of AI is properly addressed and that all voices are heard. By doing so, it aims to create an AI governance framework that is inclusive, fair, and capable of addressing the challenges associated with historical injustices.

Overall, the analysis provides important insights into the complex relationship between AI and various domains. It highlights the need to consider historical injustices, power imbalances, and environmental concerns in the development and deployment of AI technologies. The conclusions drawn from this analysis serve as a call to action for policymakers, stakeholders, and researchers to work towards a more responsible, equitable, and sustainable AI landscape.

Audience

The panel discussion explored several crucial aspects of AI technology and its societal impact. One notable challenge highlighted was the difficulty in capacity building due to the rapidly changing nature of AI. It was observed that AI is more of an empirical science than an engineering product, meaning that researchers and designers often don’t know what to expect due to continual testing and experimentation. Misinformation and the abundance of information sources further exacerbate the challenges in capacity building.

The importance of providing education to a diverse range of demographics, from school children to the elderly, was also emphasised. It was recognised that ensuring high-quality education in the field of AI is vital in equipping individuals with the knowledge and skills required to navigate the rapidly evolving technological landscape. This education should be accessible to all, regardless of their age or background.

Additionally, the panel discussion shed light on the blurring boundaries between regulatory development and technical development in AI and other digital technologies. It was noted that the political domain of regulatory development and the technical domain of standards development are increasingly overlapping in the field of AI. This convergence presents unique challenges that necessitate a thoughtful approach to ensure both regulatory compliance and technical excellence.

Furthermore, the role of standards in executing regulations in the context of AI was discussed. The panel emphasised that standards are becoming an essential tool for implementing and enforcing regulations. Developing and adhering to standards can help address challenges such as interoperability, transparency, and accountability in AI systems.

The need for capacity building was also emphasised, allowing a broader stakeholder community to engage in the technical aspects of AI, which have become integral to major policy tools. The panel acknowledged that empowering a diverse and inclusive group of stakeholders, including policymakers, experts, civil society representatives, academics, and industry professionals, is crucial for the development and governance of AI technology.

The process of contributing to AI training and education through UNESCO was discussed, highlighting the involvement of a UNESCO member who distributes AI research materials and textbooks to universities, particularly in developing countries. This partnership and knowledge-sharing initiative aim to bridge the global education gap and ensure that AI education is accessible to all.

The assessment of AI systems was deemed crucial, with recognition that assessing non-technical aspects is as important as evaluating technical performance. This includes considering the wider societal impact, such as potential consequences on workers and the categorisation of people. The panel emphasised the need for assessment processes to go beyond technical measures and include potential unintended consequences and ethical considerations.

Furthermore, it was acknowledged that the assessment of AI systems should extend beyond their current context and consider performance in future or “unbuilt” scenarios. This reflects the need to anticipate and mitigate potential negative outcomes resulting from the deployment of AI technology and to ensure its responsible development and use.

In conclusion, the panel discussion provided valuable insights into the challenges and opportunities associated with AI technology. The rapidly changing nature of AI necessitates continuous capacity building, particularly in the education sector, to equip individuals with the necessary skills and knowledge. Moreover, the convergence of regulatory and technical development in AI requires a thoughtful and inclusive approach, with standards playing a critical role in regulatory compliance. The assessment of AI systems was identified as a key area, underscoring the importance of considering non-technical aspects and potential societal impacts. Overall, the discussion emphasised the need for responsible development, governance, and stakeholder engagement to harness the potential of AI technology while mitigating its risks.

Nobuo Nishigata

The analysis reveals several key points regarding AI governance. Firstly, it emphasizes the importance of striking a balance between regulation and innovation in AI initiatives. This suggests that while regulations are necessary to address concerns and ensure ethical practices, there should also be room for innovation and advancement in the field.

Furthermore, the report highlights the need for AI policy development to take into consideration perspectives and experiences from the Global South. This acknowledges the diverse challenges and opportunities that different regions face in relation to AI adoption and governance.

The analysis also discusses the dual nature of AI technology, presenting both risks and opportunities. It underscores the significance of discussing uncertainties and potential risks associated with AI, alongside the numerous opportunities it presents. Additionally, it highlights the potential of AI to significantly contribute to addressing economic and labor issues, as evidenced by Japan considering AI as a solution to its declining labour force and sustaining its economy.

Another noteworthy point raised in the analysis is the recommendation to view AI governance through the Global South lens. This suggests that the perspectives and experiences of developing nations should be taken into account to ensure a more inclusive and equitable approach to AI governance.

The analysis also provides insights into the ongoing Hiroshima process focused on generative AI. It highlights that discussions within the G7 delegation task force are centred around a code of conduct from the private sector. Notably, the report suggests support for this approach, emphasising the importance of a code of conduct in addressing concerns such as misinformation and disinformation.

Flexibility and adaptability in global AI governance are advocated for in the analysis. It argues that AI is a rapidly evolving field, necessitating governance approaches that can accommodate changing circumstances and allow governments to tailor their strategies according to their specific needs.

Collaboration and coordination between organisations and governments are seen as crucial in AI policy-making, skills development, and creating AI ecosystems. The analysis suggests that international collaborations are needed to foster beneficial AI ecosystems and capacity building.

The importance of respecting human rights, ensuring safety, and fostering accountability and explainability in AI systems are also highlighted. These aspects are considered fundamental in mitigating potential harms and ensuring that AI technologies are used responsibly and ethically.

In addition to these main points, the analysis touches upon the significance of education and harmonisation. It suggests that education plays a key role in the AI governance discourse, and harmonisation is seen as important for the future.

Overall, the analysis brings attention to the multifaceted nature of AI governance, advocating for a balanced approach that takes into account various perspectives, fosters innovation, and ensures ethical and responsible practices. It underscores the need for inclusive and collaborative efforts to create effective AI policies and systems that can address the challenges and harness the opportunities presented by AI technology.

Jose

The analysis of the speakers’ points highlights several important issues. Representatives from the Global South stress the importance of gaining a deeper understanding of movements and policies within their regions. This is crucial for fostering an inclusive approach to technology development and governance.

A significant concern raised in the analysis is the intricate link between labour issues and advancements in the tech industry. In Brazil, for instance, there has been a rise in deaths among drivers on delivery platforms, which is attributed to the pressure exerted by new platforms demanding different delivery times. This highlights the need to address the adverse effects of tech advancements on workers’ well-being.

The impact of the tech industry on sustainability is another topic of debate in the analysis. There are concerns about the interest shown by tech leaders in Bolivia’s minerals, particularly lithium, following political instability. This raises questions about responsible consumption and production practices within the tech industry and the environmental consequences of resource extraction.

The use of biometric systems for surveillance purposes comes under scrutiny as well. In Brazil, the analysis reveals that the criminal system’s structural racism is being automated and accelerated by these technologies. This raises concerns about the potential for discriminatory practices and human rights violations resulting from the use of biometric surveillance.

There is a notable push for banning certain systems in Brazil, as civil society advocates for regulations to protect individuals’ rights and privacy in the face of advancing technology. This highlights the need for robust governance and regulation measures in the tech industry to prevent harmful impacts.

The global governance of AI is also a point of concern. The analysis highlights the potential risk of a race to the bottom due to geopolitical competition and various countries pushing their narratives. This emphasizes the importance of global collaboration and cooperation to ensure ethical and responsible use of AI technologies.

Countries from the global south argue for the need to actively participate and push forward their interests in the governance of AI technologies. Forums like BRICS and G20 are suggested as platforms to voice these concerns and advocate for more inclusive decision-making processes.

The analysis also sheds light on the issue of inequality in the global governance of technology. It is observed that certain groups seem to matter more than others, indicating the presence of power imbalances in decision-making processes. This highlights the need for addressing these inequalities and ensuring that all voices are heard and considered in the governance of technology.

Furthermore, the extraction of resources for technology development is shown to have significant negative impacts on indigenous groups. The example of the Yanomami people in the Brazilian Amazon suffering due to the activities of illegal gold miners underscores the need for responsible and sustainable practices in resource extraction to protect the rights and well-being of indigenous populations.

Lastly, tech workers from the global south advocate for better working conditions and a greater say in the algorithms and decisions made by tech companies. This emphasizes the need for empowering workers and ensuring their rights are protected in the rapidly evolving tech industry.

In conclusion, the analysis of the speakers’ points highlights a range of issues in the intersection of technology, governance, and the impacts on various stakeholders. It underscores the need for deeper understanding, robust regulation, and inclusive decision-making processes to tackle challenges and ensure that technology benefits all.

Moderator – Prateek

The Policy Network on Artificial Intelligence (P&AI) is a newly established policy network that addresses matters related to AI and data governance. It originated from discussions held at IGF 2022 in Addis Ababa, and it has recently released its first report.

The first report produced by the P&AI is a collaborative effort and sets out to examine various aspects of AI governance. It specifically focuses on the AI lifecycle for gender and race inclusion and outlines strategies for governing AI to ensure a just twin transition. The report takes into account different regulatory initiatives on artificial intelligence from various regions, including those from the Global South.

One of the noteworthy aspects of the P&AI is its working spirit and commitment to a multi-stakeholder approach. The working group of P&AI was formed in the true spirit of multi-stakeholderism at the IGF, and they collaborated closely to draft this first report. This approach ensures diverse perspectives and expertise are considered in shaping the policies and governance frameworks related to AI.

Prateek, an individual interested in understanding the connectivity between AI governance and internet governance, sought insights on the interoperability of these two domains. To gain a better understanding of the implications of internet governance on AI governance, Prateek engaged Professor Xing Li and requested a comparison between the two in terms of interoperability.

During discussions, Jose highlighted the need for a deeper understanding of local challenges faced in the Global South in relation to AI. This includes issues concerning labor, such as the impacts of the tech industry on workers, as well as concerns surrounding biometric surveillance and race-related issues. Jose called for more extensive debates on sustainability and the potential risks associated with over-reliance on technological solutions. Additionally, Jose stressed the underrepresentation of the Global South in AI discussions and emphasized the importance of addressing their specific challenges.

In the realm of AI training and education, Pradeep mentioned UNESCO’s interest in expanding its initiatives in this area. This focus on AI education aligns with SDG 4: Quality Education, and UNESCO aims to contribute to this goal by providing enhanced training programs in AI.

In a positive gesture of collaboration and sharing information, Prateek offered to connect with an audience member and provide relevant information about UNESCO’s education work. This willingness to offer support and share knowledge highlights the importance of partnerships and collaboration in achieving the goals set forth by the SDGs.

In conclusion, the Policy and Analysis Initiative (P&AI) is a policy network that aims to address AI and data governance matters. Their first report focuses on various aspects of AI governance, including gender and race inclusion and a just twin transition. Their multi-stakeholder approach ensures diverse perspectives are considered. Discussions during the analysis highlighted the need to understand local challenges in the Global South, the significance of AI education, and the connectivity between AI and internet governance. Collaboration and information sharing were also observed, reflecting the importance of partnerships in achieving the SDGs.

Maikki Sipinen

The Policy Network on Artificial Intelligence (P&AI) is a relatively new initiative that focuses on policy matters related to AI and data governance. It emerged from discussions held at the IGF 2022 meeting in Addis Ababa last year, where the importance of these topics was emphasised. The P&AI report, which was created with the dedication of numerous individuals, including the drafting team leaders, underlines the significance of IGF meetings as catalysts for new initiatives like P&AI.

One of the key arguments put forward in the report is the need to introduce AI and data governance topics in educational institutions. The reasoning behind this is to establish the knowledge and skills required to navigate the intricacies of AI among both citizens and the labor force. The report points to the success of the Finnish AI strategy, highlighting how it managed to train over 2% of the Finnish population in the basics of AI within a year. This serves as strong evidence for the feasibility and impact of introducing AI education in schools and universities.

Another argument highlighted in the report involves the importance of capacity building for civil servants and policymakers in the context of AI governance. The report suggests that this aspect deserves greater focus and attention within the broader AI governance discussions. By enhancing the knowledge and understanding of those responsible for making policy decisions, there is an opportunity to shape effective and responsible AI governance frameworks.

Diversity and inclusion also feature prominently in the report’s arguments. The emphasis is on the need for different types of AI expertise to work collaboratively to ensure inclusive and fair global AI governance. By bringing together individuals from diverse backgrounds, experiences, and perspectives, the report suggests that more comprehensive and equitable approaches to AI governance can be established.

Additionally, the report consistently underscores the significance of capacity building throughout all aspects of AI and data governance. It is viewed as intrinsically linked and indispensable for the successful development and implementation of responsible AI policies and practices. The integration of capacity building recommendations in various sections of the report further reinforces the vital role it plays in shaping AI governance.

In conclusion, the P&AI report serves as a valuable resource in highlighting the importance of policy discussions on AI and data governance. It emphasises the need for AI education in educational institutions, capacity building for civil servants and policymakers, and the inclusion of diverse perspectives in AI governance discussions. These recommendations contribute to the broader goal of establishing responsible and fair global AI governance frameworks.

Owen Larter

The analysis highlights several noteworthy points about responsible AI development. Microsoft is committed to developing AI in a sustainable, inclusive, and globally governed manner. This approach is aligned with SDG 9 (Industry, Innovation and Infrastructure), SDG 10 (Reduced Inequalities), and SDG 17 (Partnerships for the Goals). Microsoft has established a Responsible AI Standard to guide their AI initiatives, demonstrating their commitment to ethical practices.

Owen emphasises the importance of transparency, fairness, and inclusivity in AI development. He advocates for involving diverse representation in technology design and implementation. To this end, Microsoft has established Responsible AI Fellowships, which aim to promote diversity in tech teams and foster collaboration with individuals from various backgrounds. The focus on inclusivity and diversity helps to ensure that AI systems are fair and considerate of different perspectives and needs.

Additionally, open-source AI development is highlighted as essential for understanding and safely using AI technology. Open-source platforms enable the broad distribution of AI benefits, fostering innovation and making the technology accessible to a wider audience. Microsoft, through its subsidiary GitHub, is a significant contributor to the open-source community. By embodying an open-source ethos, they promote collaboration and knowledge sharing, contributing to the responsible development and use of AI.

However, it is crucial to strike a balance between openness and safety/security in AI development. Concerns exist about the trade-off between making advanced AI models available through open-source platforms versus ensuring the safety and security of these models. The analysis suggests a middle-path approach, promoting accessibility to AI technology without releasing sensitive model weights, thereby safeguarding against potential misuse.

Furthermore, the need for a globally coherent framework for AI governance is emphasised. The advancement of AI technology necessitates establishing robust regulations to ensure its responsible and ethical use. The conversation around global governance has made considerable progress, and the G7 code of conduct, under Japanese leadership, plays a crucial role in shaping the future of AI governance.

Standards setting is proposed as an integral part of the future governance framework. Establishing standards is essential for creating a cohesive global framework that promotes responsible AI development. The International Civil Aviation Organization (ICAO) is highlighted as a potential model, demonstrating the effective implementation of standards in a complex and globally interconnected sector.

Understanding and reaching consensus on the risks associated with AI is also deemed critical. The analysis draws attention to the successful efforts of the Intergovernmental Panel on Climate Change in advancing understanding of risks related to climate change. Similarly, efforts should be made to comprehensively evaluate and address the risks associated with AI, facilitating informed decision-making and effective risk mitigation strategies.

Investment in AI infrastructure is identified as crucial for promoting the growth and development of AI capabilities. Proposals exist for the creation of public AI resources, such as the National AI Research Resource, to foster innovation and ensure equitable access to AI technology.

Evaluation is recognised as an important aspect of AI development. Currently, there is a lack of clarity in evaluating AI technologies. Developing robust evaluation frameworks is crucial for assessing the effectiveness, reliability, and ethical implications of AI systems, enabling informed decision-making and responsible deployment.

Furthermore, the analysis highlights the importance of social infrastructure development for AI. This entails the establishment of globally representative discussions to track AI technology progress and ensure that the benefits of AI are shared equitably among different regions and communities.

The analysis also underscores the significance of capacity building and actions in driving AI development forward. Concrete measures should be taken to bridge the gap between technical and non-technical stakeholders, enabling a comprehensive understanding of the socio-technical challenges associated with AI.

In conclusion, responsible AI development requires a multi-faceted approach. It involves developing AI in a sustainable, inclusive, and globally governed manner, promoting transparency and fairness, and striking a balance between openness and safety/security. It also necessitates the establishment of a globally coherent framework for AI governance, understanding and addressing the risks associated with AI, investing in AI infrastructure, conducting comprehensive evaluations, and developing social infrastructure. Capacity building and bridging the gap between technical and non-technical stakeholders are crucial for addressing the socio-technical challenges posed by AI. By embracing these principles, stakeholders can ensure the responsible and ethical development, deployment, and use of AI technology.

Xing Li

The analysis explores various aspects of AI governance, regulations for generative AI, the impact of generative AI on the global south, and the need for new educational systems in the AI age. In terms of AI governance, the study suggests that it can learn from internet governance, which features organisations such as the IETF for technical interoperability and ICANN for name and number assignment. The shift from a US-centric model to a global model in internet governance is viewed positively and can serve as an example for AI governance.

The discussion on generative AI regulations focuses on concerns that early regulations may hinder innovation. It is believed that allowing academics and technical groups the space to explore and experiment is crucial for advancing generative AI. Striking a balance between regulation and fostering innovation is of utmost importance.

The analysis also highlights the opportunities and challenges presented by generative AI for the global south. Generative AI, built on algorithms, computing power, and data, has the potential to create new opportunities for development. However, it also poses challenges that need to be addressed to fully leverage its benefits.

Regarding education, the study emphasises the need for new educational systems that can adapt to the AI age. Outdated educational systems must be revamped to meet the demands of the digital era. Four key educational factors are identified as important in the AI age: critical thinking, fact-based reasoning, logical thinking, and global collaboration. These skills are essential for individuals to thrive in an AI-driven world.

Finally, the analysis supports the establishment of a global AI-related education system. This proposal, advocated by Stanford University Professor Fei-Fei Li, is seen as a significant step akin to the creation of modern universities hundreds of years ago. It aims to equip individuals with the necessary knowledge and skills to navigate the complexities and opportunities presented by AI.

In conclusion, the analysis highlights the importance of drawing lessons from internet governance, balancing regulations to foster innovation in generative AI, addressing the opportunities and challenges of generative AI in the global south, and reimagining education systems for the AI age. These insights provide valuable considerations for policymakers and stakeholders shaping the future of AI governance and its impact on various aspects of society.

Jean Francois ODJEBA BONBHEL

The analysis provides different perspectives on the development and implementation of artificial intelligence (AI). One viewpoint emphasizes the need to balance the benefits and risks of AI. It argues for the importance of considering and mitigating potential risks while maximizing the advantages offered by AI.

Another perspective highlights the significance of accountability in AI control. It stresses the need to have mechanisms in place that hold AI systems accountable for their actions, thereby preventing misuse and unethical behavior.

Education is also emphasized as a key aspect of AI development and understanding. The establishment of a specialized AI school in Congo at all educational levels is cited as evidence of the importance placed on educating individuals about AI. This educational focus aims to provide people with a deeper understanding of AI and equip them with the necessary skills to navigate the rapidly evolving technological landscape.

The analysis suggests that AI development should be approached with careful consideration of risks and benefits, control mechanisms, and education. By adopting a comprehensive approach that addresses these elements, AI can be developed and implemented responsibly and sustainably.

A notable observation from the analysis is the emphasis on AI education for children. A program specifically designed for children aged 6 to 17 is implemented to develop their cognitive skills with technology and AI. The program’s focus extends beyond making children technology experts; it aims to equip them with the necessary understanding and skills to thrive in a future dominated by technology.

Furthermore, one speaker raises the question of whether the world being created aligns with the aspirations for future generations. The proposed solution involves providing options, solutions, and education on technology to empower young people and prepare them for the technologically advanced world they will inhabit.

In conclusion, the analysis underscores the importance of striking a balance between the benefits and risks of AI, ensuring accountability in AI control, and promoting education for a better understanding and access to AI innovations. By considering these facets, the responsible and empowering development and implementation of AI can be achieved to navigate the evolving technological landscape effectively.


Global Internet Governance Academic Network Annual Symposium | Part 1 | IGF 2023 Day 0 Event #112

Session report

Yik Chan Chin

In discussions concerning China’s data policy and the right to data access, relevant to Sustainable Development Goal 9 (Industry, Innovation, and Infrastructure) and SDG 16 (Peace, Justice and Strong Institutions), China’s distinctive interpretation of data access has become a focal point. According to the analysis, the academic debate and national policy in China are driven primarily by an approach that treats data as a type of property. This perspective divides the rights associated with data into three fundamental components: access, processing, and exchange rights. It posits that these rights can be traded to generate value, as explicitly stated in the government’s policy documents.

However, this policy approach has drawn substantial critique for disregarding other significant aspects of data access. Chinese policies predominantly fail to recognise data’s inherent character as a public good. Both the academic sphere and government policy scarcely acknowledge this, undervaluing data’s potential contribution to societal advancement beyond merely commercial gains. Along these lines, the rights and benefits of individual citizens are often overlooked in favour of enterprise-related interests.

The country’s data access policy is primarily designed to unlock potential commercial value, especially within enterprise data – an aspect contributing to the imbalance of power between individual users and corporations. Such power dynamics remain largely unaddressed in China’s data-related discussions and policy settings, potentially leading to a power imbalance detrimental to individuals.

Given these observations, the overall sentiment towards the Chinese data policy appears to be broadly negative. Acknowledging data’s essence as a public good and according importance to individual rights and power balances would be fundamental components for a more favourable policy formulation and discourse. The inclusion of these elements will ensure that the data policy reflects the principles of SDG 9 and SDG 16, aiming for a balance between enterprise development and individual rights.

Vagisha Srivastava

Web Public Key Infrastructure (WebPKI), an integral component of internet security, underpins digital signing of documents, signature verification, and document encryption. Its significance is illustrated by the DigiNotar incident, in which that certificate authority’s misissuance of over 500 certificates compromised internet security, underlining the role digital certificates play in authenticating websites to web clients.

WebPKI governance intriguingly falls within the public-goods paradigm. Although governments traditionally deliver public goods and the commercial market handles private goods, in the case of WebPKI private entities contribute substantially, defying conventional dynamics in the production of both public and private goods. That said, government involvement is not entirely absent: the US Federal PKI and several Asian national Certification Authorities (CAs) actively participate.

The claim that private entities are spearheading WebPKI security governance presents certain concerns. Governments may find themselves somewhat hamstrung when attempting to represent global public interest or generate global public goods in this complex context. As a result, platforms which are directly affected by an insecure web environment (such as browsers and operating systems) secure vital roles in security governance.

The Certificate Authority and Browser Forum (CA/Browser Forum), established in 2005, is crucial in coordinating WebPKI-related policies. The forum serves as a hub where root store operators coordinate policies and gather feedback directly from CAs. Since its inception, its influence has been such that it sets baseline requirements for CAs on issues like identity vetting and certificate content.

Regarding the internal workings of such organisations, consensus is built before the formal vote: any language put to a vote has already been agreed upon, and the consensus mechanism is established pre-voting. Notably, there is curiosity surrounding how browsers, an integral part of the internet infrastructure, respond to such voting processes.

To conclude, internet security and its governance operate within a complex realm driven by both private and public actors. Entities like WebPKI and the Certificate Authority and Browser Forum play pivotal roles, and the power dynamics and responsibilities between these players shape the continued evolution of policies related to internet security.
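
The governance story above rests on a simple technical mechanism: browsers ship a root store, and a site’s certificate is trusted only if it chains up to one of those roots. The sketch below is a toy model of that chain walk, with invented names throughout; real WebPKI validation uses X.509 certificates and cryptographic signature checks, which are deliberately elided here.

```python
from datetime import date

# Toy model of browser-style chain validation. Certificates are plain
# dictionaries and the signature check is elided; only the trust-anchoring
# logic (issuer links ending at a root store entry) is illustrated.
TRUST_STORE = {"Example Root CA"}  # hypothetical root shipped with the browser

def validate_chain(chain, today, trust_store=TRUST_STORE):
    """Walk a leaf-to-root chain: every certificate must be unexpired,
    each issuer must match the next certificate's subject, and the
    final certificate must be a trusted root."""
    for cert, issuer_cert in zip(chain, chain[1:]):
        if cert["expires"] < today:
            return False          # an expired certificate breaks the chain
        if cert["issuer"] != issuer_cert["subject"]:
            return False          # broken issuer link
    root = chain[-1]
    return root["expires"] >= today and root["subject"] in trust_store

chain = [
    {"subject": "example.com", "issuer": "Example Intermediate CA",
     "expires": date(2026, 1, 1)},
    {"subject": "Example Intermediate CA", "issuer": "Example Root CA",
     "expires": date(2027, 1, 1)},
    {"subject": "Example Root CA", "issuer": "Example Root CA",
     "expires": date(2030, 1, 1)},
]
print(validate_chain(chain, date(2024, 1, 1)))  # True: chain ends at a trusted root
```

A rogue CA in this model fails the check only once root store operators remove it from the trust set, which is precisely why root store policy carries so much governance weight.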

Kamesh Shekar

The in-depth analysis underscores the urgent necessity for a comprehensive, 360-degree approach to the artificial intelligence (AI) lifecycle. This involves a principle-based ecosystem approach, ensuring nothing in the process is overlooked and emphasising coverage that is as unbiased and complete as possible. The engagement of various stakeholders at each stage of the AI lifecycle, from inception and development through to end-user application, is seen as pivotal in driving and maintaining the integrity of AI innovation.

The principles upon which this ecosystem approach is formed have been derived from a range of globally respected frameworks. These include guidelines from the Organisation for Economic Co-operation and Development (OECD), the United Nations (UN), the European Union (EU), and notably, India’s G20 declaration. Taking these well-established and widely accepted frameworks on board strengthens the argument for thorough mapping principles for varied stakeholders in the AI arena.

The analysis also delves into the friction that can occur around the interpretation and application of said principles. Distinct differences are highlighted, for instance, in the context of AI facets such as the ‘human in the loop’, illustrating the different approaches stakeholders adopt at various lifecycle stages. This underscores the importance of operationalisation of principles at every step of the AI lifecycle, necessitating a concrete approach to implementation.

A key observation in the analysis is the central role the government plays in overseeing the implementation of the proposed framework. Whether examining domestic scenarios or international contexts, the study heavily emphasises the power and influence legislative bodies hold in implementing the suggested framework. This extends to recommending an international cooperation approach and recognising the potentially pivotal role India could play within the Global Partnership on Artificial Intelligence (GPAI).

The responsibility of utilising these systems responsibly does not rest solely with the developers of AI technologies. The end-users and impacted populations are also encouraged to take on the mantle of responsible users, a sentiment heavily emphasised in the paper. In this thread, the principles and operationalisation for responsible use are elucidated, urging a thoughtful and ethical application of AI technologies.

An essential observation in the analysis is the lifecycle referred to, which has been derived from and informed by both the National Institute of Standards and Technology (NIST) and OECD, with a handful of additional aspects added and validated within the paper. This perspective recognises and incorporates substantial work already performed in the domain whilst adding fresh insights and nuances.

As a concluding thought, the analysis recognises the depth and breadth of the topics covered, calling for further in-depth discussions. This highlights an open stance towards continuous dialogue and the potential for further exploration and debate, possible in more detailed, offline conversations. As such, this comprehensive and thorough analysis offers a wealth of insights and provides excellent food for thought for any stakeholder in the AI ecosystem.
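
One way to read the paper’s ‘full circle’ requirement operationally is as a coverage check: every lifecycle stage must carry the principles assigned to it, with some principles required everywhere. The snippet below is only an illustration of that idea; the stage and principle names are invented for the example, not taken from the paper.

```python
# Toy coverage check for a principle-based lifecycle approach. Stage and
# principle names are invented for illustration, not taken from the paper.
LIFECYCLE_PRINCIPLES = {
    "data collection":   {"privacy", "fairness"},
    "model development": {"fairness", "transparency"},
    "deployment":        {"accountability", "human oversight"},
    "end use":           {"accountability"},
}

REQUIRED_EVERYWHERE = {"accountability"}  # assumed universal principle

def coverage_gaps(mapping, required=REQUIRED_EVERYWHERE):
    """Return the stages where a universally required principle is
    missing -- i.e. where the lifecycle is not covered 'full circle'."""
    return sorted(stage for stage, principles in mapping.items()
                  if not required <= principles)

print(coverage_gaps(LIFECYCLE_PRINCIPLES))  # ['data collection', 'model development']
```

A check of this shape makes the operationalisation question concrete: gaps surface as a list of under-covered stages rather than an abstract complaint.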

Kazim Rizvi

The Dialogue, a reputed tech policy think-tank, has authored a comprehensive paper on the subject of responsible Artificial Intelligence (AI) in India. The researchers vehemently advocate for the need to integrate specific principles beyond the deployment stages, encompassing all facets of AI. These principles, they assert, should be embedded within the design and development processes, especially during the data collection and processing stages. Furthermore, they argue for the inclusion of these principles in both the deployment and usage stages of AI by all stakeholders and consumers.

In their study, the researchers acknowledge both the benefits and challenges brought about by AI. Notably, they commend the myriad ways AI has enhanced daily life and professional tasks. Simultaneously, they draw attention to the intrinsic issues linked with AI, specifically around data collection, data authenticity, and potential risks tied to the design and usage of AI technology.

They dispute the notion of stringent regulation of AI at the onset. Instead, the researchers propose a joint venture, where civil society, industry, and academia embark on a journey to understand the nuances of deploying AI responsibly. This approach would lead to the identification of challenges and the creation of potential solutions appropriate for an array of bodies, including governments, scholars, development organisations, multilateral organisations, and tech companies.

The researchers acknowledge the potential risks that accompany the constant evolution of AI. While they recall that AI has been in existence for several decades, the study emphasises that emerging technologies always have accompanying risks. As the usage of AI expands, the researchers recommend a cautious, steady monitoring of potential harms.

The researchers also advise a global outlook for understanding AI regulation. They posit that a general sense of regulation already exists internationally. What’s more, they suggest that as AI continues to grow and evolve, its regulatory framework must do the same.

In conclusion, the research advocates for a multi-pronged approach that recognises both the assets and potential dangers of AI, whilst promoting ongoing research and the development of regulations as AI technology progresses. The researchers present a balanced and forward-thinking strategy that could create a framework for AI that is responsible, safe, and of maximum benefit to all users.

Nanette Levinson

The analysis unearths the growing uncertainty and expected institutional alterations taking centre stage within the sphere of cyber governance. This is based on several significant indicators of institutional change that have come to the fore. Indicators include the noticeable absence of a concrete analogy or inconsistent isomorphic poles, a shift in legitimacy attributed to an idea, and the emergence of fresh organisational arrangements – these signify the dynamic structures and attitudes within the sector.

In a pioneering cross-disciplinary approach, the analysis has linked these indicators of institutional change to an environment of heightened uncertainty and turbulence, as evidenced from the longitudinal study of the Open-Ended Working Group.

An unprecedented shift within the United Nations’ cybersecurity narrative was also discerned. An ‘idea galaxy’ encapsulating concepts such as human rights, gender, sustainable development, non-state actors, and capacity building was prevalent in the discourse from 2019 through 2021. However, an oppositional idea galaxy, unveiled by Russia, China, Belarus, and a handful of other nations during the Open-Ended Working Group’s final substantive session in 2022, highlighted their commitment to novel cybersecurity norms. The emergence of these opposing ideals gave rise to duelling ‘idea galaxies’, signalling a divergence in shared ideologies.

This conflict between the two ‘idea galaxies’ was managed within the Open-Ended Working Group via ‘footnote diplomacy.’ Herein, the Chair acknowledged both clusters in separate footnotes, paving the way for future exploration and dialogue, whilst adequately managing the current conflict.

Of significant note is how these shifts, underpinned by tumultuous events like the war in Ukraine, are catalysing potential institutional changes in cyber governance. These challenging times, underscored by clashing ideologies and external conflict, seem to herald the potential cessation of long-standing trajectories of internet governance involving non-state actors.

In conclusion, there is growing uncertainty surrounding the future of multi-stakeholder internet governance due to the ongoing conflict within these duelling idea galaxies. The intricate and comprehensive analysis paints a picture of the interconnectivity between global events, institutional changes, and evolving ideologies in shaping the future course of cyber governance. These indicate a potential turning point in the journey of cyber governance.

Audience

This discussion scrutinises the purpose and necessity of government-led mega constellations in the sphere of satellite communication. The principal argument displayed scepticism towards governments’ reasoning for setting up these constellations, with a primary focus on their significant role in internet fragmentation. Intriguingly, some governments have proposed limitations on the distribution of signals from non-domestic satellites within their territories. The motives behind this proposal were questioned, however: why would a nation require its own mega constellation if its interests and services were confined to its own territory?

Furthermore, the discourse touched on the subject of ethical implications within the domain of artificial intelligence (AI). It highlighted an often-overlooked aspect in the responsible use of AI—the end users. While developers and deployers frequently dominate this dialogue, the subtle yet pivotal role of end-users was underplayed. This is especially significant considering that generative AI is often steered by these very end-users.

Another facet of the AI argument was the lack of clarity and precision in articulating arguments. Participants underscored the use of ambiguous terminologies like ‘real-life harms’, ‘real-life decisions’, and ‘AI solutions’. The criticism delved into the intricacies of the AI lifecycle model, emphasising an unclear derivation and an inconsistent focus on AI deployers rather than a comprehensive approach including end-users. The model was deemed deficient in its considerations of the impacts on end-users in situations such as exclusion and false predictions.

However, the discussion was not wholly sceptical. One audience member offered a contrasting outlook, suggesting that stringent regulation of emerging technologies like AI might stifle innovation and progress, and equating such regulation, by historical analogy, to restrictions imposed on the printing press in 1452.

Throughout the discourse, themes consistently aligned with Sustainable Development Goal 9, thus underscoring the significance of industry, innovation, and infrastructure in our societies. This dialogue serves as a reflective examination, not just of these topics, but also of how they intertwine and impact one another. It accentuates the importance of addressing novel challenges and ethical considerations engendered by technological advances in satellite communication and AI.

Jamie Stewart

The rapid advancement of digital technologies and internet connectivity in Southeast Asia is driving the development of assorted regulatory instruments within the region, underwritten by extensive investment in surveillance capacities. This rapid expansion, however, is provoking ever-growing concerns over potential misuse against human rights defenders, stirring up a negative sentiment.

A report from the Office of the United Nations High Commissioner for Human Rights (OHCHR) on cybersecurity in Southeast Asia draws attention to the potential use of such legislation against human rights defenders. Concerns are heightening even around the wider consensus effort to combat cybercrime, with the UN General Assembly expressing particular apprehension about misuse, especially of provisions relating to surveillance, search, and seizure.

What emerges starkly from the research is a disproportionate impact of cyber threats and online harassment on women. The power dynamics in cyberspace perpetuate those offline, leading to a targeted attack on female human rights defenders. This gender imbalance along with the augmented threat to cybersecurity raises concerns, aligning with Sustainable Development Goals (SDG) 5 (Gender Equality) and SDG 16 (Peace, Justice, and Strong Institutions).

The promotion of human-centric cybersecurity with a gendered perspective charts a course of positive sentiment. The protective drive is for people and human rights to be the core elements of cybersecurity. Recognition is thus given to the need for a gendered analysis, with research bolstered by collaborations with the UN Women Regional Data Centre in the Asia Pacific.

An in-depth exploration of this matter further uncovers a widespread range of threats, both on a personal and organisational level. This elucidates the sentiment that a human-centric approach to cybersecurity is indispensable. Both state and non-state actors are found to be contributing to these threats, often in a coordinated manner, with surveillance software-related incidents being particularly traceable.

Additionally, the misuse of regulations and laws against human rights defenders and journalists is an escalating worry, prompting agreement that such misuse is indeed occurring. This concern is extended to anti-terrorism and cybercrime laws, which could potentially be manipulated against those speaking out, potentially curbing freedom of speech.

On the issue of cybersecurity policies, while their existence is acknowledged, concerns about their application are raised. Questions emerge as to whether these policies are being used in a manner protective of human rights, indicating a substantial negative sentiment towards the current state of cybersecurity. In conclusion, although the progression of digital technologies has brought widespread benefits, they also demand a rigorous protection of human rights within the digital sphere, with a marked emphasis on challenging gender inequalities.

Moderator

The GigaNet Academic Symposium, held at the Internet Governance Forum (IGF) since 2006, takes on a multitude of complex topics. This latest iteration featured four insightful presentations tackling diverse subjects ranging from digital rights and trust in the internet to challenges caused by internet fragmentation and environmental impacts. The discourse centred predominantly on Sustainable Development Goals (SDGs) 4 (Quality Education) and 9 (Industry, Innovation, and Infrastructure).

In maintaining high academic standards, the Symposium employs a stringent selection process for the numerous abstracts submitted. This cycle saw roughly 59 to 60 submissions, of which only a limited few were selected. While this guarantees quality control, it simultaneously restrains the number of presentations and hampers diversity.

Key to this Symposium was the debate on China’s access to data, specifically, the transformative influence the internet and social media platforms have exerted on the data economy. This has subsequently precipitated governance challenges primarily revolving around the role digital social media platforms play in managing data access and distribution. The proposed model for public data in China involves conditional fee access, with data analyses disseminated instead of the original datasets.
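
The ‘disseminate analyses, not datasets’ model described above can be sketched as a simple aggregation gateway. This is an illustrative toy, not a description of any actual Chinese mechanism; the records, field names, and suppression threshold are all invented.

```python
# Illustrative aggregation gateway: record-level data stays with the holder,
# and only group-level analyses above a minimum group size are released.
RAW_RECORDS = [  # hypothetical records, never published directly
    {"region": "north", "broadband_mbps": 48},
    {"region": "north", "broadband_mbps": 62},
    {"region": "south", "broadband_mbps": 35},
]

def released_analysis(records, min_group_size=2):
    """Publish per-region averages, suppressing groups too small to
    aggregate safely."""
    groups = {}
    for record in records:
        groups.setdefault(record["region"], []).append(record["broadband_mbps"])
    return {region: round(sum(vals) / len(vals), 1)
            for region, vals in groups.items()
            if len(vals) >= min_group_size}

print(released_analysis(RAW_RECORDS))  # 'south' is suppressed: only one record
```

The design choice mirrors the governance trade-off: analysts receive usable statistics while the original dataset, and any individual record within it, never leaves the data holder’s custody.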

One recurring theme in these discussions related to the state-led debate in China that posits data as marketable property rights. Stemming from government policies and the broader economic development agenda, this perspective on data has dramatically influenced Chinese academia. However, this focus has led to a significant imbalance in the data rights dialogue, with the rights of data enterprises frequently superseding those of individuals.

Environmental facets of ICT standards also commanded attention, underscoring the political and environmental rights encompassed within these standards. Moreover, the complexity of measuring the environmental impact of ICTs, which includes carbon footprint and energy consumption through to disposal, confirms the necessity of addressing the materiality of ICTs. The discussion further emphasised that governance queries relating to certificate authorities are crucial to understanding the security and sustainability of low-Earth orbit satellites, given the emergence of conflicts and connections between these areas.

Concluding the Symposium was an appreciative acknowledgement of the participants’ contributions, from submitting and reviewing abstracts to adjusting sleep schedules to participate. Transitioning to a second panel without a break, the Symposium shifted its focus towards cyber threats against women, responsible AI, and broader global internet governance. Suggestions for improvements in future sessions included clarifying and defining theoretical concepts more comprehensively, focusing empirical scopes more effectively, and emphasising the significance of consumers and end-users in cybersecurity and AI discourse. The Symposium, thus, offered a well-rounded exploration of multifaceted topics contributing to a deeper understanding of internet governance.

Berna Akcali Gur

Mega-satellite constellations are reshaping global power structures, signalling significant strategic transitions. Many powerful nations regard these endeavours, such as the proposed launch of 42,000 satellites by Starlink, 13,000 by Guowang, and 648 by OneWeb, as opportunities to solidify their space presence and exert additional control over essential global Internet infrastructure. These are deemed high-stakes strategic investments, indicating a new frontier in the satellite industry.

Furthermore, the rise of these mega constellations is met with substantial enthusiasm for their potential to bridge the global digital divide. The constellations offer broadband connectivity vital for social, economic, and governmental functions, and their low latency and high bandwidth benefit applications such as IoT, video conferencing, and video gaming.

However, concerns have been raised over the sustainable usage of increasingly congested orbital space. Resources in space are finite, and the present traffic could result in threats such as cascading collisions (the Kessler syndrome). Such a scenario could render orbits unusable, depriving future generations of the opportunity to utilise this vital space.

The European Union’s stance on space policy, particularly on the necessity of owning a mega constellation, demonstrates some contradictions. While an EU document maintains that owning a mega constellation is not essential for access, ownership is nonetheless deemed crucial from strategic and security perspectives, revealing a potentially contradictory standpoint within the Union.

Another issue is fragmentation in policy implementation due to diversification in government opinions, as demonstrated by the decoupling of 5G infrastructure where groups of nations have decided against utilising each other’s technology due to cybersecurity issues. With the rise in the concept of cyber sovereignty, governments are increasingly regarding mega constellations as sovereign infrastructure vital for their cybersecurity.

Lastly, data governance is a significant concern for countries intending to utilise mega constellations. These countries may require that constellations maintain ground stations within their territories, thereby exercising control over cross-border data transfers, a key aspect in the digital era.

In conclusion, the growth of mega-satellite constellations presents a complex issue, encompassing facets of international politics, digital equity, environmental sustainability, policy diversification, cyber sovereignty, and data governance. As countries continue to navigate these evolving landscapes, conscious regulation and implementation strategies will be integral in harnessing the potentials of this technology.

Kimberley Anastasio

The intersection between Information Communication Technologies (ICTs) and the environment is a pivotal issue that has been brought into focus by major global institutions. For the first time, the Internet Governance Forum highlighted this interconnectedness by setting the environment as a main thematic track in 2020. This decision evidences increasing international acknowledgment of the symbiosis between these two areas. This harmonisation aligns with two key Sustainable Development Goals (SDGs): SDG 9, Industry, Innovation and Infrastructure; and SDG 13, Climate Action, signifying a global endeavour to foster innovative solutions whilst advocating sustainable practices.

In pursuit of a more sustainable digital arena, organisations worldwide are directing efforts towards developing ‘greener’ internet protocols. Within this landscape, the deep-rooted role of technology in the communication field has driven an elevated demand for advanced and sustainable communication systems. This paints a picture of a powerful transition towards creating harmony between digital innovation and environmental stewardship.

Within ICTs, standardisation is another topic with international resonance. This critical process promotes uniformity across the sector, regulates behaviours, and ensures interoperability. Together, these benefits contribute to the formation of a more sustainable economic ecosystem. The International Telecommunication Union (ITU), a renowned authority within the industry, has upheld these eco-friendly values with over 140 standards pertaining to environmental protection. Concurrently, ongoing environmental debates within the Internet Engineering Task Force (IETF) suggest a broader trend towards heightened environmental consciousness within the ICT sector.

The materiality and quantification of ICTs are identified as crucial facets to environmental sustainability. Measuring the environmental impact of ICTs, although challenging, is highlighted as vital. This attention underlines the physical presence of ICTs within the environment and their consequential impact. This primary focus realigns with the targets of the aforementioned SDGs 9 and 13, further emphasising the significance of ICTs within the global sustainability equation.

In parallel with these developments, a dedicated research project is being carried out on standardisation from an ICT perspective, involving comprehensive content analysis of almost 200 standards from the International Telecommunication Union and the Internet Engineering Task Force. This innovative methodology helps position the study within the wider spectrum of standardisation studies, overcoming the confines of ICT-specific research and implying broader applications for standardisation.

Alongside this larger project, a smaller but related initiative is underway. Its objective is to understand the workings of these organisations within the extensive potential of the ICT standardisation sector. The ultimate goal is to develop a focused action framework derived from existing literature and real-world experiences, underlining an active approach to problem solving.

Collectively, these discussions and initiatives portray a comprehensive and positive path globally to achieve harmony between ICT and sustainability. Whilst there are inherent challenges to overcome in this journey, the combination of focused research, standardisation, and collaborative effort provides a potent recipe for success in the pursuit of sustainable innovation.


Connecting open code with policymakers to development | IGF 2023 WS #500

Table of contents

Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.

Knowledge Graph of Debate

Session report

Helani Galpaya

Accessing timely and up-to-date data for development objectives presents a significant challenge in developing countries. It can take up to three years to obtain data after a census, leading to outdated and insufficient data. This lag in data availability hampers accurate planning and decision-making as population and migration patterns change over time. Additionally, government-produced datasets are often inaccessible to external actors like civil society and the private sector. This lack of data transparency and inclusivity limits comprehensive and integrated data analysis.

Another issue is the lack of standardisation in metadata across sectors, such as telecom and healthcare, especially in developing countries. This lack of standardisation creates challenges in data handling and cleaning. The absence of interoperability standards in healthcare sectors further complicates data utilisation and analysis.
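The interoperability gap described above can be made concrete with a small sketch: a shared metadata schema that datasets from different sectors must satisfy before they can be pooled. The field names and types below are illustrative assumptions, not an actual standard.

```python
# Hypothetical minimal metadata standard: each dataset must declare
# these fields before it can be combined with data from other sectors.
REQUIRED_FIELDS = {
    "dataset_id": str,
    "sector": str,        # e.g. "telecom", "healthcare"
    "collected_on": str,  # ISO 8601 date
    "licence": str,
}

def validate_metadata(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record conforms."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            problems.append(f"wrong type for {field}")
    return problems

ok = {"dataset_id": "lk-health-001", "sector": "healthcare",
      "collected_on": "2023-06-01", "licence": "CC-BY-4.0"}
bad = {"dataset_id": "lk-tel-042", "sector": "telecom"}
```

Running `validate_metadata(bad)` surfaces exactly what interoperability work remains before that dataset can be shared, which is the kind of check a common standard would make routine.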

Cross-border data sharing also faces challenges due to the absence of standards. This absence impedes the secure and efficient exchange of data, hampering international collaboration and partnerships. Developing more standards for cross-border data sharing is crucial for overcoming these challenges.

Working with unstructured data also poses challenges, particularly when it comes to fact-checking. There is a scarcity of credible sources, especially in non-English languages, making it difficult to identify misinformation and disinformation. Access to credible data from government sources and other reliable sources is essential, but often limited.

Efficient policy measures and rules are necessary to govern data usage while preserving privacy. The GDPR, for instance, mandates user consent for sharing personal data, which highlights the need to treat, say, weather data and personal data differently according to the level of privacy risk each entails.

The usage of unstructured data by insurance companies to influence coverage can have negative implications, potentially resulting in unfair risk classification and impacting coverage options. Ensuring fairness and equality in data usage within the insurance industry is crucial.

To address these challenges, building in-house capabilities and utilising open-source communities for government systems is recommended. Sri Lanka’s success in utilising its vibrant open-source community and building in-house capabilities for government architecture exemplifies the benefits of this approach.

The process of data sharing is hindered by the incentives to hoard data, as it is seen as a source of power. The high transaction costs associated with data sharing, due to capacity differences, also pose challenges. However, successful data partnerships that involve a middle broker have proven effective, emphasising the need for sustainable systems and case-by-case incentives for data sharing.

The evolving definition of privacy is an important consideration, as the ability to gather information on individuals has surpassed the need to solely protect their personal data. This calls for a broader understanding of digital rights and privacy protection.

In conclusion, accessing timely and up-to-date data for development objectives is a significant challenge in developing countries. Government-produced datasets are often inaccessible, and there is a lack of standardisation in metadata across sectors. The absence of standards also hampers cross-border data sharing. Working with unstructured data and fact-checking face challenges due to the scarcity of credible sources. Policy measures are necessary to govern data usage while protecting privacy. Building in-house capabilities and utilising open-source communities are recommended for government systems. The government procurement system may need revisions to promote participation from local companies and open-source solutions. Data sharing requires sustainable systems and incentives. The definition of privacy has evolved to encompass broader digital rights and privacy protection.

Audience

During the discussion, the speakers explored various aspects of open source, highlighting its benefits and concerns. One argument suggested incentivising entities to share data as a way to counteract data hoarding for competitive advantage. It was noted that certain organisations hoard data as a strategy to gain a competitive edge, but this practice hampers the accessibility and availability of data for others. Creating incentives for entities to share data, therefore, was emphasised as a vital step in promoting data openness and collaboration.

Conversely, the potential negative effects of open source were also discussed. The speakers raised concerns regarding the need to verify open source code and adhere to procurement laws. They specifically mentioned the French procurement law, expressing apprehensions about the ability to effectively verify open source code and ensure compliance with regulations. These concerns highlight the necessity for thorough scrutiny and robust governance measures when relying on open source solutions.

Building trust in open source was another significant argument put forth. In Nepal, for instance, there was a lack of trust in open source, hindering its widespread adoption across different sectors. The speakers stressed the importance of establishing mechanisms that enable the verification of open source code, ensuring its reliability and security to build trust among stakeholders. They also emphasised the need for capacity building to enhance knowledge and expertise required for verifying and utilising open source code effectively.

Overall, the sentiment surrounding the discussion varied. There was a negative sentiment towards data hoarding as a strategy for competitive advantage due to its restriction of data availability and accessibility. The potential adverse effects of open source, such as the need to verify code and comply with regulations, were also viewed negatively because of the associated challenges. However, there was a neutral sentiment towards building trust in open source and recognising the necessity for capacity building to fully leverage its benefits.

Mike Linksvayer

Mike Linksvayer, the Vice President of developer policy at GitHub, is a strong advocate for the connection between open source technology and policy work. He firmly believes that open source plays a crucial role in making the world a better place by supporting the measurement and information of policy makers about developments in the open source community. Linksvayer expresses enthusiasm about the potential of sharing aggregate data to address privacy concerns. He sees promise in technologies like confidential computing and differential privacy for data privacy and recognises the importance of balancing privacy considerations while still making open source AI models beneficial to society.
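Linksvayer's interest in sharing aggregate data while limiting privacy risk can be illustrated with a minimal differential-privacy sketch: a count query protected with Laplace noise. This is a toy illustration of the general technique, not any system discussed in the session; the function names and the choice of epsilon are assumptions.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records: list[bool], epsilon: float) -> float:
    """Return a noisy count of True records.

    A count query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon gives epsilon-differential privacy for this one query.
    """
    return sum(records) + laplace_noise(1.0 / epsilon)

# Example: publish roughly how many of 1,000 developers use a given
# licence without revealing any individual's answer.
answers = [random.random() < 0.3 for _ in range(1000)]
noisy = private_count(answers, epsilon=0.5)
```

The noisy total stays useful at this scale while any single contributor's answer is plausibly deniable, which is the trade-off the paragraph above gestures at.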

Mike Linksvayer emphasises the crucial role of archiving in software preservation and appreciates the contributions of Software Heritage in this field. He highlights the separation of preservation and making data openly available. Linksvayer sees coding as unstructured data and acknowledges the importance of data collection in research on programming trends and cybersecurity. Collaboration in software development is facilitated by platforms like Github, which provide APIs and open all events feed, enabling the sharing of aggregate data. Linksvayer believes that digital public goods, including software, data, and AI models, can be effective tools for development and sovereignty, addressing various Sustainable Development Goals (SDGs).

Promoting and supporting open source initiatives is essential, according to Linksvayer, as they drive job creation and economic growth. He cites a study commissioned by the European Commission estimating that open source contributes between €65 to €95 billion to the EU economy annually. Linksvayer also stresses the importance of cybersecurity in protecting open source code and advocates for coordinated action and investment from stakeholders, including governments.

In summary, Mike Linksvayer’s advocacy for open source technology and its connection to policy work underscores the potential for positive global change. He emphasizes the importance of sharing aggregate data, advancements in data privacy technologies, and the promotion of digital public goods. Linksvayer also highlights the economic benefits of open source and the critical need for investment in cybersecurity.

Cynthia Lo

During the discussion, several key points were highlighted by the speakers. Firstly, Software Heritage was praised for its commendable efforts in software preservation. It was mentioned that the organization is doing an excellent job in this area, but there is consensus that greater investment is needed to further enhance software preservation. This recognition emphasizes the importance of preserving software as an essential component of data preservation.

Another significant point made during the discussion was the support for assembling data into specific aggregated forms based on economies. This approach was positively received, as it provides a large set of data that can be analyzed and utilized more effectively. The availability of aggregated data based on economies allows for better understanding and decision-making in various sectors, such as the public and social sectors. This aligns with SDG 9: Industry, Innovation and Infrastructure, which promotes the development of reliable and sustainable data management practices.
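The idea of releasing data only in aggregated, economy-level form can be sketched as a count-and-suppress step, where groups below a minimum size are withheld so that an aggregate cannot single out a few individuals. The field name `economy` and the threshold `k` are illustrative assumptions, not details from the session.

```python
from collections import Counter

def aggregate_by_economy(events: list[dict], k: int = 5) -> dict:
    """Count events per economy, suppressing groups smaller than k.

    Small groups are dropped so the released aggregate cannot
    effectively point at a handful of individual contributors.
    """
    counts = Counter(e["economy"] for e in events)
    return {econ: n for econ, n in counts.items() if n >= k}

events = ([{"economy": "LK"}] * 12 + [{"economy": "NP"}] * 3
          + [{"economy": "FR"}] * 9)
summary = aggregate_by_economy(events)
# The three NP events fall below the threshold and are withheld.
```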

One noteworthy aspect discussed by Cynthia Lo was the need to safeguard user data while ensuring privacy and security. Lo mentioned the Open Terms Archive as a digital public good that records each version of a specific term. This highlights the importance of maintaining data integrity and transparency. The neutral sentiment surrounding this argument suggests a balanced consideration of the potential risks associated with user data and the need to protect user privacy.
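The versioning behaviour attributed to the Open Terms Archive, recording each version of a given term, can be sketched as content-addressed snapshots: a new version is stored only when the text actually changes. This toy class illustrates the idea and is not the project's actual implementation.

```python
import hashlib
from datetime import datetime, timezone

class TermsArchive:
    """Toy sketch: keep every distinct version of a terms-of-service text."""

    def __init__(self):
        self.versions = []  # list of (timestamp, sha256 digest, text)

    def record(self, text: str) -> bool:
        """Archive the text if it differs from the latest version.

        Returns True when a new version was stored.
        """
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if self.versions and self.versions[-1][1] == digest:
            return False  # unchanged since the last snapshot
        stamp = datetime.now(timezone.utc).isoformat()
        self.versions.append((stamp, digest, text))
        return True

archive = TermsArchive()
archive.record("You may not scrape this site.")
archive.record("You may not scrape this site.")    # duplicate, not stored
archive.record("Scraping is allowed for research.")
```

Because every stored version carries its own digest and timestamp, anyone can later verify exactly when a term changed and what it said before, which is the transparency property the paragraph above describes.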

Furthermore, the discussion touched upon the role of the private sector in providing secure data while ensuring privacy. Cynthia Lo raised the question of how public and private sectors can collaborate to release wide data sets that guarantee both privacy and data security. This consideration reflects the growing importance of data security in the digital age and the need for collaboration between different stakeholders to address this challenge. SDG 9: Industry, Innovation and Infrastructure is again relevant here, as it aims to promote sustainable development through the improvement of data security practices.

In conclusion, the discussion shed light on various aspects related to data preservation, aggregation of data, user data safeguarding, and the role of the private sector in ensuring data security. The acknowledgement of Software Heritage’s efforts emphasizes the importance of investing in software preservation. The support for assembling data into specific aggregated forms based on economies highlights the potential benefits of such an approach. The focus on safeguarding user data and ensuring privacy demonstrates the need to address this crucial issue. Lastly, the call for collaboration between the public and private sectors to release wide data sets while ensuring data security recognizes the shared responsibility for protecting data in the digital age.

Henri Verdier

In this comprehensive discussion on data, software, and government practices, several significant points are raised. One argument put forth is that valuable data can be found in the private sector, and there is a growing consensus in Europe about the need to promote knowledge and support research. The adoption of the Data Sharing and Access (DSA) policy serves as evidence of this, as it provides a specific mechanism for public research to access private data.

Furthermore, it is argued that certain data should be considered too important to remain private. The example given is understanding the transport industry system, which requires data from various transport modes and is in the interest of everyone. The French government is working on what is called ‘data of general interest’ or ‘Données d’intérêt général’ to address this issue.

The discussion also highlights the importance of data sharing and rejects the idea of waiting for perfect standardization. It is noted that delaying data sharing until perfect standardization and good metadata are achieved would hinder progress. Instead, it is suggested that raw data should be published without waiting for perfection. This approach allows for timely access and utilization of data, with the understanding that standardization and optimization can be addressed subsequently.

The protection of data privacy, consent, and the challenges of anonymizing personal data are emphasized. The European General Data Protection Regulation (GDPR) is mentioned as an example of legal requirements that mandate user consent for personal data handling. It is also noted that anonymization of personal data is not foolproof, and at some point, someone can potentially identify individuals despite anonymization attempts.
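The point that anonymisation is not foolproof is usually demonstrated with a linkage attack: records stripped of names can still be re-identified by joining their remaining attributes against a public dataset. The data below is invented purely for illustration.

```python
# "Anonymised" health records: names removed, but ZIP code, birth year
# and sex remain (classic quasi-identifiers).
health = [
    {"zip": "75001", "birth_year": 1980, "sex": "F", "diagnosis": "asthma"},
    {"zip": "75002", "birth_year": 1992, "sex": "M", "diagnosis": "diabetes"},
]

# A public register with the same attributes plus names.
voters = [
    {"name": "Alice Martin", "zip": "75001", "birth_year": 1980, "sex": "F"},
    {"name": "Bob Durand", "zip": "75002", "birth_year": 1992, "sex": "M"},
]

def reidentify(health_records: list[dict], public_records: list[dict]):
    """Join the two datasets on the shared quasi-identifiers."""
    matches = []
    for h in health_records:
        candidates = [v for v in public_records
                      if (v["zip"], v["birth_year"], v["sex"])
                      == (h["zip"], h["birth_year"], h["sex"])]
        if len(candidates) == 1:  # a unique match means re-identification
            matches.append((candidates[0]["name"], h["diagnosis"]))
    return matches

# Both "anonymous" patients are re-identified by linkage alone.
matches = reidentify(health, voters)
```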

Open source software is advocated for government use due to its cost-effectiveness, enhanced security, and contribution to democracy. France has a history of utilizing open source software within the public sector, and there are laws mandating that every software developed or financed by the government must be open source. The benefits of open source software align with the principles of transparency, collaboration, and accessibility.

The discussion also addresses the need for skilled individuals in government roles. It is argued that attracting talented individuals can be achieved through offering a mission and autonomy, rather than relying solely on high salaries. The bureaucratic processes of government organizations are criticized as complex and unappealing to skilled workers, indicating a need for reform to attract and retain talent.

In conclusion, this discussion on data, software, and government practices emphasizes the importance of a collaborative and transparent approach. It highlights the value of data in both the private and public sectors, as well as the need for data sharing, open source software, and data privacy protection. The inclusion of skilled individuals in government roles and the promotion of a substantial mission and autonomy are also seen as essential for effective governance. Ultimately, this comprehensive overview underscores the significance of responsible data and software practices in fostering innovation and safeguarding individual rights.

Speakers

Helani Galpaya, Mike Linksvayer, Cynthia Lo, Henri Verdier


Sustainable Batteries – The building blocks of a circular economy

ITU and the Secretariat of the Basel Convention will hold a workshop on the sustainable management of batteries on 26 May 2023. Batteries are essential for ICTs and the smart management of energy. By enhancing their durability and recyclability, preventing waste, and improving their design, it is possible to minimise energy usage, safeguard human and environmental health, and decrease global greenhouse gas emissions.

The workshop will discuss how managing batteries sustainably is crucial for a digital and circular economy, and how international standards can support this effort. Participants will learn about selecting batteries for ICT infrastructure and hear from experts on how sustainable batteries are essential for building a circular economy.

Webinar: Catalysing Local Innovation Ecosystems in Africa

As part of Catalyst 2030, the African Business Technology Network (ATBN) and AfriConEU will participate in a webinar featuring leaders of digital innovation hubs, ecosystem builders, and supporters of digital skills and entrepreneurship, who will discuss topics such as the digital innovation landscape in Ghana, Nigeria, Uganda, and Tanzania, and the opportunities and challenges present within these spaces. Catalyst 2030, launched in 2020 at the World Economic Forum in Davos by a group of social entrepreneurs in collaboration with governments and community groups, strives to inspire change and innovation in an effort to achieve the SDGs by 2030. From May 1-5, the group will host the ‘Catalysing Change Week’, under the theme Solutions from the Frontline.

Internet Governance Forum (IGF) Serbia 2023

The IGF Serbia returns with a new edition. On May 16, the Serbian IGF will take place in Belgrade, the capital of Serbia. The discussion will touch upon issues of cybersecurity, privacy, and infrastructure related to Serbia and the Balkan region as a whole.

WP.6 Second Forum

Artificial intelligence (AI) is used ever more frequently in our daily lives, including in search engines and rewards programmes. The development of AI chatbots, like ChatGPT, has raised questions about regulation and standardisation. The UNECE’s Working Party on Regulatory Cooperation and Standardization Policies (WP.6) Second Forum will be held virtually on May 22-26, 2023. WP.6 examines AI from two angles: gender-responsive standards and product conformity. AI can contain inherent gender biases, and products with integrated AI can evolve and become non-conformant over time. The annual meeting in May intends to address these and other regulatory challenges and to find ways to ensure that AI products benefit all consumers equally. For more information, access the following link.

Episode #23: STI Forum Side event on Building the pathway to sustainable digital transformation

ITU and Rwanda Utilities Regulatory Authority are organising a virtual event on 2 May 2023 as part of ITU’s Webinar Series on Digital Transformation.

The ‘Building the pathway to sustainable digital transformation’ event will focus on how digitisation and sustainability can work together to drive growth and achieve global goals.

The session’s objective is to explore how the ICT sector can help accelerate sustainable digital transformation and progress towards the sustainable development goals (SDGs), specifically SDG 7 (affordable and clean energy), SDG 12 (responsible consumption and production), and SDG 13 (climate action).

The event will also discuss the importance of circular products, the advancements in greening digital networks, and how the ICT sector can reach Net Zero.