Fake or advert: between disinformation and digital marketing | IGF 2023 Networking Session #171

Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.

Session report

Heloisa Massaro

The commercial marketing industry has always been a significant source of funding for newspapers and exerts a critical influence on the shaping of the information environment. Understanding how programmatic advertising works and how it finances online campaigns is crucial for making informed choices about how online advertising is structured.

In Brazil, workshops were conducted with digital marketing actors, highlighting the necessity of integrating robust risk analysis into marketing and advertising content creation. This is aimed at tackling the risks associated with disinformation and hate speech. By embedding risk analysis, marketing campaigns and advertisements can be developed with the necessary precautions to counteract disinformation.

Heloisa Massaro advocates for the development of best practices and guidelines for the advertising industry to mitigate potential negative effects on the information environment. The Internet Lab conducted a project called “Desinfo,” initiating a dialogue on best practices and guidelines in the advertising industry.

The influence of digital influencers in politics is seen as a problem because of the difficulty of separating their work as content creators from their political marketing roles. This raises concerns about the transparency and credibility of the information they disseminate.

Self-regulatory bodies play a crucial role in addressing disinformation in advertising. Discussions are held regarding measures to mitigate risks through self-regulation, promoting responsible advertising practices.

Exploring regulatory approaches is also important in handling disinformation in advertising. Mention is made of a platform regulation bill that tackles fake news in Brazil. These regulatory approaches aim to create a more accountable and transparent environment in the advertising industry.

To summarize, the commercial marketing industry significantly influences the information landscape. Understanding programmatic ads, integrating risk analysis, and developing best practices and guidelines are essential in addressing disinformation and ensuring responsible advertising practices. It is important to address the influence of digital influencers and explore regulatory approaches to mitigate potential negative effects on the information environment.

Audience

Political advertising plays a significant role in modern political systems, but it is a complex and problematic issue. This form of advertising has the potential to be weaponised and has frequently been used for data targeting, as highlighted by the Cambridge Analytica scandal. The misuse of data for political purposes poses a serious challenge to the integrity of elections and democratic processes.

It is argued that the role of political advertising needs better management and interventions to address these challenges. Election observation groups, such as the National Democratic Institute (NDI), engage in monitoring political advertising to ensure transparency and fairness. However, the Cambridge Analytica incident has underscored the need for stronger measures to regulate the use of data in political campaigns.

The involvement of digital influencers in political advertising further complicates the situation. There is a difficulty in distinguishing their actions as independent content creators from their role as political marketers. This blurring of lines makes it challenging to discern the extent of influence they have over public opinion and the potential impact on political campaigns.

To mitigate the risks associated with political advertising, it is argued that regulation should be developed to observe how advertisements contribute to disinformation in political campaigns. The dissemination of false or misleading information poses a serious threat to the integrity of elections and public trust. The difficulty lies in distinguishing between political content and other types of content circulating on the internet, which requires careful monitoring and regulation.

In Brazil, there is a self-regulating council for government advertisements. This council, overseen by the Brazilian Internet Steering Committee and advised by technical expert Juliana, aims to ensure that government advertisements adhere to ethical and legal standards. While the self-regulatory framework is in place, it is important to consider how measures to mitigate risks can interact with this framework and state regulations. The potential for regulatory capture within self-regulating councils and other complexities must be acknowledged and carefully addressed.

In conclusion, the role of political advertising in modern political systems necessitates better management and intervention. The weaponisation of political advertising, data targeting, challenges related to digital influencers, and the dissemination of disinformation all underscore the need for regulation and monitoring. As seen in Brazil, self-regulatory councils can play a role in ensuring ethical advertising practices, but it is crucial to consider the interactions between mitigation measures, self-regulatory frameworks, and state regulations. By addressing these concerns, steps can be taken towards fostering fair and transparent political campaigns and preserving the integrity of democratic processes.

Eliana Quiroz

An analysis of the role of marketing companies in the disinformation ecosystem reveals various perspectives. One viewpoint asserts that marketing companies are integral to the spread of disinformation. They excel in providing marketing strategies and facilitating effective micro-targeting, enabling the dissemination of misleading information. This complex ecosystem is formed by the involvement of multiple private companies in digital marketing and disinformation.

By contrast, another perspective argues that the boundaries between the companies offering marketing services are blurred. This lack of clarity makes it challenging to define individual responsibilities within the disinformation ecosystem. For instance, Meta, a digital platform, provides marketing advice and services to influential clients, while newspaper companies in Peru act as intermediaries. This emphasises the need for a comprehensive understanding of the different actors involved in order to combat disinformation effectively.

The analysis also notes the impact of the Cambridge Analytica model on digital marketing and disinformation companies. This model, involving detailed data analysis and targeting strategies, serves as a reference for manipulating public opinion. However, its full implementation requires sufficient resources and interest. In cases of limited time or money, certain elements of the model may be utilised.

Having an understanding of country-specific marketing services is essential in addressing disinformation effectively. The analysis highlights the wide range of marketing services available in the global South, reflecting diverse resources. Additionally, journalists and influencers can play significant roles in the disinformation ecosystem. Therefore, a tailored approach is necessary to combat disinformation successfully.

Shifting focus to political advertising, the analysis underscores the importance of identifying the various actors involved to ensure transparency. The entities involved in political advertising include marketing companies, influencers, data providers, data analysts, media production companies, digital communication and public relations firms, and fact-checking and public opinion companies. A thorough understanding of this ecosystem is crucial for promoting transparency in political campaigns.

Regulation is suggested as a solution for promoting transparency and protecting human rights in political advertising. However, striking the right balance with freedom of expression is essential. It is recommended that regulation extend beyond digital platforms to include companies engaged in political advertising.

Lastly, the analysis highlights the significance of inclusivity and raising awareness of human rights frameworks among companies involved in political advertising. Some companies may not fully comprehend their role within the context of human rights. By fostering inclusion and promoting awareness, ethical implications associated with political advertising can be addressed.

In conclusion, a comprehensive understanding of the role of marketing companies in the disinformation ecosystem is crucial. The blurred boundaries between companies and the influence of models like Cambridge Analytica must be acknowledged. Tailored approaches, regulation, and a focus on human rights and inclusion are necessary to effectively combat disinformation and promote transparency in political advertising.

Anna Kompanek

The analysis explores the important role of the private sector, particularly local businesses, in addressing the issue of disinformation. It suggests that the definition of the private sector should be expanded beyond just big tech companies to include local business communities. These communities are both contributors to and victims of disinformation, making it crucial to involve them in tackling this problem.

The analysis highlights the need to sensitize companies about the potential ramifications of their advertising placements. It points out that companies may indirectly support disinformation through their advertising spending, with ads appearing on disreputable websites associated with disinformation. Therefore, companies must go beyond simply reaching audiences and consider the potential negative consequences of their ad placements.

The business community is seen as a key player in improving information spaces and combating disinformation. It is noted that a growing segment of companies is recognizing the dangers posed by disinformation. These companies can support independent journalism through ethical advertising and other means. By investing in healthier information spaces, businesses can contribute to creating a diverse and reliable range of information for the public.

The analysis underscores the need for global support and responsible business practices to foster healthier information spaces. The report by the Center for International Private Enterprise (CIPE) and the Center for International Media Assistance (CIMA) emphasizes ethical advertising as one way to support independent journalism. It suggests that responsible businesses have the power to promote and maintain healthy information spaces through their practices and collaborations.

Independent journalism is emphasized as being vital in combating disinformation. It is recognized for providing a diverse range of information to the public, countering the spread of false or misleading information. This underlines the importance of supporting independent journalism in efforts to tackle disinformation.

Furthermore, the analysis notes that local businesses can play a significant role in investing in healthy information spaces and independent journalism. They can contribute through various strategies, such as ethical advertising, impact investment, blended finance, corporate philanthropy, and corporate social responsibility (CSR) initiatives. These initiatives enable local businesses to have a positive impact on information spaces and support the work of independent journalists.

Collaboration between government, civil society, and the private sector is identified as essential in addressing disinformation effectively. It is noted that the biggest danger lies in governments passing laws without consulting civil society and local private sector representatives. On the other hand, collaboration and dialogue can lead to more informed policies and effective measures against disinformation.

A noteworthy observation is the value of bringing local business organizations together as part of broader coalitions to secure the information space. In the Philippines, for example, the collaboration between the Philippine Association of National Advertisers and the Makati Business Club was instrumental in discussing and addressing issues related to information security. By uniting local business organizations, effective measures can be taken to safeguard information spaces and combat disinformation.

In conclusion, the analysis underscores the crucial role of the private sector, particularly local businesses, in addressing disinformation. It promotes the inclusion of local businesses in efforts to combat disinformation and emphasizes the need for responsible advertising practices and support for independent journalism. Collaboration between government, civil society, and the private sector is crucial, and local business organizations can contribute to securing information spaces through broader coalitions. By working together, these stakeholders can foster healthier information environments and mitigate the negative impacts of disinformation.

Herman Wasserman

Disinformation has been a longstanding issue in the global south, with its roots tracing back to colonial periods. During this time, various forms of communication and propaganda were used to justify the subjugation of the colonised. In the post-colonial era, states in the global south have continued to control the media and engage in disinformation campaigns, aimed at limiting critical voices and maintaining their power.

Scholarly production on disinformation peaked in 2016, following the elections in the United States, which brought increased attention to the issue. The advancement of new technologies has further amplified existing trends and forms of disinformation, posing a significant challenge to the global south.

The global south faces a dual threat to its information landscape, both externally and internally. Foreign influence operations draw on historical loyalties and presences in the region, while repressive states exploit the fight against “fake news” to enact laws that effectively criminalise dissent and restrict freedom of expression.

Another factor contributing to the proliferation of misinformation in the global south is misleading advertising and sensationalist journalism. These practices can promote false information and pose a challenge to the sustainability of small, independent media outlets which often rely on advertising for financial support. Economic downturns, in particular, can lead to cutbacks on advertising, further threatening the viability of local news outlets.

Despite these challenges, citizens in the global south are actively combating disinformation through various strategies. These strategies are often intertwined with other struggles, such as those for internet access, digital rights, media freedom and education. It is crucial to acknowledge the agency of individuals in the global south in the fight against disinformation.

In terms of political advertising regulations, South Africa currently faces a disconnect between the outdated regulations and the current social media climate. Regulations primarily focus on traditional broadcast channels and newspapers, failing to address the unconventional methods employed by political parties in the digital realm. As a result, there is a need to update and adapt regulations to match the evolving landscape of political advertising.

While formal regulation is an important aspect of controlling political advertisements, it is insufficient on its own. Public awareness and understanding of political communication play a pivotal role, along with fact-checking as a crucial part of political discourse. A coalition of journalists and civil society organisations is necessary to scrutinise political parties’ claims and ensure accuracy and transparency.

In conclusion, the issue of disinformation in the global south is multifaceted and complex. It stems from historical contexts and continues to be perpetuated by external influences and domestic repression. Misleading advertising and sensationalist journalism add further challenges to the region’s media landscape. However, the agency of citizens, along with updated regulations and collaborative efforts, can mitigate the effects of disinformation and uphold peace, justice and strong institutions in the global south.

Renata Mielli

The analysis provided reveals the detrimental consequences of false and misleading information being spread through the Internet and digital platforms. It argues that the Internet has allowed the dissemination of unreliable news and misleading content on a large scale, negatively affecting society. This widespread dissemination of false information has drawn attention to its harmful effects on society, as it undermines the credibility and reliability of information sources and can potentially manipulate public opinion.

The findings also highlight the role of digital platforms in amplifying and promoting misleading, false, and harmful content. It is noted that content with demonstrably false information circulates more widely than verified content, feeding the business models of digital platforms. This is further exacerbated by the use of personal and sensitive data by digital platforms, enabling targeted advertising and content distribution across various platforms. The promotion of such content through sponsored and boosted content has a greater impact on reaching internet users.

In response to these issues, the analysis suggests the need for regulatory initiatives and stricter rules in online advertising. It argues that these regulations should consider specific aspects of information flow, the advertising market and its actors, and how the business models of large platforms favor misinformation. The analysis emphasizes the importance of establishing strict transparency requirements for advertising, as well as the corporate responsibility of intermediaries and other links in the advertising chain for the integrity of public debate.

Moreover, the analysis supports the call for more transparency and stricter rules in online advertising. It advocates for disclosure of the reach and the audience profiles targeted by advertisements or boosted content, which would contribute to accountability and limit the dissemination of false information, and it stresses the need for clear guidelines governing transparency in advertising.

Additionally, the analysis highlights the need for locally designed policies to regulate online platforms. It points to the Brazilian Internet Steering Committee’s consultation process on platform regulations, which addressed issues about concentrations in the online advertising market and the risks of the platform business model, such as disinformation and infodemics. This emphasizes the importance of tailored regulations that consider the specific challenges and dynamics of each region.

The analysis also discusses the challenges of conceptualizing political advertisement and the negative impact of advertisements on health. It acknowledges the difficulty in determining whether political party content should be classified as advertisement or not. Furthermore, it raises concerns about the effect of advertisements on health, particularly during the pandemic, emphasizing that misleading advertisements about medicines can negatively affect people’s lives.

Notably, some arguments within the analysis reject the idea of self-regulation in the advertisement sector. They highlight the impact of advertisements on health and emphasize the need for a more serious public discourse on advertisement. They advocate for increased scrutiny and public engagement to address the negative consequences associated with advertising.

In conclusion, the analysis provides insightful observations on the harmful effects of false and misleading information disseminated through the Internet and digital platforms. It emphasizes the need for regulatory initiatives, transparency measures, and stricter rules in online advertising to protect society from the adverse consequences of misinformation. The analysis also highlights the importance of tailored, locally designed regulations and discusses the challenges surrounding political advertisement and the impact of advertisements on health.

Future-proofing global tech governance: a bottom-up approach | IGF 2023 Open Forum #44

Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.

Session report

Chris Jones

Geopolitical discussions should focus on areas of agreement rather than disagreement to foster cooperation and prevent conflicts. This approach aligns with SDG 16: Peace, Justice and Strong Institutions. Breaking down large tasks into smaller, manageable ones, advocated by engineer Chris Jones, promotes effective problem-solving and resource allocation, in line with SDG 9: Industry, Innovation and Infrastructure. A positive stance towards international cooperation and addressing challenges through understanding and managing smaller components is supported, aligning with SDG 17: Partnerships for the Goals.

Large organizations may need to make changes to become more agile and adapt to emerging technologies, a principle aligned with SDG 9. Governance discussions should consider both shared values and technical requirements, as highlighted by SDG 16, and the process of governance is as important as the final product, as demonstrated by the UK’s online harms legislation.

Multi-stakeholder governance, involving diverse expertise and perspectives, is crucial, echoing SDG 17. The airline industry’s success in implementing common standards serves as an example of a bottom-up approach aligned with SDG 9. Taken together, these approaches, emphasizing collaboration, agility, inclusive governance, and bottom-up solutions, contribute to sustainable development, peace, and justice.

Sheetal Kumar

The analysis examines the perspectives surrounding future technologies and their impact on marginalized groups, as well as the governance and development of these technologies.

One argument put forward is that future technology developments may not necessarily bring positive impacts, particularly for marginalized groups. New technologies like quantum-related developments, metaverse platforms, nanotech, and human-machine interfaces can be complex and intimidating, making it difficult for already marginalized individuals to access and benefit from them. This highlights the potential for further exacerbation of inequalities if technology is not developed and implemented in an inclusive manner.

On the other hand, there is a strong emphasis on the importance of inclusive technology development and governance. The argument asserts that the development and governance of technology should be more inclusive, particularly in relation to marginalized groups. This approach recognizes the need for diverse perspectives and experiences to be considered to avoid further marginalisation and ensure equitable access to technological advancements.

Furthermore, the analysis suggests that governments and industry stakeholders should prioritise engaging in multistakeholder discussions related to technology developments. Examples such as the IGF Best Practice Forum on Cybersecurity and the policy network on internet fragmentation are cited as instances of successful multistakeholder dialogue. This underscores the significance of collaboration and cooperation among various stakeholders to ensure that technological advancements are beneficial and meet the needs of all.

In terms of future-proofing, an important observation is that high-tech solutions are not the only route: future-proofing can also involve approaches that do not rely on cutting-edge technology.

Another noteworthy perspective is the advocacy for connecting multilateral spaces through people and not solely through novel technology. The analysis highlights the need to improve and enhance existing spaces where work is being done, making them more diverse, inclusive, and connected. By prioritising diversity and inclusivity in these spaces, stakeholders can foster collaboration, coordination, and cooperation, ultimately leading to more effective and equitable outcomes.

The analysis also praises the United Nations’ Internet Governance Forum (IGF) as an open, inclusive deliberative space that plays a crucial role in discussing and shaping technology governance. It emphasises the significance of preserving and enhancing spaces like the IGF, which offer unique opportunities for stakeholders to come together, exchange ideas, and collaboratively address the challenges associated with technology governance.

Additionally, transparency, engagement, and the preservation of user autonomy are considered fundamental principles that should be upheld in technology governance. The analysis argues that good governance principles, which are already known, should be applied to new technologies. This includes timely and clear information sharing that is accessible to a wide range of individuals, ensuring transparency and meaningful engagement.

Another notable point is the integration of high-level principles, specifically the international human rights framework, in guiding the use of technologies. The analysis highlights that technologies like AI and data impact various aspects of life and suggests that the international human rights framework can be embedded throughout the technology supply chain through standards. This approach promotes a rights-respecting world where everyone benefits and ensures that the development and usage of technology uphold human rights.

In conclusion, the analysis presents various perspectives on the impact and governance of future technologies. It highlights the importance of inclusive technology development, multistakeholder engagement, connecting multilateral spaces through people, and embedding high-level principles such as the international human rights framework. By considering these perspectives and incorporating them into technology governance, it is possible to strive towards a more equitable and beneficial technological future.

Gallia Daor

Intergovernmental organisations, such as the Organisation for Economic Co-operation and Development (OECD), have demonstrated their ability to be agile while maintaining a thorough and evidence-based approach. The OECD’s AI principles were adopted in an impressive one-year time frame, making it the fastest process ever at the organisation. This highlights the organisation’s ability to adapt to the rapidly evolving landscape of emerging technologies.

To facilitate global dialogue on emerging technologies, the OECD established the Global Forum on Technology. This platform provides an avenue for stakeholders from different countries and sectors to come together and discuss the challenges and opportunities presented by these new technologies. This engagement ensures that decisions made by intergovernmental organisations are well-informed and incorporate perspectives from various stakeholders.

The importance of multi-stakeholder and interdisciplinary engagement in decision-making within intergovernmental organisations is evident through the OECD’s network of AI experts. With more than 400 experts from different stakeholder communities, the OECD is able to tap into a wide range of expertise and perspectives. This inclusivity ensures that the decisions made by the organisation are comprehensive and representative of diverse viewpoints.

Recognising the need to keep pace with emerging technologies, intergovernmental organisations like the OECD have established dedicated working groups that focus on different sectors. These working groups, such as those on compute, climate, and AI future, allow for a deeper understanding of the specific challenges and opportunities posed by each sector. By focusing on these emerging technology sectors, intergovernmental organisations can proactively address the unique issues that arise within each area.

High-level principles, such as trustworthiness, responsibility, accountability, inclusiveness, and alignment with human rights, are considered important and relevant for all technologies. Intergovernmental organisations aspire to develop technologies that are trustworthy, responsible, and inclusive, while also being aligned with human rights. It is essential to factor in potential risks to human rights and ensure accountability in the development processes of these technologies.

However, there is often a gap between these high-level principles and their actual implementation in specific technologies. Variations exist between technologies, and the importance of certain issues like data bias may be specific to AI. This calls for a careful examination and consideration of these factors during the governance processes of emerging technologies.

To address the complexity and differing requirements of different technologies, there may be a need to break up the governance processes into smaller components. By doing so, intergovernmental organisations can accommodate the varying expertise and process requirements associated with different technologies. This approach ensures that governance structures are tailored to the specific needs of each technology, promoting more effective decision-making and implementation.

In conclusion, intergovernmental organisations have shown their ability to be agile, adaptable, and evidence-based in the face of emerging technologies. The OECD’s fast adoption of AI principles and the establishment of the Global Forum on Technology exemplify their commitment to staying at the forefront of technological advancements. The inclusive and interdisciplinary approach to decision-making, along with the focus on specific technology sectors, further enhances the effectiveness of intergovernmental organisations in addressing the challenges and harnessing the opportunities presented by emerging technologies.

Carolina Aguirre

The analysis considered various perspectives on technological development and governance. The speakers emphasised the need to maintain openness in both processes, drawing parallels with the Internet Governance Forum (IGF), which has nearly 20 years of experience in dealing with open technology. They highlighted that the IGF’s bottom-up approach plays a vital role in achieving openness.

The growing influence of the private sector in shaping technological developments was recognised as an important aspect. The speakers noted that many new technological advancements are being driven and progressed by private companies. This recognition indicates the need to understand the limits and the actors shaping technology ecosystems.

There was concern that new technologies are being developed behind closed doors, deviating from the open nature of the Internet’s original development. The speakers argued that such closed development is less open by nature. This observation raises questions about transparency and inclusivity in the creation of new technologies.

The speakers universally agreed that technology is not neutral and is influenced by societal values. This recognition signals the importance of considering the ethical and social implications of technological advancements. The broader impact on society must be a critical consideration in technological development and decision-making.

The adequacy of existing institutions in the face of challenges posed by globalisation and technological development was called into question. One speaker, Carolina Aguirre, expressed scepticism about the sufficiency of the institutions currently in place. The analysis revealed a need for institutions to adapt and keep up with the rapid changes brought about by technological progress.

Furthermore, the analysis highlighted the decline of globalisation in terms of trade and international dialogue. This observation suggests that traditional processes concerning internationalisation are struggling to keep pace with technological advancements.

In conclusion, the analysis presented a multi-faceted view on technological development and governance. The speakers stressed the importance of openness, raised concerns about closed development, highlighted the influence of the private sector, and acknowledged the influence of societal values on technology. Additionally, the analysis pointed out the challenges faced by existing institutions and the decline of globalisation. These insights shed light on the need for continuous evaluation and adaptation in the realms of technology and governance.

Thomas Schneider

The analysis highlights several key points regarding disruptive technologies, global digital governance, and the regulation of artificial intelligence (AI). Firstly, it emphasizes the need for a change in approach towards disruptive technologies. As technologies continue to develop rapidly, with increasing complexity, it is important to adopt a more distant perspective to effectively regulate them. The analysis suggests that machines and algorithms can play a crucial role in developing regulations for disruptive technologies, taking into account their unique characteristics and potential impact.

In terms of governance, the analysis asserts that collaboration is a better approach than conflict. It argues that leaders have been losing sight of the notion of cooperation, which is crucial for achieving sustainable and effective global digital governance. Collaboration is believed to promote a better working environment and foster long-term solutions to complex challenges.

Moreover, the analysis delves into the regulation of AI. It argues that human beings are relatively stable over time, which necessitates the adaptation of regulations surrounding AI. The historical reactions to new technologies, including fear of job loss and ignorance of technology’s potential, are cited to highlight the need for a balanced and adaptable regulatory framework.

The analysis also highlights the importance of building a network of norms in response to advancements in AI. It emphasizes the need for different levels of harmonization depending on the context and argues that institutional arrangements should adapt to technological innovations to effectively govern AI.

Additionally, the analysis makes an interesting observation about the notion of a multi-stakeholder approach. It suggests that this concept is here to stay and proposes that with technology dematerializing, rule-making should also dematerialize. This means that decisions should be made based on stakeholder involvement rather than geographical boundaries, indicating a shift towards a more inclusive and participatory governance model.

In conclusion, the analysis brings attention to the need for a change in approach towards disruptive technologies, the importance of collaboration over conflict in global digital governance, the need to adapt regulation of AI in response to human stability, the necessity of building a network of norms to govern AI advances, and the significance of the multi-stakeholder approach in dematerializing rule-making. These insights provide valuable considerations for policymakers and organizations looking to navigate the complex landscape of disruptive technologies and governance in the digital age.

Alžběta Krausová

The convergence of technologies has become a cause for concern as it raises ethical and privacy issues. The development of human brain interfaces is particularly problematic as it intrudes on the privacy of our minds. This invasion into individuals’ innermost thoughts and feelings is seen as a major problem, raising questions about personal autonomy and the protection of mental privacy.

Additionally, there is a growing recognition of the importance of defining our future world. As technology continues to advance rapidly, it is crucial to establish clear guidelines and regulations to ensure its safe and ethical use. This includes operationalizing our current ethical principles in new and unfamiliar situations that arise with technological advancements. By applying our existing ethical frameworks to emerging technologies, we can address the ethical challenges they present and ensure they align with our values and principles.

Furthermore, it is argued that considering case-by-case scenarios is necessary when making decisions about the use of artificial intelligence (AI) and other advanced technologies. While general principles and guidelines guide our ethical considerations, it is important to take into account the specific context and circumstances surrounding each situation. This approach enables us to address the unique ethical dilemmas that may arise and make more nuanced and informed decisions.

Moreover, valuing cultural understanding and emotional connections is emphasized as a means to reduce inequalities and foster positive interpersonal relations. Recognizing the diversity of cultures and perspectives in our global society can help bridge gaps and promote empathy and understanding among individuals from different backgrounds. Striving for understanding beyond a rational level, including emotional understanding, is seen as crucial for building inclusive and harmonious societies.

In conclusion, the convergence of technologies presents complex ethical challenges that necessitate attention. Defining our future world, operationalizing our principles, considering case-by-case scenarios, and valuing cultural understanding and emotional connections are key aspects that stakeholders should address. By doing so, they can navigate the ethical landscape in a way that promotes fairness, inclusivity, and respect for individual privacy.

Cedric Sabbah

Cedric Sabbah, an expert in international governance, identifies the challenges posed by the rapid development of technology and its frequent disruption for global governance. He observes that periodically, a new technology becomes a major concern for the international community. These concerns have evolved from critical infrastructure to IoT, ransomware, and internet governance. Emerging issues, such as jurisdictions, content moderation, and encryption, have also come to the forefront.

Sabbah highlights the ever-changing nature of the global tech industry, emphasizing that international organizations cannot afford to be complacent. He suggests that an agile and bottom-up approach could assist in addressing the governance challenges posed by technology. Sabbah believes that as technology constantly evolves, policies need to be regularly revisited and updated. Incorporating domestic bottom-up principles into international governance may bring value in tackling these challenges.

Furthermore, Sabbah emphasizes the importance of future-proof and flexible global tech governance. He proposes an approach that can adapt to the changing technological landscape while maintaining long-lasting effectiveness. Sabbah also recognizes the potential of multi-stakeholder processes and bottom-up approaches in enhancing the quality of global governance mechanisms. He advocates for involving non-traditional stakeholders in discussions and encourages the development of rules by specialized networks.

However, the existence of numerous international bodies and initiatives addressing similar topics raises concerns about fragmentation. This fragmentation spans bodies within the UN system, such as the ITU, UNESCO, the Human Rights Council, and WIPO, as well as external entities like the OECD, the COE, and the EU. It prompts the question of whether fragmentation is advantageous, allowing for diverse efforts, or a disadvantage that diminishes focus and resources.

In conclusion, there is a need to reassess existing concepts and explore new approaches to effectively govern emerging technologies. Sabbah’s insights underscore the significance of an agile and bottom-up approach, as well as the potential value of multi-stakeholder processes in addressing technology governance challenges. The concern regarding possible fragmentation within international organizations calls for thorough examination and coordination of processes to ensure effective resource allocation. Overall, global governance mechanisms must adapt and evolve in response to the rapidly changing technology landscape.

Exploring Blockchain’s Potential for Responsible Digital ID | IGF 2023

Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.

Session report

Judith

Vicky expresses gratitude and greets the audience, creating a positive and welcoming atmosphere. The speaker’s tone and appreciation set the stage for an engaging interaction.

Joey

The project had several positive outcomes for Ugandan students. Firstly, it provided exposure to technology and hands-on experience. Students had the opportunity to interact with students from Japan, which not only helped them develop their cross-cultural skills but also sparked an interest in technology. This exposure to different cultures and technology is important for their educational development and future career prospects.

Furthermore, the project had a significant impact on language and social learning. Students were able to engage in interactive language practices and received artistic feedback on their language skills. They also had the chance to express themselves in both Swahili and English. This not only improved their language proficiency but also facilitated their social and emotional learning.

However, the project faced challenges in integrating technology due to limited resources and budget constraints. The local setup, Gudu Samaritan, struggled to invest in technology because of these constraints. This highlights the need for adequate funding and resources to ensure the successful integration of technology in education.

Another obstacle was the unstable internet connection, which hindered online participation. This limited students’ ability to fully engage in online activities and access educational resources. Stable and reliable internet connection is crucial for effective technology integration in schools.

Regarding curriculum integration, there is a need to engage with the Ministry of Education. Engaging with the Ministry would ensure better resource allocation and adjustment of teaching methods to effectively integrate the project into the curriculum. This collaboration is necessary for the long-term sustainability and impact of the project.

Funding was deemed crucial for projects that integrate technology into schools. The government should provide infrastructure, such as a stable internet connection, for successful implementation. Additionally, schools like Gudu Samaritan require resources like an intelligence system, robots, and computer equipment to fully leverage the benefits of technology in education.

Another important aspect is promoting literacy in online platforms. All students and teachers should be literate in the use of online platforms, which would ensure equal access to information and opportunities. Educators should be given the opportunity to participate in online workshops and training to gain confidence in incorporating technology into their everyday teaching.

In conclusion, the project had various positive impacts on Ugandan students, including exposure to technology, cross-cultural interaction, and development of language and social skills. However, challenges such as limited resources, budget constraints, unstable internet connection, and the need for curriculum integration must be addressed for the successful integration of technology in education. Adequate funding, collaboration with the Ministry of Education, and promoting literacy in online platforms are essential for the continuation and growth of such projects.

Ruyuma Yasutake

The HARU project has had a positive impact on English conversation classes, enhancing the overall learning experience. HARU, an advanced AI-based interactive robot, helps to create smoother and more engaging conversations by responding to moments of silence and using interesting facial expressions. This not only makes the conversations more enjoyable but also creates a dynamic learning environment.

The use of HARU has also facilitated cross-cultural interaction by connecting students from different countries, providing a unique opportunity for meaningful conversations and a better understanding of different cultures. While there have been some challenges, such as system troubles and interruptions in interactions, the overall experience has been positive. HARU also offers students the opportunity to interact and work with professional international researchers, which enhances their learning.

Furthermore, HARU has the potential to connect students from different countries and promote global collaboration in education, and it can be used as a partner for practicing conversations, allowing students to improve their conversation skills in a supportive environment. The use of AI-based evaluation systems in education also holds promise for fairer assessments by reducing biases. In conclusion, HARU has numerous benefits and, with further advancements and improvements, has the potential to revolutionize education and communication.

Randy Gomez

The Honda Research Institute, led by Randy Gomez and his team, responded positively to UNICEF’s call to implement and test policy guidance. They dedicated a significant portion of their resources to developing technology for children, with a focus on creating a system that enables cross-cultural interactions among groups of children from different countries. This system involves a robot facilitator that connects to the cloud, allowing children to interact regardless of their geographical locations.

The team conducted experiments using interactive games facilitated by the robot to evaluate the effectiveness of their technology in promoting cross-cultural communication. The results were overwhelmingly positive, demonstrating the efficacy of the technology in enabling these interactions.

In addition to developing the technology, the team recognized the importance of understanding its societal, cultural, and economic impact on children from diverse backgrounds. They deployed the robots in hospitals, schools, and homes to gather insights into implementing the technology in different settings. They collaborated with Vicky from JRC and applied their application alongside IEEE standards to ensure industry compliance.

Overall, the Honda Research Institute’s work contributes to the United Nations’ Sustainable Development Goals, specifically in reducing inequalities, ensuring quality education, and promoting industry, innovation, and infrastructure. The technology they developed for cross-cultural interactions among children fosters understanding and connectivity. It has the potential to create a more inclusive and globally connected society, while also shedding light on the societal, cultural, and economic effects of robotic technology on children’s development.

Steven Boslow

Artificial Intelligence (AI) technology is increasingly present in the lives of children, being used in areas such as gaming, education, and social apps. These AI systems have the power to influence significant decisions, including those related to health benefits, loan approvals, and welfare subsidies. However, it is concerning that most national AI strategies in 2019 did not adequately consider children as stakeholders. This lack of recognition of children’s rights in AI policies highlights the need for improvements.

Moreover, the existing ethical guidelines for AI do not sufficiently address the unique needs of children. These guidelines are not specifically tailored to tackle the challenges and risks that children may face with AI technologies. This oversight is worrisome, considering the substantial impact that AI can have on children’s lives.

On a positive note, UNICEF, in collaboration with the Finnish Government, took an initiative in 2019 to address this issue by introducing policy guidance on AI and children’s rights. This guidance aims to provide a framework for responsible and ethical use of AI concerning children. Several organizations have since implemented these guidelines and shared their experiences and lessons learned. The implementation of UNICEF’s guidelines is a crucial process in safeguarding the rights and well-being of children in the context of AI.

Recognizing the fact that children make up approximately one-third of all online users and an even higher proportion in developing countries, it becomes evident why prioritizing children’s rights is essential. While AI presents great opportunities, it also poses significant risks for children. Therefore, it is important to establish robust regulations that effectively protect their rights while enabling the positive utilization of AI technology.

In conclusion, the increasing presence of AI in children’s lives emphasizes the need for them to be recognized as key stakeholders in national AI strategies and ethical guidelines. UNICEF’s efforts to develop and implement guidelines specifically addressing AI and children’s rights are commendable. They highlight the importance of prioritizing children’s needs and ensuring their protection in the development of AI regulations. To ensure a safe and beneficial AI environment for children, continuous improvement of policies, guidelines, and regulations that cater to their unique requirements is essential.

Moderator

According to the analysis, children were not adequately recognized in national AI strategies or ethical guidelines for responsible AI. This lack of recognition raises concerns about the potential negative implications AI could have on children.

One of the key findings is that AI is increasingly being used in education and gaming, indicating it has become an integral part of children’s lives. Given the significant number of children who are active online users, particularly in developing countries, the impact of AI on their lives cannot be ignored.

Furthermore, the analysis highlights that adopting responsible AI or technology can be challenging. Applying principles for responsible AI can cause tensions to arise, and the context in which these principles are applied is crucial. Developing effective regulations and policies concerning AI requires careful consideration of the specific needs and vulnerabilities of children.

The analysis also emphasizes the importance of prioritizing the role of AI in children’s lives when it comes to regulation and policy-making. It highlights the potential risks AI poses, such as providing poor mental health advice or infringing on children’s privacy. These risks underline the urgent need to establish robust guidelines and safeguards to protect children’s well-being and rights in the context of AI.

Additionally, the Honda Research Institute’s development of robotic technologies for children in response to UNICEF’s call for policy guidance implementation and testing is noteworthy. This initiative demonstrates the commitment to address the specific needs and challenges faced by children in an increasingly AI-driven world.

Collaboration between urban students from Tokyo and rural students from Uganda was a significant aspect of the analysis. This collaboration aimed to enhance intercultural understanding and explore the variations in children’s rights comprehension across different situations. This emphasizes the importance of context in comprehending and addressing children’s rights issues.

Moreover, the role of technology in education was found to have a positive impact on students’ understanding and interest. The projects analyzed contributed to the development of social and emotional skills, further reinforcing the potential benefits of integrating technology in educational settings.

However, the analysis also identified several challenges. Limited resources and budget constraints were major obstacles, particularly in the context of a local setup called Gudu Samaritan in Uganda. These constraints made it difficult to invest in technology and maintain stable internet connections, hindering the implementation of projects.

To overcome these challenges, the analysis suggests engaging the Minister of Education in Uganda to integrate the project into the curriculum and secure additional resources. This approach would not only address budget constraints but also provide the necessary time and support to adapt teaching methods effectively.

In conclusion, the analysis highlights the need for greater recognition of children in AI strategies and ethical guidelines. It underscores the importance of considering the specific needs and vulnerabilities of children when developing regulations and policies related to AI. The potential risks associated with AI, such as issues related to mental health and privacy, call for the implementation of comprehensive safeguards. The analysis also sheds light on the positive impact of technology in education, particularly in enhancing students’ understanding, interest, and social and emotional skills. However, challenges such as limited resources and budget constraints must be addressed through collaborative efforts involving government bodies and educational institutions. Overall, a comprehensive and child-centric approach to AI and technology adoption is essential to ensure the well-being and rights of children in the digital age.

Framework to Develop Gender-responsive Cybersecurity Policy | IGF 2023 WS #477

Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.

Knowledge Graph of Debate

Session report

Audience

The analysis uncovers several significant points concerning gender equality and cybersecurity policies. One notable issue is the exclusion of women, girls, and individuals of other genders from discussions with the private sector and tech companies. This exclusion leads to a lack of diversity and representation in decision-making processes, potentially resulting in policies that do not adequately address the needs and concerns of all individuals.

Another concerning finding is the resistance to including gender language in the final text of policies. This pushback may arise from factors such as a resistance to change, a lack of understanding of the importance of gender-inclusive language, or intentional efforts to maintain the status quo. This resistance highlights the need for greater awareness and commitment to gender equality in policy-making processes.

On a positive note, the analysis recognizes the essential role of including a gender perspective and intersectionality in cybersecurity policies. By considering the experiences and challenges faced by different genders and intersecting identities, policies can be more comprehensive and effective in addressing cyber threats. This recognition emphasizes the importance of adopting an intersectional approach when developing cybersecurity strategies.

Furthermore, civil society and the United Nations are identified as key actors in ensuring gender-inclusive policies. Their involvement in advocating for and monitoring the implementation of gender equality measures can contribute to creating an environment that values and promotes the rights and representation of all genders.

Another noteworthy insight is the recognition that gender equality is a task that requires collective support, not just from women. It is important for everyone, regardless of gender, to actively contribute to achieving gender equality and dismantling gender-based discrimination and inequality.

Education is highlighted as a crucial tool for combating setbacks in gender equality. By promoting education that emphasizes gender equality principles and human rights, societies can foster greater understanding, empathy, and equal opportunities for all individuals.

However, limitations arise during negotiations, as member states often draw red lines that restrict progress on gender language. This observation suggests that political considerations and differing priorities among states can serve as obstacles to advancing gender equality within policy frameworks.

Additionally, the analysis emphasizes the need for a gender framework for digital transformation and cybersecurity. This framework should account for the specific challenges and vulnerabilities faced by different genders in the digital realm, ensuring that cybersecurity policies and practices are inclusive and responsive to diverse needs.

In conclusion, the analysis brings attention to several key aspects of gender equality and cybersecurity policies. It highlights the need for increased diversity and inclusive decision-making processes, the importance of gender-sensitive language, the role of education in promoting gender equality, and the significance of international cooperation and civil society engagement. These insights can inform policymakers, stakeholders, and advocates working towards gender-inclusive cybersecurity policies and contribute to building a more equitable and secure digital future.

Speaker 1

The analysis underscores the critical need for cybersecurity awareness among citizens and businesses. Policymakers should actively support collaboration between different sectors to effectively address this issue. By fostering cooperation and sharing knowledge, policymakers can enhance cybersecurity practices and protect individuals and organizations from cyber threats.

Furthermore, it is crucial for policymakers to take the lead in creating awareness about cybersecurity among citizens and businesses. They can educate the public about potential risks and promote best practices for safeguarding personal and sensitive data. This proactive approach can contribute to an overall improvement in cybersecurity measures and reduce the likelihood of successful cyber attacks.

The analysis also highlights the importance of respecting human rights within the domain of cybersecurity. Policymakers should integrate human rights as a fundamental principle when formulating cybersecurity policies. It is vital to remember that real people are affected by cyber threats, and their rights and privacy should be protected. By considering human rights, policymakers can strike a balance between ensuring cybersecurity and upholding individual freedoms.

Additionally, the analysis underscores the importance of balancing innovation with securing the digital infrastructure. Many young people are involved in both positive and negative innovations in the cyber domain. Policymakers need to find a middle ground that encourages and supports innovation while ensuring the security of digital infrastructure. This balance is essential for fostering technological advancements while safeguarding against potential vulnerabilities and cyber threats.

The analysis also emphasizes the significance of including vulnerable populations in policy considerations. Often, vulnerable populations are overlooked or ignored when it comes to cybersecurity policies, resulting in their problems being disregarded. By actively including these populations in policy discussions and decision-making processes, policymakers can address their unique needs and challenges. This inclusive approach helps ensure that the concerns and vulnerabilities of all individuals are taken into account in cybersecurity strategies and initiatives.

In conclusion, the analysis highlights the importance of cybersecurity awareness, collaboration, and human rights considerations in policymaking. Policymakers play a vital role in creating awareness, fostering cooperation, and protecting human rights in the realm of cybersecurity. Moreover, finding a balance between innovation and security, as well as actively including vulnerable populations, are instrumental in developing comprehensive and effective cybersecurity policies. By considering these factors, policymakers can enhance cybersecurity practices, promote a safer online environment, and work towards achieving the relevant Sustainable Development Goals.

Veronica Ferrari

Various speakers have emphasized the importance of including a gender perspective in cybersecurity discussions. Cybersecurity is not solely a technical issue; it involves power relations and encompasses the differentiated risks and needs experienced by individuals. The recognition that cyber incidents disproportionately harm specific social groups based on factors such as gender, sexual orientation, race, and religion is growing. There is also evidence that legal cyber frameworks are being exploited to persecute women and LGBTQ individuals.

To promote a gender-inclusive approach to cybersecurity, there have been calls to integrate a gender perspective at national, regional, and international levels. The Association for Progressive Communications (APC) has developed a specific tool/framework to achieve this goal.

Concerns were specifically raised about cyber laws in the Asia-Pacific region, where shrinking civic space and challenges to civil society inputs were highlighted. It was noted that cyber-related laws can be used for censorship and criminalization, with specific issues concerning the Philippines.

Additionally, there was a discussion on the gender perspective of cybercrime legislation and the strategies employed. Jess and her organization have conducted research and advocated for gender perspectives in cyber policy discussions. Veronica Ferrari showed interest in gaining insights into the gender perspective of cybercrime legislation from Jess.

The international dynamics of gender and cybersecurity were also examined. The appearance of gender considerations in multilateral processes on cybersecurity was addressed, with David providing his views on the important factors to consider for a gender perspective at the international level.

Recommendations were also made, within a gender framework, to link a human-centered approach to existing agendas such as sustainable development and digital economy indicators. This highlights the importance of aligning cybersecurity with broader goals and keeping a focus on human well-being.

Veronica Ferrari agreed on the significance of continued advocacy, research, and raising awareness about a human-centered approach while rethinking the concept of security. This emphasizes the need to push for gender inclusion in cybersecurity, generate more evidence, and promote a shift in security perceptions.

In conclusion, integrating a gender perspective into cybersecurity discussions is vital. Recognizing and addressing differentiated risks and needs, the disproportionate impact of cyber incidents on different social groups, and the misuse of legal frameworks are crucial steps towards establishing a more inclusive and equitable approach to cybersecurity.

Kemly Camacho

The analysis delves into various aspects of cybersecurity strategies and the involvement of different stakeholders in promoting gender equality. One key point highlighted is the significance of budget allocation in cybersecurity strategies. For instance, the discussion brings up Costa Rica’s cybersecurity strategy, which primarily focuses on reacting to cyber incidents rather than proactive prevention. This indicates that budget allocation plays a crucial role in defining the government’s vision and priorities, including whether gender is prioritised in the strategy.

Another significant aspect discussed is the role of civil society and training in cybersecurity. The organisation Sulá Batsú is mentioned for convening a network of organisations across different fields to advocate for cybersecurity. It also conducted a comprehensive six-month training programme aimed at educating various sectors about the importance of cybersecurity. This evidence underscores the positive impact civil society and training can have in enhancing cybersecurity measures.

A mixed sentiment is observed regarding the private sector-led push to include more women in cybersecurity. While the intention appears to encourage gender equality, there is concern that this push may be driven by the private sector’s need to address resource gaps, rather than a genuine commitment to gender equality. This highlights the importance of ensuring that motivations for gender inclusion are rooted in equality and not solely economic interests.

The analysis also advocates for greater women’s leadership in the IT and cybersecurity sector. It highlights the stagnant percentage of women in the Latin American IT sector, which has remained unchanged for the past 15 years despite investments and efforts. The unique qualities and analytical leadership that women can bring to the sector are recognised as valuable contributions.

Furthermore, the analysis emphasises the need for safe digital spaces, drawing a parallel with the concept of safe neighbourhoods. It suggests that just as people require a safe physical environment, they also need a safe digital space. While the initial idea of integrating women in the IT sector is viewed positively, it is argued that more needs to be done to ensure genuine inclusivity.

Additionally, the analysis draws attention to the violence faced by women in the IT sector, framing it as a form of violence against women. It highlights that the challenges experienced by women in the sector are often not integrated into conversations around violence against women. The existence of extensive research on the difficult conditions faced by women in IT further supports this assertion.

Overall, the analysis sheds light on various dimensions of cybersecurity strategies, the importance of stakeholder involvement, and the need for gender equality. It provides evidence and insights into the factors that influence cybersecurity strategies, the role of civil society and training, private sector motivations, women’s representation in the sector, the need for safe digital spaces, and the recognition of violence against women in the IT field. These findings offer valuable considerations for policymakers, organisations, and individuals seeking to promote cybersecurity and gender equality.

Speaker 2

The cybercrime law in the Philippines has faced significant criticism due to its potential threat to the rights of women and LGBTQ+ individuals. One of the main concerns stems from the broad parameters and nebulous key terms surrounding the provision about cybersex, which is seen as a potentially serious threat to these marginalized groups. Additionally, the law also criminalises cyber libel, further limiting freedom of expression and raising concerns about possible misuse by authorities.

Another issue with the cybercrime law is the imposition of excessive penalties for crimes involving the use of Information and Communication Technologies (ICTs). These penalties may not be proportionate to the offences committed and can lead to unfair and disproportionate punishments.

However, there has been positive development in recent times. The problematic provision regarding cybersex in the cybercrime law has been repealed. This significant change is the result of years of advocacy by women’s rights groups that tirelessly worked towards addressing the flaws in the legislation. The repeal was enacted through a provision under new legislation addressing online sexual abuse and exploitation of children, demonstrating a shift towards a more comprehensive approach to protecting vulnerable individuals online.

The success of repealing the problematic provision highlights the importance of collaboration and building alliances to effect changes in flawed cybersecurity policies. Women’s rights groups, children’s rights groups, and LGBTQ+ groups came together to advocate for the repeal. Their concerted efforts, along with the support of a champion in the Philippine Senate who is open to dialogue with civil society, have been crucial in achieving this positive outcome.

Overall, while the cybercrime law in the Philippines still has its flaws, the recent repeal of the problematic provision about cybersex is a significant step towards addressing concerns about gender and human rights. It underscores the power of advocacy and collaboration in bringing about meaningful changes in policy. The journey, however, does not end here, and continued efforts are needed to ensure that cybersecurity policies align with international standards and protect the rights of all individuals in the digital realm.

David Fairchild

The analysis of David’s remarks sheds light on several important points concerning gender inclusion in cybersecurity and international policy. David underscored the significance of multilateral processes in advancing this cause. He noted that Canada has consistently supported gender issues as a crucial component of their foreign aid policy, reflecting the country’s commitment to promoting gender equality on the global stage. However, David also expressed concerns about the potential negative consequences of overemphasizing gender. He cautioned against an excessive focus on gender, highlighting the strategic disadvantages that can arise from such an approach.

In addition to advocating for multilateral processes, David highlighted the importance of education and understanding in addressing gender issues within technical fields. Specifically, he referenced the International Telecommunication Union, emphasizing the need to ensure that gender equality and understanding are prioritized in highly technical areas, where human rights may not always receive sufficient attention. David further emphasized that gender equality should not be viewed solely as a women’s issue, but rather as an issue that requires the support and involvement of everyone.

The analysis also revealed David’s observations on the ongoing debates and pushbacks surrounding gender language, even within progressive platforms like the UN. He cited an unnamed state’s call to end the integration of gender-related language in UN documents, demonstrating the challenges faced in promoting gender inclusion. Moreover, David noted that some countries or blocs may use gender language as a bargaining chip during negotiations, further complicating the progress towards gender equality.

In conclusion, David’s remarks emphasized the crucial role of multilateral processes in promoting gender inclusion in cybersecurity and international policy. While commending Canada’s ongoing support for gender issues, he warned against the negative effects of overemphasizing gender. David stressed the need for education and understanding regarding gender issues in technical fields, citing the International Telecommunication Union as an example. Furthermore, he highlighted the ongoing debates and pushbacks surrounding gender language, underscoring the challenges faced in advancing gender equality. The analysis revealed both positive and negative sentiments expressed by David, reflecting the complexity and ongoing nature of these important issues.

Decolonise Digital Rights: For a Globally Inclusive Future | IGF 2023 WS #64

Table of contents

Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.

Knowledge Graph of Debate

Session report

Ananya Singh

The analysis features speakers discussing the exploitation of personal data without consent and drawing parallels to colonialism. They argue that personal data is often used for profit without knowledge or permission, highlighting the need for more transparency and accountability in handling personal data. The speakers believe that the terms of service on online platforms are often unclear and full of jargon, leading to misunderstandings and uninformed consent.

One of the main concerns raised is the concept of data colonialism, which is compared to historical colonial practices. The speakers argue that data colonialism aims to capture and control human life through the appropriation of data for profit. They urge individuals to question data-intensive corporate ideologies that incentivise the collection of personal data. They argue that the collection and analysis of personal data can perpetuate existing inequalities, lead to biases in algorithms, and result in unfair targeting, exclusion, and discrimination.

In response, the speakers suggest that individuals should take steps to minimise the amount of personal data they share online or with technology platforms. They emphasise the importance of thinking twice before agreeing to terms and conditions that may require sharing personal data. They also propose the idea of digital minimalism, which involves limiting one’s social media presence as a way of minimising the data one generates and shares.

The analysis also highlights the need for digital literacy programmes to aid in decolonising the internet. Such programmes can help individuals navigate the internet more effectively and critically, enabling them to understand the implications of sharing personal data and make informed choices.

Overall, the speakers advocate for the concept of ownership by design, which includes minimisation and anonymisation of personal data. They believe that data colonialism provides an opportunity to create systems rooted in ethics. However, they caution against an entitled attitude towards data use, arguing that data use and reuse should be based on permissions rather than entitlements or rights.

Some noteworthy observations from the analysis include the focus on the negative sentiment towards the unregulated collection and use of personal data. The speakers highlight the potential harm caused by data exploitation and advocate for stronger regulation and protection of personal data. They also highlight the need for a more informed and critical approach to online platforms and the terms of service they offer.

In conclusion, the analysis underscores the importance of addressing the exploitation of personal data without consent and the potential harms of data colonialism. It calls for more transparency, accountability, and individual action in minimising data sharing. It also emphasises the need for critical digital literacy programmes and promotes the concept of ownership by design to create ethical systems.

Audience

The discussions revolved around several interconnected issues, including legal diversities, accessibility, privacy, and economic patterns. It was observed that these rights and principles are not always respected globally, owing to economic interests and the perpetuation of stereotypes. This highlights the need for increased awareness and efforts to address these issues on a global scale.

One of the arguments put forth was that privacy should be considered as a global right or human right. This suggests the importance of acknowledging privacy as a fundamental aspect of individual rights, regardless of geographical location or cultural context.

Another point of discussion was the need for a taxonomy that identifies specific local needs and how they relate to cultural, historical, or political characteristics. The argument advocates for better understanding and consideration of these factors to address the unique requirements of different communities and regions. This approach aims to reduce inequalities and promote inclusive development.

The distinction between local and global needs was also highlighted as crucial for effective population planning and reducing migration to the Global North. By focusing on empowering individuals to thrive in their country of origin, the discussion emphasized the importance of creating conditions that allow people to stay and contribute to their local communities.

The importance of reimagining digital literacy and skills training was emphasized as essential for empowering marginalized communities. This involves providing equitable access to digital tools and promoting inclusivity in digital participation. Bridging the digital divide was seen as necessary to ensure that everyone has the necessary tools and skills to fully participate in the digital world.

The discussions also delved into the decolonization of the Internet and the digital landscape. It was recognized that this is an ongoing journey that requires continuous reflections, open dialogue, and actionable steps. The complexities surrounding decolonization were explored in relation to factors such as economic gains and the question of who benefits from the current digital landscape.

Lastly, the need to strive for a digital space that is inclusive and empowers all individuals, regardless of their background or geographical location, was highlighted. This vision of a future in which the internet becomes a force of equality, justice, and liberation motivates efforts towards digital inclusivity and empowerment.

In conclusion, the discussions explored various critical aspects related to legal diversities, accessibility, privacy, and economic patterns. They underscored the importance of addressing these issues globally, recognizing privacy as a universal right, understanding local needs, bridging the digital divide, and advocating for a decolonized digital space. The overall emphasis was on promoting inclusivity, reducing inequalities, and fostering empowerment in the digital age.

Jonas Valente

The analysis highlights several important points from the speakers’ discussions. Firstly, it is noted that the development and deployment of artificial intelligence (AI) heavily rely on human labor, particularly from countries in the global South. Activities such as data collection, curation, annotation, and validation are essential for AI work. This dependence on human labor underscores the important role that workers from the global South play in the advancement of AI technologies.

However, the analysis also reveals that working conditions for AI labor are generally precarious. Workers in this industry often face low pay, excessive overwork, short-term contracts, unfair management practices, and a lack of collective power. The strenuous work schedules in the sector have also been found to contribute to sleep issues and mental health problems among these workers. These challenges highlight the need for improved working conditions and better protections for AI labor.

One positive development in this regard is the Fair Work Project, which aims to address labor conditions in the AI industry. The project evaluates digital labor platforms based on a set of fair work principles. Currently operational in almost 40 countries, the Fair Work Project rates platforms based on their adherence to these principles, including factors such as pay conditions, contract management, and representation. This initiative seeks to improve conditions and drive positive change within the AI labor market.
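To make this kind of principle-based rating concrete, the sketch below shows one way such a platform score could be represented in code. It is a minimal illustration only, not the Fair Work Project’s published methodology: the principle names loosely follow the factors mentioned above (pay, conditions, contracts, management, representation), and the two-point cap per principle is an assumption made for the example.

```python
from dataclasses import dataclass, field

# Illustrative only: principle names and the 0-2 scale per principle are
# assumptions for this sketch, not the Fair Work Project's published method.
PRINCIPLES = ["fair_pay", "fair_conditions", "fair_contracts",
              "fair_management", "fair_representation"]

@dataclass
class PlatformRating:
    name: str
    # Score per principle; each is capped at 2 points in this sketch.
    scores: dict = field(default_factory=dict)

    def total(self) -> int:
        # Sum per-principle scores, treating missing principles as 0.
        return sum(min(self.scores.get(p, 0), 2) for p in PRINCIPLES)

# Example: a hypothetical platform assessed against the five principles.
platform = PlatformRating(
    name="ExamplePlatform",
    scores={"fair_pay": 1, "fair_conditions": 0, "fair_contracts": 2,
            "fair_management": 1, "fair_representation": 0},
)
print(platform.name, platform.total(), "/ 10")
```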

Another concern raised in the analysis is the exploitation of cheap labor within the development of AI. Companies benefit from the use of digital labor platforms that bypass labor rights and protections, such as minimum wage and freedom of association. This trend, which is becoming more common in data services and AI industries, highlights the need for a greater emphasis on upholding labor rights and ensuring fair treatment of workers, particularly in the global South.

Furthermore, the analysis underscores the importance of considering diversity and local context in digital technology production. Incorporating different cultural expressions and understanding the needs of different populations are key factors in creating inclusive and fair digital labor platforms and global platforms. Doing so can help address bias and discrimination while respecting national regulations, contributing to a more equitable digital landscape.

The analysis also acknowledges the concept of decolonizing digital technologies. This process involves not only the use of digital technologies but also examining and transforming the production process itself. By incorporating the labor dimension and ensuring basic fair work standards, the goal is to create a structurally different work arrangement that avoids exploitation and supports the liberation of oppressed populations.

In conclusion, the analysis highlights the challenges and opportunities surrounding AI labor and digital technology production. While the global South plays a crucial role in AI development, working conditions for AI labor are often precarious. The Fair Work Project and initiatives aimed at improving labor conditions are prominent in the discussion, emphasizing the need for fair treatment and better protections for workers. Additionally, considerations of diversity, local context, and the decolonization of digital technologies are crucial in creating a more inclusive and equitable digital landscape.

Tevin Gitongo

During the discussion, the speakers emphasised the importance of decolonising the digital future in order to ensure that technology benefits people and promotes a rights-based democratic digital society. They highlighted the need for creating locally relevant tech solutions and standards that address the specific needs and contexts of different communities. This involves taking into consideration factors such as cultural diversity, linguistic preferences, and social inclusion.

The importance of stakeholder collaboration in the decolonisation of digital rights was also emphasised. The speakers stressed the need to involve a wide range of stakeholders, including government, tech companies, fintech companies, academia, and civil society, to ensure that all perspectives and voices are represented in the decision-making process. By including all stakeholders, the development of digital rights frameworks can be more inclusive and reflective of the diverse needs and concerns of the population.

Cultural context was identified as a crucial factor to consider in digital training programmes. The speakers argued that training programmes must be tailored to the cultural context of the learners to be effective. They highlighted the importance of working with stakeholders who have a deep understanding of the ground realities and cultural nuances to ensure that the training programmes are relevant and impactful.

The speakers also discussed the importance of accessibility and affordability in digital training. They emphasised the need to bridge the digital divide and ensure that training programmes are accessible to all, regardless of their economic background or physical abilities. Inclusion of people with disabilities was specifically noted, with the speakers advocating for the development of digital systems that cater to the needs of this population. They pointed out the assistance being provided in Kenya to develop ICT standards for people with disabilities, highlighting the importance of inclusive design and accessibility in digital training initiatives.

Privacy concerns related to personal data were identified as a universal issue affecting people from both the global north and south. The speakers highlighted the increasing awareness and concern among Kenyans about the protection of their data, similar to concerns raised in European countries. They mentioned the active work of the office of the data commissioner in Kenya in addressing these issues, emphasising the importance of safeguarding individual privacy in the digital age.

The speakers also emphasised the need for AI products and services to be mindful of both global and local contexts. They argued that AI systems should take into account the specific linguistic needs and cultural nuances of the communities in which they are used. The speakers raised concerns about the existing bias in AI systems that are designed with a focus on the global north, neglecting the unique aspects of local languages and cultures. They stressed the importance of addressing this issue to bridge the digital divide and ensure that AI is fair and effective for all.

Digital literacy was highlighted as a tool for decolonising the internet. The speakers provided examples of how digital literacy has empowered individuals, particularly women in Kenya, to use digital tools for their businesses. They highlighted the importance of finding people where they are and building on their existing skills to enable them to participate more fully in the digital world.

One of the noteworthy observations from the discussion was the need to break down complex information, such as terms and conditions, to ensure that individuals fully understand what they are agreeing to. The speakers noted that people often click on “agree” without fully understanding the terms and emphasised the importance of breaking down the information in a way that is easily understandable for everyone.

Overall, the discussion emphasised the need to decolonise the digital future by placing people at the centre of technological advancements and promoting a rights-based democratic digital society. This involves creating inclusive tech solutions, collaborating with stakeholders, considering cultural context in training programmes, ensuring accessibility and affordability, addressing privacy concerns, and bridging the digital divide through digital literacy initiatives. By adopting these approaches, it is hoped that technology can be harnessed for the benefit of all and contribute to more equitable and inclusive societies.

Shalini Joshi

The analysis highlights several important points related to artificial intelligence (AI) and technology. Firstly, it reveals that AI models have inherent biases and promote stereotypes. This can result in inequalities and gender biases in various sectors. Experiments with generative AI have shown biases towards certain countries and cultures. In one instance, high-paying jobs were represented by lighter-skinned, male figures in AI visualisations. This not only perpetuates gender and racial stereotypes but also reinforces existing inequalities in society.
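One simple way to make such bias experiments auditable is to tally annotated attributes of generated outputs per prompt. The sketch below is purely illustrative: the records are invented, and it assumes the generated images have already been labelled (by people or by a separate classifier) before the counts are computed.

```python
from collections import Counter

# Hypothetical annotations of images generated for occupation prompts.
annotations = [
    {"prompt": "CEO", "perceived_gender": "male"},
    {"prompt": "CEO", "perceived_gender": "male"},
    {"prompt": "CEO", "perceived_gender": "female"},
    {"prompt": "nurse", "perceived_gender": "female"},
    {"prompt": "nurse", "perceived_gender": "female"},
]

def distribution(prompt: str) -> Counter:
    # Count perceived-gender labels among outputs generated for one prompt.
    return Counter(a["perceived_gender"] for a in annotations if a["prompt"] == prompt)

for p in ("CEO", "nurse"):
    print(p, dict(distribution(p)))
```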

Secondly, the analysis emphasises the need for transparency in AI systems and companies. Currently, companies are often secretive about the data they use to train AI systems. Lack of transparency can lead to ethical concerns, as it becomes difficult to assess whether the AI system is fair, unbiased, and accountable. Transparency is crucial to ensure that AI systems are developed and used in an ethical and responsible manner. It allows for scrutiny, accountability, and public trust in AI technologies.
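One widely discussed practice for the kind of transparency called for here is publishing structured documentation of a model’s training data, intended use, and known limitations (often referred to as model cards or datasheets). The sketch below is a minimal, hypothetical example of such a record; the field names and values are illustrative, not a standard schema or any company’s actual disclosure.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelCard:
    # Field names are illustrative; real model cards and datasheets vary.
    model_name: str
    intended_use: str
    training_data_sources: list
    languages_covered: list
    known_limitations: list

card = ModelCard(
    model_name="example-translation-model",  # hypothetical model
    intended_use="General-purpose text translation",
    training_data_sources=["publicly described corpus A (assumed placeholder)"],
    languages_covered=["en", "es", "fr"],
    known_limitations=["little or no coverage of lesser-resourced languages"],
)

# Publishing the card, e.g. as JSON, makes the documentation easy to scrutinise.
print(json.dumps(asdict(card), indent=2))
```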

Furthermore, the analysis points out that AI-based translation services often overlook hundreds of lesser-known languages. These services are usually trained with data that uses mainstream languages, which results in a neglect of languages that are not widely spoken. This oversight undermines the preservation of unique cultures, traditions, and identities associated with these lesser-known languages. It highlights the importance of ensuring that AI technologies are inclusive and consider the diverse linguistic needs of different communities.

Additionally, the analysis reveals that women, trans people, and non-binary individuals in South Asia face online disinformation that aims to marginalise them further. This disinformation uses lies and hate speech to silence or intimidate these groups. It targets both public figures and everyday individuals, perpetuating gender and social inequalities. In response to this growing issue, the organisation NIDAN is implementing a collaborative approach to identify, document, and counter instances of gender disinformation. This approach involves a diverse set of stakeholder groups in South Asia and utilises machine learning techniques to efficiently locate and document instances of disinformation.
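The report does not describe NIDAN’s actual tooling, but the general technique of using machine learning to surface candidate posts for human documentation can be sketched as a simple supervised text classifier. The example below, using scikit-learn, is purely illustrative: the training texts are invented placeholders, and any real system would need carefully curated, context- and language-aware data with human review of everything flagged.

```python
# Illustrative sketch: flagging posts for human review with a text classifier.
# Requires scikit-learn. Training examples here are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "placeholder example of a harmful gendered attack",     # label 1
    "placeholder example of a coordinated character attack", # label 1
    "ordinary post about local weather",                     # label 0
    "post sharing a recipe with friends",                    # label 0
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

new_posts = ["another ordinary post", "placeholder gendered attack text"]
scores = model.predict_proba(new_posts)[:, 1]

# Posts above a threshold are queued for human documentation, not auto-removed.
for post, score in zip(new_posts, scores):
    if score > 0.5:
        print(f"flag for review ({score:.2f}): {post}")
```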

The analysis also highlights the importance of involving local and marginalised communities in the development of data sets and technology creation. It emphasises that hyperlocal communities should be involved in creating data sets, as marginalised people understand the context, language, and issues more than technologists and coders. Inclusive processes that include people from different backgrounds in technology creation are necessary to ensure that technology addresses the needs and concerns of all individuals.

In conclusion, the analysis underscores the pressing need to address biases, promote transparency, preserve lesser-known languages, counter online disinformation, and include local and marginalised communities in the development of technology. These steps are crucial for creating a more equitable and inclusive digital world. By acknowledging the limitations and biases in AI systems and technology, we can work towards mitigating these issues and ensuring that technology is a force for positive change.

Pedro de Perdigão Lana

The analysis highlights several concerns about Internet regulation and its potential to drive fragmentation. It argues that governmental regulation framed as a response to digital colonialism can itself pose a significant threat to the Internet, because such regulation is often prompted by distinctions rooted in historical power imbalances and in the imposition of laws by dominant countries.

One example of this is seen in the actions of larger multinational companies, which subtly impose their home country’s laws on a global scale, disregarding national laws. For instance, the Digital Millennium Copyright Act (DMCA) is cited as a means by which American copyright law is extended globally. This kind of imposition by multinational companies can undermine the sovereignty of individual nations and lead to a disregard for their own legal systems.

However, the analysis also recognizes the importance of intellectual property in the discussions surrounding Internet regulations. In Brazil, for instance, a provisional measure was introduced to create barriers for content moderation using copyright mechanisms. This indicates that intellectual property is a crucial topic that needs to be addressed in the context of Internet regulations and underscores the need for balance in protecting and respecting intellectual property rights.

Another important aspect highlighted is platform diversification, which refers to the adaptation of platforms to individual national legislation and cultural contexts. It is suggested that platform diversification, particularly in terms of user experience and language accessibility, may act as a tool to counter regulations that could lead to fragmentation of the Internet. By ensuring that platforms can adapt to different national legislations, tensions can be alleviated, and negative effects can be minimized.

Pedro, one of the individuals mentioned in the analysis, is portrayed as an advocate for the diversification of internet content and platforms. Pedro presents a case in which internet content-based platforms extended US copyright laws globally, enforcing an alien legal system. Thus, diversification is seen as a means to counter this threat of fragmentation and over-regulation.

The analysis also explores the concern of multinational platforms and their attitude towards the legal and cultural specificities of the countries they operate in. While it is acknowledged that these platforms do care about such specifics, the difficulty of measuring the indirect and long-term costs associated with this adaptation is raised.

Furthermore, the discrepancy in the interpretation of human rights across cultures is highlighted. Human rights, including freedom of expression, are not universally understood in the same way, leading to different perspectives on issues related to Internet regulation and governance.

The importance of privacy and its differing interpretations by country are also acknowledged. It is suggested that privacy interpretations should be considered in managing the Internet to strike a balance between ensuring privacy rights and maintaining a safe and secure digital environment.

The analysis concludes by emphasizing the need for active power sharing and decolonization of the digital space. It underscores that preserving the Internet as a global network and a force for good is crucial. The failure of platforms to diversify and respect national legislation and cultural contexts is seen as a factor that may lead to regional favoritism and even the potential fragmentation of the Internet.

In summary, the analysis highlights the concerns about Internet regulation, including the threats posed by governmental regulation and the subtle imposition of home country laws by multinational companies. It emphasizes the importance of intellectual property in the discussions surrounding Internet regulations, as well as the potential benefits of platform diversification. The analysis also highlights the need for active power sharing, the differing interpretations of human rights, and considerations for privacy. Overall, preserving the Internet as a global network and ensuring its diverse and inclusive nature are key priorities.

Moderator

The analysis delves into the various aspects of the impact that AI development has on human labour. It highlights the heavy reliance of AI development on human labour, with thousands of workers involved in activities such as collection, curation, annotation, and validation. However, the analysis points out that human labour in AI development often faces precarious conditions, with insufficient arrangements regarding pay, management, and collectivisation. Workers frequently encounter issues like low pay, excessive overwork, job strain, health problems, short-term contracts, precarity, unfair management, and discrimination based on gender, race, ethnicity, and geography. This paints a negative picture of the working conditions in AI prediction networks, emphasising the need for improvements.

The distribution of work for AI development is another area of concern, as it primarily takes place in the Global South. This not only exacerbates existing inequalities but also reflects the legacies of colonialism. Large companies in the Global North hire and develop AI technologies using a workforce predominantly from the Global South. This unbalanced distribution further contributes to disparities in economic opportunities and development.

The analysis also highlights the influence of digital sovereignty and intellectual property on internet regulation. It notes that governments often regulate the internet in the name of digital sovereignty, reacting to the way the legal systems of larger nations are extended to every corner of the globe. That dynamic is described as digital colonialism, in which multinational companies subtly impose alien legislation that does not adhere to national standards; intellectual property rules such as the DMCA are cited as an example of this behaviour. To counter this, the analysis suggests that diversification of internet content and platforms can be an essential tool, safeguarding against regulations that may result in fragmentation.

Furthermore, the analysis emphasises the need for documentation and policy action against gender disinformation in South Asia. Women, trans individuals, and non-binary people are regularly targeted in the region, with disinformation campaigns aimed at silencing marginalised voices. Gender disinformation often focuses on women in politics and the public domain, taking the form of hate speech, misleading information, or character attacks. The mention of NIDAN’s development of a dataset focused on gender disinformation indicates a concrete step towards understanding and addressing this issue.

Digital literacy and skills training are highlighted as important factors in bridging the digital divide and empowering marginalised communities. The analysis emphasises the importance of democratising access to digital education and ensuring that training is relevant and contextualised. This includes providing practical knowledge and involving the user community in the development process. Additionally, the analysis calls for inclusive digital training that takes into consideration the needs of persons with disabilities and respects economic differences.

The analysis also explores the broader topic of decolonising the internet and the role of technology in societal development. It suggests that the decolonisation of digital technologies should involve not only the use of these technologies but also the production process. There is an emphasis on the inclusion of diverse perspectives in technology creation and data analysis to avoid biases and discrimination. The analysis also advocates for the adaptation of platform policies to respect cultural differences and acknowledge other human rights, rather than solely adhering to external legislation.

In conclusion, the analysis provides a comprehensive assessment of the impact of AI development on human labour, highlighting the precarious conditions faced by workers and the unequal distribution of work. It calls for improvements in labour conditions and respect for workers’ rights. The analysis also raises awareness of the need to document and tackle gender disinformation, emphasises the importance of digital literacy and skills training for marginalised communities, and supports the decolonisation of the internet and technology development. These insights shed light on the challenges and opportunities in ensuring a more equitable and inclusive digital landscape.

European Parliament Delegation to the IGF & the Youth IGF | IGF 2023 Open Forum #141

Table of contents

Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.

Knowledge Graph of Debate

Session report

Mr. Lagodinsky

The European Parliament’s approach to artificial intelligence (AI) and generative AI focuses on regulation rather than a complete ban. The regulation primarily targets high-risk applications and generative AI to ensure responsible and safe use of these technologies.

One of the driving forces behind this approach is the recognition of citizens’ unease regarding AI technology. There is growing concern among the public about the potential risks and implications of AI, leading to a closer examination of the European Union’s regulation strategy. This scrutiny extends beyond European borders, with Africa also closely observing these developments.

The Parliament emphasizes the importance of striking a balance between protecting small and medium-sized enterprises and safeguarding fundamental rights and environmental standards. While there is a need to avoid overregulation that could stifle innovation and burden businesses, it is equally crucial to establish regulations that prioritize human rights and environmental sustainability.

By taking a supportive stance towards the regulation of AI, the European Parliament acknowledges the need for a careful and measured approach. It recognizes the concerns of small and medium-sized enterprises that prefer to avoid excessive regulation while understanding the value of protecting fundamental rights and environmental standards.

Overall, the European Parliament aims to establish regulations that create an environment where AI technology can thrive while ensuring its responsible use that promotes individual well-being and environmental preservation. This approach aligns with the United Nations’ Sustainable Development Goals, particularly SDG 9 (Industry, Innovation, and Infrastructure) and SDG 16 (Peace, Justice & Strong Institutions). It demonstrates a commitment to addressing the ethical and societal implications of AI technology and sets an example for other regions and countries grappling with similar challenges.

Nathalie

In order to address the emerging online threats and vulnerabilities affecting children, there is an urgent need for a comprehensive online risk assessment. This assessment can provide valuable insights that will inform policies and industry standards aimed at protecting children online. By understanding the specific risks and vulnerabilities that children face in the digital world, stakeholders can develop targeted measures to safeguard their well-being.

It is crucial to recognize that the online landscape is constantly evolving, with new risks emerging all the time. Therefore, a comprehensive assessment is necessary to ensure that policies and industry standards remain effective and up to date. By identifying and analyzing these risks, decision-makers can better understand the scope and severity of the challenges faced by children in cyberspace.

To successfully protect children’s rights online, it is essential for governments, companies, academia, educators, and civil society to collaborate. Each stakeholder brings unique expertise and perspectives to the table, making multi-stakeholder collaboration vital in reducing online risks. By working together, these different entities can share knowledge, resources, and best practices, and develop comprehensive strategies to safeguard children and promote their digital well-being.

Moreover, this collaboration is not just limited to protecting children’s rights, but also contributes to the global partnership for sustainable development. The need for a safe and secure digital environment is aligned with the Sustainable Development Goal 17.16, which aims to enhance global partnerships for sustainable development. By engaging in multi-stakeholder collaboration, stakeholders can collectively work towards creating a safer online space for children, supporting the broader goal of sustainable development.

In conclusion, a comprehensive online risk assessment is crucial for addressing the evolving online threats and vulnerabilities faced by children. It provides the necessary insights to shape effective policies and industry standards. Additionally, multi-stakeholder collaboration is of paramount importance in reducing online risks and protecting children’s rights. The involvement of governments, companies, academia, educators, and civil society is essential for enhancing the global partnership for sustainable development and ensuring a safer digital environment for children.

Brando

Brando emphasises the need for the involvement of young people in the design and governance of AI policies. He recognises that young people bring a unique perspective and understanding, which is essential in shaping policies that are relevant and effective. Brando is actively working on the AI Act, which includes a clear reference to the importance of stakeholder involvement, including young people.

In addition to his focus on youth involvement, Brando also recognises the crucial issue of understanding and handling the tension between democracy and new technologies. He believes that this issue requires more engagement from young people, similar to the global mobilisation they have shown for climate issues. Brando commends the efforts of young people in advocating for climate action and sees a need for similar engagement in addressing the challenges posed by new technologies.

Brando’s work extends beyond mere recognition and advocacy. He is actively involved in negotiating for the inclusion of stakeholder involvement in the parliament text of the law. By doing so, he aims to ensure that the perspectives of young people and other stakeholders are considered and integrated into the decision-making process.

Overall, Brando’s stance highlights the significance of youth involvement in shaping AI policies and addressing the tension between democracy and new technologies. His recognition of the global dimension in legislative work and the need for stakeholder engagement reflects a comprehensive and inclusive approach. By actively working towards these goals, Brando aims to create policies that are democratic, equitable and responsive to the challenges of our rapidly evolving technological landscape.

Peter

The involvement and interests of the youth community have greatly enhanced the Declaration for the Future of the Internet (DFI) process. A successful half-day workshop, held on the first day of the Internet Governance Forum (IGF), had youth IGF rapporteurs participating as animators and reporters. This workshop emphasised the importance of the DFI and highlighted the critical role of youth in shaping the digital future and the governance system of the DFI.

The main objective of the DFI is to integrate governments that are already part of the multi-stakeholder process into various communities. This approach aims to bridge the gap between the government and other stakeholders, including civil society, academia, the business sector, and most importantly, the youth. By involving diverse stakeholders, the DFI ensures that concerns from different communities, particularly the youth, are considered.

It is argued that the DFI provides an opportunity for governments to become more aware of concerns raised by various communities, including the youth. By actively involving governments in the multi-stakeholder process of the IGF, the DFI aims to make them more engaged and informed decision-makers. This facilitates a democratic approach to internet governance by incorporating diverse perspectives.

Furthermore, governments that believe in democratic principles and a human-centric nature of the internet are encouraged to support and sign up for the DFI. By participating in the DFI, governments can engage with like-minded countries and have meaningful interactions. Additionally, the DFI plays a significant role in the Global Digital Compact (GDC) process and the World Summit on the Information Society Plus 20 (WSIS+20) discussions.

In conclusion, the active involvement and interests of the youth community have positively influenced the success of the DFI process. The DFI seeks to bring governments closer to the multi-stakeholder process of the IGF and raise awareness about the concerns of different communities, including the youth. Governments that value democratic principles and a human-centric internet should actively support and participate in the DFI. By doing so, they can engage with like-minded countries and play a significant role in shaping the future of internet governance.

Regina Fuxova

Regina Fuxova, a member of EURID, recognizes the Youth Committee as an integral aspect of the company’s corporate governance. This committee serves as a platform for inspiration and the dissemination of information concerning EURID’s activities, providing members with new opportunities to enhance their future careers. The involvement of young people in the committee is a testament to EURID’s commitment to youth inclusion.

EURID goes beyond youth involvement solely within the Youth Committee and extends it to activities for smaller children, such as Code Week.eu. This inclusion emphasizes the importance of involving young people in various aspects of EURID’s work. By engaging young individuals in activities such as Code Week.eu, EURID demonstrates its dedication to fostering a sense of inclusion and inspiring young minds.

EURID’s commitment to raising awareness about cybersecurity and Internet governance is demonstrated through initiatives like the ‘Safe Online’ art competition. This competition, designed for high school students, aims to start conversations about these vital issues with teachers and, indirectly, with parents. By organizing such events, EURID actively spreads awareness about the importance of cybersecurity and Internet governance, contributing to the UN’s sustainable development goals of Decent Work and Economic Growth and Industry, Innovation, and Infrastructure.

Regina Fuxova further showcases her support for EURID’s youth inclusion initiatives by suggesting that the organization shares its best practices with other peers in the field. This proposal highlights her belief in the strength of EURID’s approach and suggests that other organizations could benefit from implementing similar strategies. Through sharing its best practices for youth inclusivity, EURID can inspire and guide other entities in their own efforts.

In conclusion, Regina Fuxova’s perspective on the Youth Committee as a vital component of EURID’s corporate governance, EURID’s commitment to youth inclusion through activities like Code Week.eu, its efforts to raise awareness about cybersecurity and Internet governance, and Regina’s suggestion to share best practices all reflect EURID’s dedication to youth involvement and inclusive practices. These initiatives contribute to the achievement of the sustainable development goals of Quality Education, Reduced Inequalities, Decent Work and Economic Growth, and Industry, Innovation, and Infrastructure.

Colleague

The discussion centred on the Hiroshima AI Process, which aims to enhance global cooperation among G7 countries in the field of Artificial Intelligence (AI). This process complements the AI Act introduced by the European Union (EU), which seeks to ensure that AI systems undergo a risk-based security analysis.

The EU places significant emphasis on developing AI that is human-centric and aligned with fundamental rights. It actively works towards legislation addressing the ethical concerns of AI, aiming to establish regulations that guarantee responsible and accountable AI use.

The EU encourages a multidisciplinary approach to AI, recognizing its complexity and the need for input from various sectors and stakeholders. Discussions have taken place on establishing a multi-stakeholder forum to foster collaboration and knowledge sharing. These initiatives demonstrate the EU’s commitment to engaging the international community and avoiding isolation in developing and regulating AI technologies.

Overall, participants supported regulating AI while promoting innovation. They advocated for a framework for AI regulation akin to the regulation of medicines, ensuring appropriate scrutiny and oversight while allowing room for advancement.

The analysis primarily focused on the positive sentiment surrounding AI regulation and innovation, indicating a widespread recognition of the need for responsible and ethical AI development. The emphasis on risk-based security analysis, human-centric AI, and the multidisciplinary approach highlights a strong desire to align with international standards and respect fundamental rights.

In conclusion, the discussion underscores the importance of global cooperation, multidisciplinary collaboration, and ethical considerations in AI regulation and innovation. The EU and participating countries are committed to creating a regulatory framework balancing innovation and safeguarding individual rights and well-being.

Yulia Mournets

The Youth Internet Governance Forum (Youth IGF) has actively contributed to shaping the future of internet policies, with a particular emphasis on involving young leaders in decision-making processes. Yulia Mournets, a key figure in the Youth IGF, stressed the importance of dialogue between the youth and the Internet Governance Forum (IGF) in influencing the policies that will shape the future of the online world.

The European Parliament delegation has shown potential support for a working group focused on the IGF. This is a positive development, as it indicates that the youth’s perspective and participation in internet governance are being recognized and valued by influential stakeholders.

The Youth IGF has made significant recommendations for a digital compact, one of which is the establishment of youth advisory committees within private sector structures. This recommendation aims to ensure that young people have a voice in decision-making processes related to internet policies. Notably, the Youth Advisory Committee created by EURID serves as a successful example of implementing such recommendations.

Under the presidency of the Czech Republic, the Youth IGF actively participated in several meetings, which demonstrates their dedication and commitment to advocating for youth involvement in internet governance. This involvement extends beyond Europe, as the Youth IGF has established more than 10 safe internet committees in African countries, highlighting their global reach and impact.

The Youth IGF has also played a significant role in the child online protection initiative of the International Telecommunication Union (ITU). Their contribution to this initiative underscores their commitment to ensuring a safe and secure internet environment for young people.

Furthermore, the Youth IGF’s recommendations have led to the establishment of a special category for the .EU award, which focuses on recognizing the achievements of young entrepreneurs. This acknowledgement of young entrepreneurs’ contributions aligns with the Sustainable Development Goal 8 (Decent Work and Economic Growth) and further solidifies the Youth IGF’s influence in shaping policies that support economic opportunities for the youth.

In conclusion, the Youth IGF has actively participated in shaping internet policies, with a particular focus on involving young leaders in decision-making processes. Their efforts have been acknowledged and supported by entities such as the European Parliament delegation, and their recommendations have led to the successful implementation of initiatives such as youth advisory committees and the .EU award category for young entrepreneurs. The Youth IGF’s impact extends beyond Europe, with their involvement in meetings under the Czech Republic presidency and the establishment of safe internet committees in African countries. Ultimately, their dedication to advocating for youth participation in internet governance has made a positive contribution to the future of the internet.

Muhammad

The analysis discussed the importance of including youth in the digital governance sector and cooperation sector. It emphasized that youth are not only current stakeholders but also future leaders in digital transformation. Their active involvement in digital governance is crucial for shaping policies and strategies that will have a long-term impact on the digital world.

One noteworthy individual mentioned in the analysis is Muhammad, who serves as a Generation Connect Youth Envoy for the Asia-Pacific region with the International Telecommunication Union (ITU). His interest in digital governance further underscores the importance of youth engagement in this sector. His involvement brings valuable perspectives and insights that can contribute to the development of effective digital governance mechanisms.

The argument put forth is that youth, as the ones who embrace digital transformation most passionately, should be included in the digital governance infrastructure. This inclusion is seen as essential for ensuring the continuity of digital knowledge and skills to future generations. By actively involving youth in decision-making processes, their unique experiences and perspectives can be leveraged to develop inclusive and sustainable digital policies.

Furthermore, the analysis highlighted that including youth in digital governance and cooperation aligns with several Sustainable Development Goals (SDGs). These include SDG 4 – Quality Education, SDG 9 – Industry, Innovation and Infrastructure, and SDG 17 – Partnerships for the Goals. Involving youth in digital governance not only supports their educational development but also promotes innovation and fosters collaborations that drive positive change.

The sentiment towards the importance of including youth in digital governance is consistently positive throughout the analysis. It is clear that all speakers recognize the value of youth contribution in the digital governance sector and believe in their potential as agents of change. By creating an inclusive and youth-centered digital governance ecosystem, societies can harness the immense talent and creativity of young individuals to shape a future that is technologically advanced and socially equitable.

In conclusion, the analysis and observations made strongly advocate for the inclusion of youth in the digital governance and cooperation sector. Youth are not just passive consumers of digital technologies but active participants and drivers of digital transformation. Their perspectives and insights are vital for creating sustainable and inclusive digital policies that benefit present and future generations. By involving youth in decision-making processes and fostering collaborations, we can harness their potential to shape a technologically advanced and socially equitable digital future.

Herman Lopez

Herman Lopez, a member of the standing group of the Internet Society, has expressed concern regarding Latin America’s limited participation in global Artificial Intelligence (AI) discussions. Lopez highlights the absence of Latin America in AI talks, while noting the active engagement of India and Africa. He advocates for the inclusion of Latin America, emphasising the importance of reducing inequalities and promoting representation.

Lopez’s concern arises from the fact that Latin America has seemingly been excluded from AI discussions, despite the potential contributions the region could make and the need for diverse perspectives in shaping AI policies and implementation. He argues that this exclusion prevents Latin America from influencing the development of AI systems that address its specific needs and challenges.

By pointing to the active involvement of India and Africa in shaping global AI discussions, Lopez provides evidence that other regions are already participating. This underscores the importance of Latin America having a voice in these discussions, to ensure its interests and perspectives are considered in the development of AI technologies.

Lopez’s call for the inclusion of Latin America in global AI discussions is driven by the goal of reducing inequalities. He believes that AI has the potential to exacerbate existing inequalities if it is driven solely by the interests of powerful countries or regions. By including Latin America, with its unique socio-economic context and challenges, in these discussions, Lopez argues for a more inclusive and equitable approach to AI.

Furthermore, Lopez emphasizes the importance of representation in AI discussions. By including Latin America, a region with diverse cultural, social, economic, and political contexts, decision-making processes around AI can be enriched. This diverse representation can lead to a comprehensive understanding of the implications of AI on different communities and ensure that the development and deployment of AI technologies are fair and inclusive.

In conclusion, Herman Lopez expresses concern regarding Latin America’s limited involvement in global AI discussions, while noting the active participation of India and Africa. He advocates for the inclusion of Latin America, highlighting the need to reduce inequalities and promote representation. By giving Latin America a voice in shaping AI policies and technologies, Lopez believes that a more inclusive and equitable approach to AI can be achieved, mitigating the potential adverse effects of unchecked AI development.

Irena Joveva

The speakers in the European Parliament discussed several important topics related to youth and digital literacy. Irena Joveva, the youngest elected delegate from Slovenia, emphasised the need for greater inclusion of the younger generation in the European Parliament, expressing appreciation for their involvement. This aligns with SDG 16, which aims to promote peace, justice, and strong institutions.

One speaker highlighted the importance of media freedom and the fight against disinformation. They mentioned their role in the recently adopted Media Freedom Act and in the initiation of inter-institutional negotiations, a positive step towards protecting media freedom and democratic principles. This promotes transparency, accountability, and informed decision-making, all crucial for SDG 16.

The undervaluation of digital literacy, especially among young people exposed to the digital world, was also discussed. The speakers emphasized the need to give digital literacy the recognition it deserves, as it plays a significant role in achieving SDG 4, which focuses on quality education.

Furthermore, the speakers called for increased efforts from schools and politicians in promoting digital literacy. This raises questions about the responsibility of educational institutions and policymakers in ensuring that young people have the necessary digital skills. This argument aligns with SDG 10’s goal of reducing inequalities, promoting digital inclusivity, and bridging the digital divide.

In summary, the analysis highlights the importance of youth involvement in the European Parliament, the need to protect media freedom and combat disinformation, and the undervaluation of digital literacy. It also prompts further exploration of the responsibilities of schools and politicians in promoting digital literacy. By addressing these issues, policymakers and stakeholders can work towards building a more inclusive and digitally empowered society.

Ananya

Three key arguments related to youth participation in digital technologies were presented. Firstly, it was emphasised that young people should be involved as stakeholders in any process related to digital technologies. This was supported by the fact that Ananya is a youth advisor to the USAID Digital Youth Council and is actively involved in the design and implementation of the Digital Strategy. The significance of this argument is underscored by the statement that digital technologies influence young people’s aspirations, ideas, and lives right from birth. By involving young people as stakeholders, their unique perspectives and insights can be incorporated into the decision-making processes, ensuring that the digital technologies being developed and implemented meet the needs and aspirations of the youth.

The second argument put forward was that young people from diverse backgrounds must be provided with a platform to share their inputs on policies that influence their lives. This argument was justified by Ananya’s suggestion to host consultations, youth summits, side events, networking sessions, conferences, exhibitions, and educational programmes. This inclusive approach recognises the importance of enabling participation from all segments of society and the value of diverse perspectives. Ananya further emphasised the significance of local, national, and international level fora to make the policy-making process more accessible, inclusive, and globally relevant. By actively involving young people from diverse backgrounds, policies can be better informed, resulting in reduced inequalities and stronger institutions.

Finally, it was highlighted that leveraging digital platforms and social media can be effective in engaging young people. Ananya emphasised the creation of interactive online spaces and the use of social media campaigns, hashtags, and online events like webinars to raise awareness and mobilise support from and with the youth. This approach recognises the increasing influence of digital platforms on young people’s lives and the ease with which they can connect and engage on these platforms. Utilising digital platforms and social media provides a powerful tool to reach and involve young people in discussions and decision-making processes related to digital technologies.

In conclusion, the arguments presented highlight the importance of involving young people as stakeholders in the development and implementation of digital technologies, providing a platform for their inputs on policies, and leveraging digital platforms and social media for effective engagement. By adopting these approaches, there is potential to create a more inclusive and impactful digital ecosystem that meets the needs and aspirations of young people from diverse backgrounds. It is vital to recognise the value of youth participation and ensure their voices are heard and incorporated into decision-making processes to build a digital future that is equitable and relevant for all.

Levi

The analysis of the provided information highlights several significant points raised by the speakers. Firstly, there are concerns about the impact of AI, misinformation, and disinformation, especially when perpetrated by certain government officials. This raises questions about the reliability and potential consequences of information in today’s digital age. The speakers have a negative sentiment towards this issue and stress the need for vigilance and measures to combat the spread of false information.

Secondly, the role of youths in internet governance and decision-making is emphasized. As three-quarters of internet usage is by the youth, their involvement becomes crucial in shaping policies and decisions related to the internet. The speakers acknowledge the innovative ideas and perspectives young individuals bring to the table. This underscores the importance of including young voices in discussions surrounding internet laws and regulations. The sentiment towards this point is positive, indicating the recognition of the valuable contributions young people can make.

Furthermore, the analysis reveals a questioning sentiment towards the European Union’s efforts to ensure the sustainability of youth engagement in policy and governance, particularly in the realm of technology and the internet. Levi, one of the speakers, raises doubts about the deliberate actions taken by the European Union to promote youth participation and inclusion. This observation highlights the need for further examination of the European Union’s initiatives and their effectiveness in bridging gaps and fostering sustainable youth engagement.

Lastly, the analysis reiterates the importance of equality and inclusion of youths in decision-making processes to pave the way for a sustainable future. There is a need for deliberate engagement of young individuals to create a sustainable future. This sentiment aligns with the principles of SDG 8 (Decent work and economic growth) and SDG 10 (Reduced inequalities), emphasizing the necessity of empowering and involving young people in shaping policies that directly affect them.

In conclusion, the analysis highlights concerns surrounding AI and misinformation, the significance of youth involvement in internet governance, questioning of the European Union’s efforts in promoting youth engagement, and the necessity of equality and inclusion in decision-making processes. These insights shed light on the complex landscape of internet governance, youth empowerment, and policy-making, prompting further examination and consideration of these issues.

João Pedro

João Pedro, a member of the youth advisory committee, is a strong advocate for the inclusion of youth voices in both private and public institutions. He believes that involving young people in decision-making processes has positive outcomes for all parties involved. João Pedro has found the collaboration between the youth and businesses, such as EURID, to be mutually beneficial.

One area where João Pedro sees potential for improvement is in evaluating strategies such as promoting the .eu domain in different regions of Europe. He suggests that EURID, the organization responsible for managing the .eu domain name, should assess the effectiveness of these strategies within their own structure. This comprehensive approach would provide a deeper understanding of how the .eu domain can be utilized across Europe.

The youth committee, including João Pedro, has been actively contributing valuable insights and feedback to EURID’s activities within the Internet governance ecosystem. Their advisory role positions them to provide guidance and recommendations to EURID, enhancing its decision-making processes.

Overall, João Pedro’s experiences highlight the importance of involving young people in decision-making within institutions. By incorporating youth voices, institutions like EURID can benefit from fresh perspectives, innovative ideas, and a better understanding of the needs and preferences of younger stakeholders.

This case study also emphasizes the significance of youth participation in achieving the Sustainable Development Goals (SDGs), particularly SDG 10: Reduced Inequalities and SDG 16: Peace, Justice, and Strong Institutions. By including young people in decision-making, we can work towards a more equitable and just society.

Christian-Sylvie Bouchoy

The European Parliament, led by Christian-Sylvie Bouchoy, is actively involved in internet governance forums and is committed to supporting the activities of the Internet Governance Forum (IGF). Bouchoy, a member of the European People’s Party and President of the Industry, Research and Energy Committee, introduced the members of the European Parliament Delegation to the IGF.

The European Parliament is strongly committed to supporting IGF activities. They have participated in most of the IGF forums and have initiated a letter to President Roberta Metsola to form a permanent working group on the IGF in the European Parliament. Additionally, members of the European Parliament are involved in different legislative dossiers on various areas related to internet governance.

In the digital area, the European Parliament is actively developing legislation. They have already adopted legislation on data governance, the Digital Markets Act, the Digital Services Act, cybersecurity, and the Artificial Intelligence Act. The Parliament is currently engaged in inter-institutional negotiations with the Council and the European Commission to finalize the legislation. These efforts demonstrate the Parliament’s commitment to addressing the challenges presented by AI and ensuring responsible use of technology.

The European Parliament strongly believes that artificial intelligence should not be used for mass surveillance. They are working on a position on artificial intelligence and are particularly concerned about the ethical issues surrounding biometric AI usage. The Parliament advocates for responsible and regulated use of AI.

Youth involvement and consultation in decision-making processes are encouraged by the European Parliament. They recognize the need for stronger and clearer involvement of young people in decisions related to digital legislation and future artificial intelligence. Some young people who are members of the European Parliament are actively connected to the youth and support their participation.

The European Parliament acknowledges the importance of dialogue and cooperation in internet governance. They have strong ties with Latin America and Africa and believe in working closely with them on issues related to internet governance and artificial intelligence. They have also suggested the possibility of establishing a similar network in Latin America.

Youth participation, particularly through the European Youth IGF and the public consultation phase, is deemed critical in shaping legislation on internet governance. The European Parliament commends the consultation process with Director O’Donohue and encourages the youth to take part in it.

In conclusion, the European Parliament, under the leadership of Christian-Sylvie Bouchoy, is actively engaged in internet governance and is dedicated to supporting IGF activities. They are actively developing legislation in the digital area and are advocating for the responsible use of artificial intelligence. Youth involvement and consultation are encouraged, and strong partnerships are being established with Latin America and Africa. The Parliament believes in the importance of constructive dialogue and recognizes the vital role young people play in shaping the future of internet governance.

Stefanets

Stefanets actively advocates for the organization of special events at the European Parliament to promote cooperation and involve young people. These events provide a platform for individuals to exchange ideas and establish regular cooperation. By drawing inspiration from the perspectives of the younger generation, senior members of the Parliament can benefit from their insights.

One significant event Stefanets supports is the Youth Forum, where young individuals present their ideas and contribute to discussions on important issues. Stefanets actively participates in the Forum, fostering an inclusive environment that values and encourages young voices. They recognize that many innovative concepts originate from the Youth Forum, highlighting the importance of engaging with young people and leveraging their fresh perspectives.

In addition to youth involvement, Stefanets prioritizes quality education and supports SDG 4. By fostering idea development, Stefanets empowers young individuals to contribute to the Sustainable Development Goals.

Stefanets also focuses on the digital decade, addressing issues such as addictive design and online child protection. They actively engage with children to understand the dangers children face in the digital world, which helps shape policies that safeguard children’s well-being.

The arguments presented by Stefanets reflect a positive sentiment towards promoting youth involvement, idea development, and prioritizing the well-being of children in the digital realm. By encouraging cooperation and engaging with young people, they aim to create a more inclusive and progressive future.

Overall, Stefanets’ commitment to organizing special events, supporting the Youth Forum, and addressing digital challenges showcases their dedication to cooperation, empowering youth, and safeguarding children’s well-being. Their actions align with SDG 16 and SDG 17, focusing on peace, justice, strong institutions, and partnerships for the goals.

Nadia Chekhia

During a discussion on youth participation in internet governance, two speakers shared their perspectives. The first speaker, who is responsible for coordinating the youth activities of the European Regional IGF, expressed doubts about how meaningful participation should be defined. They emphasized the need to reflect on this matter and to gain a better understanding of what meaningful participation truly entails. The sentiment of their argument was neutral.

On the other hand, the second speaker strongly advocated for integrating more young people from across Europe into the system of internet governance. They believed that it was crucial to provide youth with leadership positions to enhance their involvement. This approach aligned with the positive sentiment of the second speaker’s argument.

Both speakers highlighted their commitment to comprehending the concept of meaningful participation. They emphasized the importance of exploring this notion in depth and working towards implementing it.

The first speaker’s argument raised questions regarding the definition of meaningful participation, indicating a potentially critical analysis of the current understanding of the concept. The second speaker, on the other hand, firmly believed in the necessity of promoting youth involvement in internet governance and assigning them leadership roles.

This discussion on youth participation in internet governance sheds light on the varying perspectives within the field. It portrays the complexities involved in defining and implementing meaningful participation and highlights the importance of involving young people in decision-making processes. Such efforts can contribute to achieving the Sustainable Development Goals, including SDG 4 (Quality Education), SDG 5 (Gender Equality), SDG 10 (Reduced Inequalities), and SDG 16 (Peace, Justice, and Strong Institutions).

Enhancing the digital infrastructure for all | IGF 2023 Open Forum #135

Table of contents

Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.

Knowledge Graph of Debate

Session report

Dian

During its presidency of the Digital Economy Working Group (DEWG) in 2023, Indonesia placed a strong emphasis on the importance of digital skills. As part of these efforts, it launched three output documents aimed at improving digital skills and digital literacy. These documents include the Compendium of Framework of Practices and Policies on Advanced Digital Skills and Digital Literacy, the G20 Toolkit for Measuring Digital Skills and Digital Literacy, and a collection of policies and recommendations to improve the meaningful participation of people in vulnerable situations in the digital economy. These initiatives demonstrate Indonesia’s commitment to equipping its citizens with the necessary skills to thrive in the digital era.

Indonesia also actively participates in the BUD Forum, led by the Ministry of Communication and Informatics. They carry out priority deliverables in this forum, further highlighting their commitment to the development of the digital economy.

One of the key priorities of Indonesia’s DEWG 2023 presidency is to bridge the digital divide. By prioritising digital skills, Indonesia aims to bring economic prosperity and social inclusion on a global scale. They recognise that the digital divide hinders progress and are committed to ensuring that all individuals have access to the necessary resources and opportunities to thrive in the digital era.

Furthermore, Indonesia places great importance on developing a robust digital infrastructure. They understand that reliable and high-speed internet is the backbone of digital information and plays a crucial role in supporting economic growth and development. As such, Indonesia actively engages with international fora, including ASEAN and ITU, and seeks support from multinational entities to build and maintain a robust digital infrastructure.

In addition to these priorities, Indonesia also focuses on promoting e-governance and the digitalisation of government services. By involving the public sector in these efforts, they aim to streamline administrative processes, enhance transparency, and make it easier for citizens to access essential services.

Indonesia also recognises the importance of cybersecurity and data protection in the digital age. They collaborate with both the public sector and international organisations to establish data protection laws and enhance cybersecurity measures. This reflects their commitment to create a secure and trustworthy digital environment.

Another area of focus for Indonesia is digital education. They understand that digital skills are crucial for preparing the workforce of tomorrow. To facilitate this, they actively engage in public-private partnerships to develop and implement digital education programmes that train individuals in necessary digital skills.

Lastly, Indonesia emphasises the importance of inclusivity and cultural diversity in the digital space. Being a country with diverse cultural entities, Indonesia recognises the need for content in local languages and subsidising access to digital services. They strive to ensure that everyone, regardless of their background, has equal access to the benefits of the digital world.

In conclusion, Indonesia, under its DEWG 2023 presidency, is committed to advancing the digital economy by prioritising digital skills, bridging the digital divide, developing reliable digital infrastructure, promoting e-governance and cybersecurity, providing digital education, and fostering inclusivity and cultural diversity. These efforts demonstrate Indonesia’s dedication to harnessing the power of digital transformation for economic growth and social development.

Audience

During the forum, the individual made multiple requests to leave, expressing gratitude several times by saying “thank you.” The person also indicated their intention to say goodbye multiple times, using the phrase “bye-bye.” This suggests a polite and appreciative attitude toward the audience and participants. Although the reasons for wanting to leave were not explicitly stated, it can be inferred that the individual has completed their participation or has other commitments to attend to. Overall, their repeated expressions of gratitude and farewell indicate a respectful and appreciative departure from the forum. The individual’s gestures and words demonstrated a gracious and courteous exit, leaving a positive impression on the audience and participants.

Mr. Amano

Mercari, a popular peer-to-peer trading platform, is actively promoting a circular economy and expanding its global reach. With a customer base of over 20 million and gross merchandise volume (GMV) reaching 100 billion yen last year, Mercari is dedicated to reducing the disposal of items and encouraging sustainable consumption practices. Their initiatives include equipping high school students with digital skills and digital marketing skills through project-based learning programs. They collaborate with local educational institutions in places such as Wakayama and Kyoto, providing opportunities for students to sell local products on their platform. In addition, Mercari supports IT education and aims to increase the number of female engineers by donating to Kamiyama Tech College and conducting workshops for engineers and local communities. They also recognize the importance of hands-on interaction and dispatch specialist engineers to local schools and companies, facilitating real-life learning experiences. Furthermore, Mercari understands the global significance of implementing digital skills in local and developing countries, emphasizing the importance of communication and collaboration with top-tier engineers. Overall, Mercari’s commitment to sustainability, education, and inclusivity sets an inspiring example for companies seeking to make a positive impact.

Yamanaka San

The Japan International Cooperation Agency (JICA) is playing a significant role in capacity and infrastructure building across the ASEAN and Pacific regions. Last fiscal year, they had project funding of approximately 1.2 to 1.3 trillion yen, demonstrating their commitment to supporting development initiatives in these regions. Moreover, JICA’s efforts go beyond financial support. They have trained 13,217 individuals and employed 9,163 experts and volunteers from around the world, showcasing their dedication to capacity building and knowledge transfer.

JICA is also intensifying efforts to integrate digital components into existing infrastructure. They aim to enhance cybersecurity measures and have partnered with the ASEAN-Japan Cybersecurity Capacity Building Centre (AJCCBC) to develop and strengthen cybersecurity capabilities across the ASEAN region, ensuring a secure digital environment.

Additionally, JICA is actively working to expand technological connectivity. They plan to lay fiber lines for the New Urban Information Infrastructure (NUI) project, a digital initiative to enhance connectivity in urban areas. To achieve this, JICA is collaborating with partners such as the United States and Australia, highlighting the importance of global cooperation in driving technological advancements and promoting connectivity.

Partnerships with the private sector are considered crucial in achieving technological solutions and supporting connections between companies. JICA recognizes that the private sector has valuable technological solutions and expertise that can contribute greatly to development projects. Collaboration between Indonesian and Japanese companies is particularly emphasized to facilitate knowledge-sharing and innovative solutions.

Furthermore, the importance of having appropriate policies in place to support and foster innovation and ecosystem development is highlighted. The speakers argue that countries need a comprehensive approach that encompasses not only technology skills but also policy areas and digital skills to connect technology skills with the private sector and ensure a conducive environment for growth and progress.

In conclusion, JICA is playing a crucial role in capacity and infrastructure building across the ASEAN and Pacific regions. Their substantial funding, extensive training programs, and efforts to integrate digital components into infrastructure exemplify their commitment to sustainable development. Additionally, their emphasis on partnerships with the private sector and the need for effective policies underscores the importance of collaboration and a holistic approach to foster innovation and drive ecosystem development. Ultimately, JICA’s initiatives are contributing to the advancement of the regions and paving the way for a prosperous future.

Dr. Ran

The Association of Southeast Asian Nations (ASEAN) is recognizing the potential of a single digital economy, as it is currently the fifth largest economy in the world, with a market worth around US$3 trillion. The region has a significant consumer base of 300 million people, and this base has been further amplified by the pandemic’s acceleration of the digital transformation in ASEAN.

Despite this positive development, there are challenges in the journey of digital transformation in the region. One major challenge is the varying levels of digital readiness among ASEAN countries. Some countries are better prepared for the digital transformation than others, which creates a gap in terms of embracing the digital economy. Another challenge is the issue of cybersecurity, with a significant divide between the lowest and highest performing nations in terms of cybersecurity measures. This poses a risk to the stability and security of the digital ecosystem in the region. Additionally, emerging technologies like AI and Cloud Computing are having an impact on the labor market, further complicating the challenges of digital transformation.

Another pressing concern in the ASEAN region is the urgency for skill development and training. It is estimated that 10-20% of jobs will be displaced by digital technology in the coming years. However, there is a shortage of digitally skilled professionals in ASEAN, resulting in a need for about 50 million additional digital professionals. This highlights the need for comprehensive measures to bridge the digital divide and upgrade skills in the region.

ASEAN is taking proactive steps to address these challenges. Various ASEAN bodies, such as those responsible for SMEs, science and technology, and education, are setting up facilities to enhance digital knowledge and skills. The aim is to make currently unskilled workers relevant and to train higher-level professionals to meet the demands of the digital economy.

Inclusivity in the digital economy is a priority for ASEAN. Efforts are being made to equip micro, small, and medium-sized enterprises (MSMEs) with digital knowledge through initiatives like the ASEAN SME Academy. This will enable these enterprises to participate more actively in the digital economy and benefit from it.

Additionally, ASEAN is actively working on improving the logistics system and developing a digital payment system across the region. An agreement has already been secured to enhance cross-border e-commerce and digital payment using QR codes. These efforts aim to promote seamless integration and efficiency in cross-border transactions.

Addressing digital security is also a priority for ASEAN. Plans are underway to develop a system for digital ID or digital business ID, with the goal of creating an interoperable platform for businesses and consumers. This will enhance digital security and facilitate trustworthy digital transactions in the region.

In conclusion, ASEAN recognizes the potential of a single digital economy and is actively pursuing measures to accelerate the digital transformation. However, challenges such as varying levels of digital readiness, cybersecurity, and job displacement persist. The urgency for skill development and training in ASEAN is apparent, and initiatives to bridge the digital divide and upgrade skills are being implemented. Inclusivity in the digital economy, improvement of logistics and digital payment systems, and the development of digital ID systems are also key areas of focus. By addressing these challenges and embracing digital advancements, ASEAN aims to thrive in the digital age.

Daisuke Hayashi

The digital economy has great potential and is experiencing faster growth than the traditional economy. A 10% increase in internet adoption leads to a 0.5 to 1.2% growth in income, while a 1% increase in the adoption of digital technology is associated with labor productivity growth of 1.0 to 2.0%. The size of the digital economy is rapidly increasing and now accounts for between 5 and 7% of GDP. However, there is a disparity in company growth across regions, with GAFA and Microsoft dominating the market. Companies in the East Asia and Pacific (EAP) region have experienced slower growth, possibly due to a lack of knowledge and skilled workers. To address these challenges, discussions and partnerships involving multiple stakeholders are necessary. Daisuke Hayashi advocates for a diversified approach that involves both the public and private sectors in enhancing digital skills. Efforts are being made to address the digital skills gap, particularly among the younger generation, and to support the development of skilled individuals in local areas. There is a shift in focus from cybersecurity to the development of digitally skilled individuals. It is necessary to improve digital skills and foster digitalization through public-private collaboration. International exchange is also encouraged to drive innovation in the digital economy. Overall, it is essential to improve digital skills and ensure equitable growth in the digital economy.

Rika Tsunoda

During the 2023 G7 Hiroshima Summit, Japan emphasized its key focus areas in digital infrastructure and capacity building in developing countries. One of the main areas of focus is the need to bolster security and resilience in digital infrastructure. Japan recognises the importance of having a secure and robust digital infrastructure to support economic growth and development. To ensure supply chain resilience, Japan promotes the use of open 5G architecture and vendor diversification.

In addition to infrastructure, Japan is also working to address the knowledge gap in digital skills and literacy. The government of Japan offers capacity-building programmes in the digital field, aimed at improving digital skills and literacy in developing countries. Examples of these programmes include the ASEAN-Japan Cybersecurity Capacity Building Centre and cyber defence exercises. These initiatives are steps in the right direction towards improving digital skills and literacy, paving the way for greater digital inclusion.

Another area Japan is actively promoting is the establishment of a 5G Open RAN architecture. The Open RAN approach promotes supply chain resilience and transparency, and it encourages healthy competition. The Quad leaders have also announced cooperation with Palau on the deployment of Open RAN. Japan plans to hold a symposium on Open RAN through the ASEAN-Japan ICT Fund, further demonstrating its commitment to this technology.

The importance of variety in capacity-building programmes tailored to the needs of different countries is also emphasised by Japan. Each country has different requirements and needs when it comes to capacity building, and it is important to cater to those specific needs. Already, there are institutions and programmes such as APT (Asia-Pacific Telecommunity) and AJCCBC (ASEAN-Japan Cybersecurity Capacity Building Centre) providing capacity building. The Ministry of Internal Affairs and Communications (MIC) has been instrumental in helping Japanese telecom companies expand their ICT solution services overseas, catering to the diverse needs of different countries.

Furthermore, Japan aims to promote its ICT companies to share solutions for cross-border payment and digital ID with developing countries. Japanese ICT companies possess the technical abilities required to achieve secure cross-border payment and digital ID systems. The government of Japan sees the potential in using capacity-building initiatives to share these solutions and contribute to reducing inequalities and improving infrastructure in developing countries.

In conclusion, the 2023 G7 Hiroshima Summit highlighted Japan’s commitment to digital infrastructure and capacity building in developing countries. Japan aims to bolster security and resilience in digital infrastructure, address the knowledge gap in digital skills and literacy, and support the establishment of a 5G Open RAN architecture. The emphasis on variety in capacity-building programmes, as well as the promotion of Japanese ICT companies to share solutions for cross-border payment and digital ID, further demonstrates Japan’s dedication to fostering inclusive and sustainable development.

Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33

Table of contents

Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.

Knowledge Graph of Debate

Session report

Martin Boteman

The discussion delves into the significance of identity in the realm of security and the crucial role that AI can play in safeguarding identities. It is acknowledged that with the advancement of AI, data has become more frequently personally identifiable than ever before, leading to a need to address the complex relationship between identity and privacy.

One argument put forward is that security will require identity. The increasing personal identifiability of data, facilitated by AI, has made it imperative to establish and protect individual identities for the sake of security. This argument highlights the evolving nature of security in the digital age and the need to adapt to these changes.

On the other hand, a positive stance is taken towards the potential of AI in enhancing security with the identity factor. It is suggested that AI can aid in securing identities by leveraging its capabilities. The specifics of how AI can contribute to this aspect are not explicitly mentioned, but it is implied that AI can play a role in ensuring the authenticity and integrity of identities.

Furthermore, the discussion recognises the necessity to address the dichotomy between identity and privacy. While identity is essential for security purposes, safeguarding privacy is equally important. This creates a challenge in finding a balance between the two. The analysis raises the question of how to deal with this dichotomy in future endeavours, emphasizing the need for a thoughtful and nuanced approach.

Legal measures are acknowledged as an important consideration in the context of AI. However, it is argued that relying solely on legal frameworks is not enough. This underlines the complexity of regulating AI and the urgent need for additional measures to ensure the responsible and ethical use of the technology. The mention of the Algorithmic Accountability Act in the USA and the European Union’s AI Act serves to highlight the efforts being made to address these concerns.

Overall, there is a positive sentiment regarding the potential of AI in enhancing security with the identity factor. The discussion reinforces the significance of ethical principles such as security by design and privacy by design when implementing AI solutions. It asserts that taking responsibility for AI and incorporating these principles into its development and deployment is essential.

While more specific evidence or examples would have further strengthened these arguments, the analysis highlights the intersection of identity, privacy, AI, and security and emphasizes the need for responsible and balanced approaches in this rapidly evolving landscape.

Amal El Fallah-Seghrouchini, Executive President, Moroccan International Center for Artificial Intelligence

Artificial Intelligence (AI) has emerged as a powerful tool in the field of cybersecurity, with the potential to enhance and transform existing systems. By leveraging AI, common cybersecurity tasks can be automated, allowing for faster and more efficient detection and response to threats. AI can also analyze and identify potential threats in large datasets, enabling cybersecurity professionals to stay one step ahead of cybercriminals.

The importance of AI in cybersecurity is further highlighted by its recognition as a national security priority. Organizations such as the National Science Foundation (NSF), National Science and Technology Council (NSTC), and National Aeronautics and Space Administration (NASA) have emphasized the significance of AI in maintaining the security of nations. This recognition demonstrates the growing global awareness of the role that AI can play in safeguarding critical infrastructure and sensitive data.

However, the use of AI in cybersecurity also raises concerns about the vulnerability of AI systems. Adversarial machine learning techniques can be deployed to attack AI systems, potentially compromising their effectiveness. It is crucial to regulate the use of AI in cybersecurity to mitigate these vulnerabilities and ensure the reliability and security of these systems.

Furthermore, AI is not only a tool for defending against cyber threats but can also be used to create new kinds of attacks. For example, AI-powered systems can be utilized for phishing, cyber extortion, and automated interactive attacks. The potential for AI to be used maliciously highlights the need for robust ethical and regulatory considerations in the development and deployment of AI systems in the cybersecurity domain.

Ethical and regulatory considerations are necessary to strike a balance between the power of AI and human control. Complete delegation of control to AI in cybersecurity is not recommended, as human oversight and decision-making are essential. Frameworks should be established to ensure the ethical use of AI and to address concerns related to privacy, data governance, and individual rights.

Initiatives aimed at differentiating between identifier and identity are being pursued to strengthen security and privacy measures. By avoiding the use of a unique identifier for individuals and instead associating sectorial identifiers with identity through trusted third-party certification, the risk of data breaches and unauthorized access is reduced.

In addition to data protection, ethics in AI extend to considerations of dignity and human rights. It is essential to incorporate these ethical principles into the design and implementation of AI systems. Furthermore, informed consent and user awareness are crucial in ensuring that individuals understand the implications and potential risks associated with using generative AI systems.

Preserving dignity and human rights should be a priority in all systems, including those powered by AI. This encompasses a continuous debate and discussion in which the principles of ethics play a central role. Educating the population and working towards informed consent are important steps in achieving a balance between the benefits and potential harms of AI.

Accountability, privacy, and data protection are recognized as tools towards ensuring ethical practices. These principles should be integrated into the development and deployment of AI systems to safeguard individual rights and maintain public trust.

Overall, AI has the potential to revolutionize cybersecurity, but its implementation requires careful consideration of ethical, regulatory, and privacy concerns. While AI can enhance and transform the field of cybersecurity, there is a need for comprehensive regulation to address vulnerabilities. The differentiation between identifier and identity, as well as the emphasis on dignity and human rights, are important factors to consider in deploying AI systems. Promoting informed consent, user awareness, and ethical use of AI should be prioritized to maintain a secure and trustworthy digital environment.

Audience

During the discussion, the speakers delved into the implementation of ethical AI in the field of cybersecurity and raised concerns regarding its potential disadvantages when countering unethical adversarial AI. They emphasised that adversaries employing adversarial AI techniques are unlikely to consider ethical principles and may operate without any regard for the consequences of their actions.

The audience expressed apprehension about the practicality and effectiveness of using ethical AI in defending against unethical adversarial AI. They questioned whether the application of ethical AI would provide a sufficient response to the increasingly sophisticated and malicious tactics employed by adversaries. It was noted that engaging in responsive actions by deploying ethical AI to counter unethical adversarial AI might place defenders at a disadvantage, highlighting the complexity of the issue.

Given these concerns, the need for a thorough review of the application of ethical AI in response to unethical adversarial AI was acknowledged. There was specific emphasis on active cyber defence, which involves proactive measures to prevent cyber attacks and mitigate potential harm. The aim of the review is to ensure that the use of ethical AI is optimised and effectively aligned with the challenges posed by unethical adversarial AI.

These discussions revolved around the topics of Ethical AI, Adversarial AI, Cybersecurity, and Active Cyber Defence, all of which are highly relevant in today’s digital landscape. The concerns raised during the discussion reflect the ongoing tension between the desire to uphold ethical principles and the practical challenges faced when countering adversaries who disregard those principles.

Furthermore, this discussion aligns with the Sustainable Development Goals (SDGs) 9 and 16, which emphasise the importance of creating resilient infrastructure, fostering innovation, promoting peaceful and inclusive societies, and ensuring access to justice for all. By addressing the ethical challenges associated with adversarial AI in cybersecurity, efforts can be made towards achieving these SDGs, as they are integral to building a secure and just digital environment.

Overall, the discussion underscored the need for careful consideration and evaluation of the application of ethical AI in response to unethical adversarial AI. Balancing the ethical dimension with the practical requirements of countering adversaries in the ever-evolving digital landscape is a complex task that warrants ongoing discussion and analysis.

Anastasiya Kazakova, Cyber Diplomacy Knowledge Fellow, DiploFoundation

Artificial Intelligence (AI) plays a crucial role in enhancing cybersecurity by improving threat detection and intelligence gathering. However, concerns have been raised regarding the autonomous nature of AI and its potential to make impactful decisions in everyday life. It is argued that AI should not operate solely autonomously, highlighting the importance of human oversight in guiding AI’s decision-making processes.

A major issue in the field of AI is the prospect of conflicting AI regulations being established by major markets, including the EU, the US, and China. Such regulatory fragmentation raises concerns that the benefits of AI will be limited. Harmonized regulations that promote the widespread use of AI and extend its opportunities to different communities are therefore important.

The challenge of defining AI universally is another issue faced by legislators. With AI evolving rapidly, it becomes increasingly difficult to encompass all technological advancements within rigid legal frameworks. Instead, the focus should be on regulating the outcomes and expectations of AI, rather than the technology itself. This flexible and outcome-driven approach allows for adaptable regulations that keep up with the dynamic nature of AI development.

In the realm of cybersecurity, the question arises of whether organizations should have the right to “hack back” in response to attacks. Most governments and industries agree that organizations should not have this right, as it can lead to escalating cyber conflicts. Instead, it is recommended that law enforcement agencies with the appropriate mandate step in and investigate cyberattacks.

The challenges faced in cyberspace are becoming increasingly sophisticated, requiring both technical and policy solutions. Addressing cyber threats necessitates identifying the nature of the threat, whether it is cyber espionage, an Advanced Persistent Threat (APT), or a complex Distributed Denial of Service (DDoS) attack. Hence, integrated approaches involving both technical expertise and policy frameworks are essential to effectively combat cyber threats.

Ethical behavior is emphasized in the field of cybersecurity. It is crucial for good actors to abide by international and national laws, even in their reactions to unethical actions. Reacting unethically to protect oneself can compromise overall security and stability. Therefore, ethical guidelines and considerations must guide actions in the cybersecurity realm.

The solution to addressing cybersecurity concerns lies in creativity and enhanced cooperation. Developing new types of response strategies and increasing collaboration between communities, vendors, and governments are vital. While international and national laws provide a foundation, innovative approaches and thinking must be utilized to develop effective responses to emerging cyber threats.

Regulations play an important role in addressing AI challenges, but they are not the sole solution. The industry can also make significant strides in enhancing AI ethics, governance, and transparency without solely relying on policymakers and regulators. Therefore, a balanced approach that combines effective regulations with industry initiatives is necessary.

Increased transparency in software and AI-based solution composition is supported. The initiative of a “software bill of materials” is seen as a positive step towards understanding the composition of software, similar to knowing the ingredients of a cake. Documenting data sources, collection methods, and processing techniques promotes responsible consumption and production.
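
To make the idea concrete, the following is a minimal sketch of what such a machine-readable manifest might look like. The JSON layout, field names, and component versions are illustrative assumptions, not any specific SBOM standard or vendor format.

```python
# Sketch of an SBOM-style manifest extended with data provenance for an AI
# component. All names and versions below are hypothetical placeholders.
import json
from datetime import date

manifest = {
    "name": "example-threat-detection-model",
    "version": "1.2.0",
    "generated": date.today().isoformat(),
    "components": [  # the software "ingredients"
        {"type": "library", "name": "numpy", "version": "1.26.4"},
        {"type": "library", "name": "scikit-learn", "version": "1.4.2"},
    ],
    "training_data": [  # data sources, collection methods, processing steps
        {
            "source": "opt-in product telemetry",
            "collection_method": "user-consented submission of suspicious files",
            "processing": ["pseudonymization", "deduplication", "feature extraction"],
        }
    ],
}

with open("model_manifest.json", "w", encoding="utf-8") as fh:
    json.dump(manifest, fh, indent=2)
```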

In conclusion, AI has a significant impact on cybersecurity, but it should not operate exclusively autonomously. Addressing challenges such as conflicting regulations, defining AI, the right to “hack back,” and increasing sophistication of cyber threats requires a multidimensional approach that encompasses technical expertise, policy frameworks, ethical considerations, creativity, and enhanced cooperation. Effective regulations, industry initiatives, and transparency in software composition all contribute to a more secure and stable cyberspace.

Noushin Shabab, Senior Security Researcher, Global Research and Analysis, Kaspersky

Kaspersky, a leading cybersecurity company, has harnessed the power of artificial intelligence (AI) and machine learning to strengthen cybersecurity. They have integrated machine learning techniques into their products for an extended period, resulting in significant improvements.

Transparency is paramount when using AI in cybersecurity, according to Kaspersky. To achieve this, they have implemented a global transparency initiative and established transparency centers in various countries. These centers allow stakeholders and customers to access and review their product code, fostering trust and collaboration in the cybersecurity field.

While AI and machine learning have proven effective in cybersecurity, it is crucial to protect these systems from misuse. Attackers can manipulate machine learning outcomes, posing a significant threat. Safeguards and security measures must be implemented to ensure the integrity of AI and machine learning systems.

Kaspersky believes that effective cybersecurity requires a balance between AI and human control. While machine learning algorithms are adept at analyzing complex malware, human involvement is essential for informed decision-making and responding to evolving threats. Kaspersky combines human control with machine learning to ensure comprehensive cybersecurity practices.

Respecting user privacy is another vital consideration when incorporating AI in cybersecurity. Kaspersky has implemented measures such as pseudonymization, anonymization, data minimization, and personal identifier removal to protect user privacy. By prioritizing user privacy, Kaspersky provides secure and trustworthy solutions.
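
As a rough illustration of what pseudonymization and data minimization can look like in practice, the sketch below hashes a device identifier with a keyed function and drops fields that are not needed for analysis. The event layout, field names, and key handling are assumptions made for illustration, not a description of Kaspersky's actual implementation.

```python
# Sketch of pseudonymization and data minimization applied to a telemetry
# event before it leaves the device. Field names and the key are hypothetical.
import hmac
import hashlib

PSEUDONYM_KEY = b"rotate-this-key-regularly"  # would be managed and rotated securely in practice

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible token."""
    return hmac.new(PSEUDONYM_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def minimize(event: dict) -> dict:
    """Keep only the fields needed for threat analysis; drop or transform the rest."""
    return {
        "device_token": pseudonymize(event["device_id"]),  # pseudonymized identifier
        "file_hash": event["file_hash"],                    # needed to match known malware
        "detection": event["detection"],                    # verdict name only
        # deliberately dropped: username, file path, IP address
    }

raw_event = {
    "device_id": "LAPTOP-12345",
    "username": "alice",
    "file_path": "C:/Users/alice/Downloads/invoice.exe",
    "ip_address": "203.0.113.7",
    "file_hash": "a1b2c3d4e5f60718293a4b5c6d7e8f90",
    "detection": "Trojan.GenericKD",
}
print(minimize(raw_event))
```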

Collaboration and open dialogue are emphasized by Kaspersky in the AI-enabled cybersecurity domain. They advocate for collective efforts and knowledge exchange to combat cyber threats effectively. Open dialogue promotes the sharing of insights and ideas, leading to stronger cybersecurity practices.

It is crucial to be aware of the potential misuse of AI by malicious actors. AI can facilitate more convincing social engineering attacks, like spear-phishing, which can deceive even vigilant users. However, Kaspersky highlights that advanced security solutions, incorporating machine learning, can identify and mitigate such attacks.
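
As a simple illustration of how machine learning can flag such messages, the sketch below trains a tiny text classifier on hand-made examples. It assumes scikit-learn is installed; real detection systems use far richer signals (headers, URLs, sender reputation), so this is a toy example rather than any vendor's method.

```python
# Toy phishing-text classifier: TF-IDF features plus logistic regression.
# Training examples are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is locked, verify your password immediately at this link",
    "Urgent: CEO needs you to buy gift cards and send the codes today",
    "Minutes from yesterday's project meeting are attached for review",
    "Lunch menu for the office canteen next week",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

suspect = ["Please confirm your password now or your mailbox will be deleted"]
print(model.predict_proba(suspect))  # columns: [P(legitimate), P(phishing)]
```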

User awareness and education are essential to counter AI-enabled cyber threats. Kaspersky underscores the importance of educating users to understand and effectively respond to these threats. Combining advanced security solutions with user education is a recommended approach to tackle AI-enabled cyber threats.

In conclusion, Kaspersky’s approach to AI-enabled cybersecurity encompasses leveraging machine learning, maintaining transparency, safeguarding systems, respecting user privacy, and promoting collaboration and user education. By adhering to these principles, Kaspersky aims to enhance cybersecurity practices and protect users from evolving threats.

Dennis Kenji Kipker, Expert in Cybersecurity Law, University of Bremen

The discussions revolve around the integration of artificial intelligence (AI) and cybersecurity. AI has already been used in the field of cybersecurity for automated anomaly detection in networks and to improve overall cybersecurity measures. The argument is made that AI and cybersecurity have been interconnected for a long time, even before the emergence of use cases like generative AI.
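
For instance, a common pattern for automated anomaly detection is to train an unsupervised model on baseline traffic and flag flows that deviate from it. The sketch below assumes scikit-learn and synthetic per-flow features; it is a minimal illustration, not any particular product's pipeline.

```python
# Minimal network anomaly detection sketch using an Isolation Forest
# trained on synthetic "normal" flow features (bytes, packets, duration).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline traffic: [bytes_sent, packets, duration_seconds] for normal flows.
normal_flows = rng.normal(loc=[50_000, 40, 2.0], scale=[10_000, 8, 0.5], size=(1_000, 3))

# Learn what "normal" looks like without labelled attack data.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_flows)

# New observations: one typical flow and one suspiciously large, long-lived flow.
new_flows = np.array([
    [52_000, 38, 2.1],        # resembles baseline traffic
    [5_000_000, 4_000, 600],  # possible exfiltration or DDoS participation
])
labels = detector.predict(new_flows)  # +1 = normal, -1 = anomaly

for flow, label in zip(new_flows, labels):
    status = "ANOMALY" if label == -1 else "ok"
    print(f"{status}: bytes={flow[0]:.0f} packets={flow[1]:.0f} duration={flow[2]:.1f}s")
```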

It is argued that special AI regulation specifically for cybersecurity is not necessary. European lawmakers are mentioned as leaders in cybersecurity legislation, using the term “state-of-the-art of technology” to define the compliance requirements for private companies and public institutions. It is mentioned that attacks using AI can be covered by existing national cyber criminal legislation, without the need for explicit AI-specific regulation. Furthermore, it is highlighted that the development and security of AI is already addressed in legislation such as the European AI Act.

The need for clear differentiation in the regulation of AI and cybersecurity is emphasized. Different scenarios need different approaches, distinguishing between cases where AI is one of several technical means and cases where AI-specific risks need to be regulated.

The privacy risks associated with AI development are also acknowledged. High-impact privacy risks can arise during the development process and need to be carefully considered and addressed.

The struggles in implementing privacy laws and detecting violations are mentioned. It is suggested that more efforts are needed to effectively enforce privacy laws and detect violations in order to protect individuals’ privacy.

While regulation of AI is deemed necessary, it is also suggested that it should not unnecessarily delay or hinder other necessary regulations. The European AI Act, with its risk classes, is mentioned as a good first approach to AI regulation.

The importance of cooperation between the state and industry actors is emphasized. AI is mainly developed by a few big tech players from the US, and there is a need for closer collaboration between the state and industry actors for improved governance and oversight of AI.

It is argued that self-regulation by industries alone is not enough. Establishing a system of transparency on a permanent legal basis is seen as necessary to ensure ethical and responsible AI development and deployment.

Additional resources and stronger supervision of AI are deemed necessary. Authorities responsible for the supervision of AI should be equipped with more financial and personnel resources to effectively monitor and regulate AI activities.

The need for human control in AI-related decision-making is emphasized. Official decisions or decisions made by private companies that can have a negative impact on individuals should not be solely based on AI but should involve human oversight and control.

Safety in AI development is considered paramount. It is emphasized that secure development practices are crucial to ensure the safety and reliability of AI solutions.

Lastly, it is acknowledged that while regulation plays a vital role, it alone cannot completely eliminate all the problems associated with AI. There is a need for a comprehensive approach that combines effective regulation, cooperation, resources, and human control to address the challenges and maximize the benefits of AI technology.

Jochen Michels, Head of Public Affairs Europe, Kaspersky

During the session, all the speakers were in agreement that the six ethical principles of AI use in cybersecurity are equally important. This consensus among the speakers highlights their shared understanding of the significance of each principle in ensuring ethical practices in the field.

Furthermore, the attendees of the session also recognized the importance of all six principles. The fact that these principles were mentioned by multiple participants indicates their collective acknowledgement of the principles’ value. This shared significance emphasizes the need to consider all six principles when addressing the ethical challenges posed by AI in cybersecurity.

However, while acknowledging the equal importance of the principles, there is consensus among the participants that further multi-stakeholder discussion is necessary. This discussion should involve a comprehensive range of stakeholders, including industry representatives, academics, and political authorities. By involving all these parties, it becomes possible to ensure a holistic and inclusive approach to addressing the ethical implications of AI use in cybersecurity.

The need for this multi-stakeholder discussion becomes evident through the variety of principles mentioned in a poll conducted during the session. The diverse range of principles brought up by the attendees emphasizes the importance of engaging all involved parties to ensure comprehensive coverage of ethical considerations.

In conclusion, the session affirmed that all six ethical principles of AI use in cybersecurity are of equal importance. However, it also highlighted the necessity for further multi-stakeholder discussion to ensure comprehensive coverage and engagement of all stakeholders. This discussion should involve representatives from industry, academia, and politics to effectively address the ethical challenges posed by AI in cybersecurity. The session underscored the significance of partnerships and cooperation in tackling these challenges on a broader scale.

Moderator

The panel discussion on the ethical principles of AI in cybersecurity brought together experts from various backgrounds. Panelists included Professor Dennis Kenji Kipker, an expert in cybersecurity law from Germany; Professor Amal, the Executive President of the AI Movement at the Moroccan International Center for Artificial Intelligence; Ms. Noushin Shabab, a Senior Security Researcher at Kaspersky in Australia; and Ms. Anastasiya Kazakova, a Cyber Diplomacy Knowledge Fellow at the DiploFoundation in Serbia.

The panelists discussed the potential of AI to enhance cybersecurity but stressed the need for a dialogue on ethical principles. AI can automate common tasks and help identify threats in cybersecurity; Kaspersky, which detects 325,000 new malicious files daily, recognizes the role AI can play in transforming cybersecurity methods. However, AI systems in cybersecurity are themselves vulnerable to attack and misuse: adversarial techniques can be used to target AI systems, and AI can be misused to create fake videos and AI-powered malware.

Transparency, safety, human control, privacy, and defense against cyber attacks were identified as key ethical principles in AI cybersecurity. The panelists emphasized the importance of transparency in understanding the technology being used and protecting user data. They also highlighted the need for human control in decision-making processes, as decisions impacting individuals cannot solely rely on AI algorithms.

The panelists and online audience agreed on the equal importance of these ethical principles and called for further discussions on their implementation. The moderator supported multi-stakeholder discussions and stressed the involvement of various sectors, including industry, research, academia, politics, and civil society, for a comprehensive and inclusive approach.

Plans are underway to develop an impulse paper outlining ethical principles for the use of AI in cybersecurity. This paper will reflect the discussion outcomes and be shared with the IGF community. Feedback from stakeholders will be gathered to further refine the principles. Kaspersky will also use the paper to develop their own ethical principles.

In summary, the panel discussion highlighted the ethical considerations of AI in cybersecurity. Transparency, safety, human control, privacy, and defense against cyber attacks were identified as crucial principles. The ongoing multi-stakeholder discussions and the development of an impulse paper aim to provide guidelines for different sectors and promote an ethical approach to AI in cybersecurity.

Speakers

Amal, Anastasiya, Dennis, Jochen, Martin, Noushin

DCNN (Un)Fair Share and Zero Rating: Who Pays for the Internet? | IGF 2023

Table of contents

Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.

Knowledge Graph of Debate

Session report

Audience

The discussion surrounding Europe’s influence on Latin America’s policy decisions is of great interest. While the sentiment towards this topic remains neutral, it is acknowledged that everything discussed in Europe has a significant impact on Latin America’s policy agenda. This highlights the interconnectedness between the two regions in terms of policy-making.

The development of the interconnection ecosystem has been a notable achievement for the internet technical community. Previously, all the interconnections between ISPs and content providers used to happen in Miami. However, a significant effort has been made to develop a completely new interconnection ecosystem. This development has been positively received and is seen as a step forward in enhancing access for people and supporting industry, innovation, and infrastructure, in line with SDG9.

On the other hand, the adoption of new policies by countries like Brazil can have negative consequences. When a country adopts a particular policy, companies are required to pay and comply with the law, which may result in additional costs. As a result, companies may choose not to bring their caches and peerings into exchange points. This policy change can disrupt the existing system and have an adverse effect on telecommunications companies and content providers. Smaller stakeholders, such as small ISPs, small platforms, and small internet companies, will be particularly affected by such changes. Compared with the current scenario, the resulting disruption is expected to be significant.

The European telecom sector is facing several challenges, with a major concern being the cost involved. The sector has experienced a decrease in revenues of 30% since 2011. Furthermore, the returns on capital employed have been lower than those in the US. This negative trend highlights the need for attention and potential solutions to address the financial health of the sector.

Investment in networks is considered of utmost importance. The focus remains on the quality of networks, along with the need to improve coverage, especially regarding 5G networks. The current adoption rate of 5G in Europe stands at 15%, underscoring the room for growth and the importance of investing in network infrastructure. These investments align with the goals of SDG9, which include industry, innovation, and infrastructure.

Another suggestion put forth during the discussion is the idea of redistributing funds from over-the-top (OTT) platforms to support telecommunications services, particularly in rural areas. This proposal aims to utilize the funds obtained from OTT platforms as a source for a Universal Service Fund, which can be dedicated to strengthening telecommunications services in areas with limited connectivity. This concept resonates with the focus of the SDGs on reducing inequalities (SDG10) and industry, innovation, and infrastructure (SDG9).

In conclusion, the discussion on Europe’s influence on Latin America’s policy decisions provides valuable insights into various aspects of policy-making, interconnection ecosystems, the impact on small stakeholders, challenges faced by the European telecom sector, the importance of investment in networks, and the potential of redistributing funds for rural telecommunications services. While some of these points have positive implications, others highlight concerns and challenges, making it a diverse and multifaceted discussion.

Maarit Palovirta

The telecommunications market in Europe faces limitations in investment for infrastructure due to its unique market structure and intense competition. Compared to the United States and Japan, Europe has a more fragmented market, with 38 telecom operators each serving over 500,000 customers, creating challenges in securing investment for vital infrastructure like 5G networks.

Additionally, heavy sector-specific regulations and restrictions on mergers hinder the growth of European telecom operators. Pricing regulation further limits their flexibility in pricing services. Limited investment in telecommunication infrastructure impacts service quality, trust, and sustainability, leading to decreased customer satisfaction. Efforts are being made to measure the environmental sustainability of the sector.

Despite these challenges, the European Commission deems the existing open internet principles valid and not in need of revisiting. However, operators in Europe face a one-sided obligation to deliver any traffic regardless of size or form, limiting their ability to manage data traffic.

Investments in private networks are applauded, despite creating regulatory asymmetry. The impact of these investments needs evaluation in relation to access network investment. Addressing the lack of coverage and capacity in some areas requires investment and enhancement.

The European Commission aims to deliver a new regulatory framework to tackle industry challenges and supports open discussions with stakeholders. They advocate for a check-up of the internet ecosystem and the regulatory framework.

In conclusion, the telecommunications market in Europe faces limitations in infrastructure investment due to its unique market structure and competition. Sector-specific regulations and pricing restrictions further hinder operator growth. Limited investment affects service quality, trust, and sustainability. However, the existing open internet principles are deemed valid. Investments in private networks are praised, and efforts are being made to address coverage and capacity issues. A comprehensive evaluation of the internet ecosystem and regulatory framework, supported by open discussions with all stakeholders, is advocated.

Kamila Kloc

The issue of concern over internet fragmentation due to the practices of telecom companies and big tech companies is gaining significant attention. These practices have the potential to create a division between users and services, ultimately leading to increased inequality. The original intention of the internet was to be an open and interconnected environment, but certain practices have disrupted this ideal.

Limited internet access poses a significant drawback, especially for economically disadvantaged individuals. In Brazil, for instance, many people rely on public Wi-Fi or have limited access to home Wi-Fi. Towards the end of the month, when data allocations are nearing their limit, accessing the internet becomes challenging. As a result, individuals are left with restricted access to only a few apps or websites, exacerbating existing inequalities.

Additionally, limited internet access can contribute to the spread of misinformation. When people are unable to verify the information they receive due to restricted access, it becomes easier for unverified or false information to circulate. This situation leads to an increase in disinformation, undermining the goal of an informed and educated society.

The practices of zero rating and fair share also adversely affect consumers, particularly in economically disadvantaged regions. Zero rating is often presented as a way to provide free and unlimited access to specific apps or services. However, in practice, it can restrict individuals’ choices and tie them to specific apps. Furthermore, fair share practices, aimed at increasing revenue for telecom companies, may result in increased prices and reduced service quality. These practices further disadvantage consumers, especially those in economically vulnerable communities.

When discussing open internet access and methods to expand access, it is crucial to prioritize the well-being of consumers. The focus should be on finding solutions that ensure equal access to the internet for all individuals, irrespective of their socioeconomic status. Addressing the distortion of the telecom market, whether through existing or potential practices, is essential to prevent further inequality.

To summarize, the concern over internet fragmentation and limited access resulting from the practices of telecom companies and big tech companies is of growing importance. These practices can lead to a digital divide and increased inequality among users and services. Limited internet access exacerbates this inequality and hampers individuals’ ability to verify information, facilitating the spread of misinformation. The practices of zero rating and fair share also harm consumers, particularly in economically disadvantaged areas. It is crucial to prioritize consumers’ welfare when discussing open internet access and explore equitable approaches to expand access for all.

Artur Coimbra

The internet architecture has significantly changed over the past 15-20 years, with content now being located closer to users. This transformation has led to the emergence of micro data centers, content delivery networks, and caching infrastructures, revolutionizing the way content is delivered. Additionally, there has been a remarkable reduction in data storage costs, with prices decreasing by as much as 98% or 99% during this period. These changes have not only made the service more affordable and efficient but have also resulted in cost savings for IP transit contracts.

While these developments have brought benefits to users and content providers, telcos face pressure from large digital platforms to carry their traffic for free. Previously, telcos charged both content providers and users through IP transit contracts. However, due to the pressure exerted by big tech platforms, telcos are now compelled to carry this traffic without charge, leading to a shift towards a one-sided market. This transition has placed telcos in a challenging position, as they are unable to increase charges for users due to legal restrictions on data caps and other market factors.

A market solution is seen as a positive approach to address the pressure telcos face from big tech platforms. Creating a healthy and sustainable network is an incentive for both telcos and big digital platforms, emphasizing the need for a market-driven solution.

It is important to differentiate whether the pressure telcos experience is a result of bargaining power or market power exerted by big tech platforms. If the pressure stems from bargaining power, it is considered a norm within the business environment. However, if it is a consequence of market power, it becomes a structural issue that necessitates intervention from regulators and legislators. This distinction is crucial in determining the appropriate course of action.

In Brazil, regulators are adopting an evidence-based approach to define the problem before seeking a solution. Gathering evidence and understanding the issue better is seen as essential for achieving the objective of increasing funds available for network investment.

When designing the concept of a fair share, careful consideration must be given to ensure sufficient funds are allocated to network investment. If the fair share results in pricing competition among users, the available funds for investment could be depleted. Therefore, striking a balance between fair treatment and maintaining adequate investment funds is vital.

In conclusion, the evolution of internet architecture has brought about positive changes, including cost reduction and improved services. However, telcos now face challenges due to pressure from big tech platforms. Finding a market solution and distinguishing between bargaining power and market power will be crucial for maintaining healthy networks. Regulatory intervention may be necessary in cases involving market power. The regulator in Brazil is adopting an evidence-based approach to addressing the issue at hand. Designing a fair share concept that enables investment without depleting funds is of utmost importance.

KS Park

The "sending party pays" settlement rule implemented among internet service providers (ISPs) in South Korea has had several negative consequences. This rule has inflated internet access fees, putting a financial strain on both ISPs and content providers. Content providers have been required to pay more as their host ISPs send more data to other ISPs. Consequently, South Korea's IP transit fees have become significantly higher than those in Frankfurt and London, reaching ten times and eight times the respective fees in those cities in 2021. As a result, public interest apps, such as the COVID location announcement system, have been unable to fully function due to the exorbitant internet transit fees.

Furthermore, the presence of paid peering has caused confusion and undermined network neutrality. A significant portion of internet traffic passes through paid peering points, raising concerns about network neutrality violations, and confusion persists over whether such violations are an inherent result of paid peering.

Regulations from both the Federal Communications Commission (FCC) and the Body of European Regulators for Electronic Communications (BEREC) do not explicitly condemn paid peering, leaving room for uncertainty and complications in enforcing network neutrality.

The concept of mandatory paid peering is also met with negative sentiment. Implementing mandatory paid peering would likely lead major companies such as Google and Netflix to disconnect from the network rather than pay access fees. If content providers burdened with peering fees disconnect, regulators would have limited options without fundamentally altering the nature of the internet.

Despite these issues, the principles of freedom to connect and not charging for data delivery remain positive aspects of the internet. These principles are considered the foundation of the global product of the internet and enable users to connect freely without being burdened by data delivery charges.

On a positive note, despite a five-fold increase in data traffic, the cost of network maintenance and development has remained constant over the past five years due to technological advancements. This demonstrates the efficiency and progress made in maintaining networks and supporting the growing demand for data.

Turning to the topic of 5G, Korean telecoms have faced challenges in delivering good connectivity despite forcing consumers to purchase 5G phones. This has resulted in consumer dissatisfaction and class-action lawsuits against the telecoms. Additionally, the government in Korea has revoked 5G spectrum licences from certain telecoms, further complicating the situation.

European telcos, on the other hand, have managed to maintain profits despite falling revenues, thanks to the decreasing cost per unit of data. They have been able to offset the declining revenues by reducing the cost of data delivery.

However, it’s important to note that declining profits of telcos do not guarantee the maintenance of privacy. The Korean case, for example, indicates that despite falling costs and sustained profits, privacy has not been adequately protected.

In conclusion, the "sending party pays" rule among ISPs in South Korea has placed financial strain on both ISPs and content providers. The presence of paid peering has raised concerns about network neutrality violations. Despite these challenges, the principles of freedom to connect and not charging for data delivery remain key pillars of the internet. Technological advancements have enabled the cost of network maintenance and development to remain constant despite increased data traffic. Challenges with 5G connectivity and lawsuits have arisen for Korean telecoms, while European telcos have maintained profits through reduced data delivery costs. However, the declining profits of telcos do not guarantee the protection of privacy.

Konstantinos Komaitis

Applying old telecoms rules to the internet is widely regarded as detrimental, as it would create unanticipated barriers to entry. The approach is seen as nonsensical, particularly because telecoms rules are being advanced under the banner of internet governance. The argument against these rules is that they would hinder competition and impede innovation in unpredictable ways.

It is also argued that the internet infrastructure is not solely dependent on telecom operators. A diverse range of actors, including technology companies, contribute significantly to the development and maintenance of the internet ecosystem. Content and application providers play a vital role in supporting internet infrastructure, as exemplified by their contributions through CDNs, data centers, and cloud services. Therefore, portraying only telecom operators as the sole contributors to internet infrastructure is inaccurate.

The issue at hand also revolves around network neutrality, and concerns have been raised regarding potential discrimination against certain applications and content providers. Such cases would violate network neutrality principles, not only from a technological standpoint but also in terms of economic fairness.

The debate around Universal Service Funds (USFs) has garnered criticism from various perspectives. Telefonica, for instance, suggests that Europe should not replicate the USA’s approach to USFs and instead advocates for direct payments as a more suitable solution. Additionally, Komaitis questions whether telecom companies genuinely desire a discussion centered on USFs, suggesting a misalignment of interests.

Criticism is also directed towards Europe’s telecom model, which is deemed as setting a poor example. Komaitis specifically points out flaws in Europe’s approach and highlights the need for a more effective model.

Notably, over 20 organisations globally, from Brazil, India, Europe, and the United States, have expressed similar concerns about the infrastructure issue, indicating a widespread and significant global concern. This highlights the need for a global dialogue and deliberation on infrastructure, led by civil society organisations.

Komaitis stands firmly against the current method of discussing infrastructure and believes that it needs fundamental changes. He argues that the current conversation around infrastructure is primarily driven by telecom operators, neglecting the perspectives and interests of other stakeholders.

In conclusion, applying outdated telecoms rules to the internet is widely seen as detrimental and likely to create unforeseen barriers. The internet ecosystem relies on diverse actors, including technology companies, and portraying only telecom operators as contributors to infrastructure is misleading. The issue at hand encompasses concerns over network neutrality, technological and economic discrimination, universal service funds, Europe’s telecom model, and the need for a more inclusive and global discussion on infrastructure. Komaitis takes a stance against the current infrastructure dialogue and calls for a change in approach.

Thomas Lohninger

In the discussion surrounding the telecom industry, several key points emerge. Firstly, the practice of zero-rating, which allows users to access certain online content without incurring data charges, is prevalent in many nations. This practice controls how users experience the internet by incentivising them to use certain services for free.

Concerns have also been raised about the shift in the telecom industry towards prioritising profit over quality. Some argue that this focus on profit optimisation may lead to a deterioration in the overall user experience. Critics suggest that this approach could result in the elimination of local caching services, potentially increasing costs for consumers.

The concept of net neutrality is also a contentious issue. It is argued that network fees are inherently incompatible with the principles of net neutrality. Those who support net neutrality argue that all users should have equal access to the internet, without any discrimination or preferential treatment based on payment.

Opponents of a proposition that violates net neutrality predict that it would be harmful to society and the internet ecosystem as a whole. They argue that such a proposition would violate the principle of net neutrality and would primarily serve the profit margins of telecom companies. Instead, they suggest that the concerns and needs of society should be the deciding factor, rather than simply focusing on telecom companies’ profits.

Commissioner Thierry Breton has faced criticism for not upholding due diligence standards. His previous role as CEO of France Telecom has led to accusations that he broke his promise in the European Parliament. In response, some countries, such as Germany and the Netherlands, have issued letters to the European Commission, urging it to uphold due diligence standards.

Furthermore, when it comes to network investment, there is evidence suggesting that simply investing more money in improving the network infrastructure may not necessarily result in better quality for users. This challenges the notion that money is the main bottleneck in network rollout.

The influence of corporate interests on the decision-making process within the European Commission is also a point of concern. The appointment of the former CEO of France Telecom to the commission is seen by critics as an example of corporate capture. This has led to the promotion of potentially damaging ideas that have been rejected by stakeholders other than telecom companies.

Additionally, the creation of a major telecommunication oligopoly in Europe is viewed by some as an unfavorable outcome. Instead, it is argued that a more desirable model for the telecom industry would involve competition and cooperation among multiple players, rather than domination by a few.

There are also diverging opinions regarding the nature of telecommunications. Some argue that it should be treated as a public utility, prioritising public access and welfare. On the other hand, there are those who disapprove of market deregulation in the industry, likely due to concerns about inequality and the integrity of the market.

In conclusion, the telecom industry has sparked various debates and concerns. The practice of zero-rating, the shift towards profit optimisation, net neutrality, corporate influence, network investment, market deregulation, and the nature of telecommunications as either public utilities or market-driven entities are all key topics of contention. Clear arguments have been presented from different perspectives, each supported by specific evidence and rationales. The discussions highlight the complex challenges faced by the telecom industry and the importance of carefully considering the potential consequences of various policy decisions.

Jean-Jacques Sahel

The analysis of the speakers’ views on the internet ecosystem and its impact on consumers, innovation, and infrastructure provides valuable insights. One of the key points emphasised by the speakers is the need to enhance the open internet to drive innovation and foster digital transformation. They argue that strong emphasis should be placed on preserving the open nature of the internet, as it has been a game-changer in providing access to information for people globally. They also highlight how the internet has become an essential tool for everyday life and the economy as a whole.

Efforts to improve internet connectivity should not only focus on urban areas but also on reaching the last 5-10% of the population in hard-to-reach areas. The aim is to bridge the digital divide and ensure that everyone can benefit from the opportunities offered by the internet. In this regard, it is important to facilitate the easier deployment of internet infrastructure, making it more accessible to remote communities.

The analysis also recognises the significant contributions made by content and application providers in the internet ecosystem. These providers play a crucial role in driving innovation and creating products that attract customers. Additionally, they fund infrastructure such as subsea cables, which help transport traffic more efficiently and save costs for internet service providers (ISPs). The speakers argue that content and application providers should be acknowledged for their massive contributions and the positive impact they have on the network infrastructure.

Regulatory frameworks and market evolution were also discussed as important factors in shaping the internet landscape. The speakers suggest that improvements can be made to regulatory frameworks, both in Europe and worldwide, to accommodate new technologies and seize emerging opportunities. They highlight the need for a forward-thinking approach that embraces the positive aspects of the evolving market.

Stakeholder inclusion was another aspect that was emphasised. The speakers argue that all stakeholders, including consumer organisations, civil society organisations, industry, academics, and the technical community, should be invited to speak at internet governance events. This inclusive approach ensures a well-rounded and diverse perspective in decision-making processes.

Evidence-based decision-making was also highlighted as a crucial factor in internet governance. The speakers emphasised the importance of utilising expert analysis from organisations such as BEREC, telecom regulators, the OECD, and the German Monopolies Commission, among others. This approach promotes informed decision-making that considers the implications and potential challenges related to internet governance.

In conclusion, the analysis highlights the need to enhance the open internet, extend connectivity to remote areas, recognise the contributions of content and application providers, improve regulatory frameworks, embrace market evolution, foster stakeholder inclusion, and prioritise evidence-based decision-making. These actions will ultimately contribute to a more accessible, innovative, and inclusive internet ecosystem.

Luca Belli

The analysis examines three perspectives on zero rating and the increase in internet traffic. The first perspective asserts that zero rating is less common in the global north, but prevalent in the global south. In the global south, large platforms have been subsidised through zero rating for the past 10 years, resulting in these platforms generating most of the internet traffic. This prevalence of zero rating has created a new kind of poverty known as “data poverty,” whereby users quickly exhaust their data allowances, similar to running out of money. This perspective presents a negative sentiment towards zero rating and its impact on internet access and digital rights, thereby emphasising the need for fair share.

The second perspective critically examines operators who claim to promote fair share. It argues that these operators are responsible for implementing business models that have led to the exponential increase in internet traffic. Therefore, their assertion of fair share appears self-serving and contradictory to their own actions. This viewpoint highlights the negative consequences of these business models and expresses a critical sentiment towards operators’ claims of fair share.

The third perspective focuses on the shift in telecom operators’ perspectives on increasing internet traffic. It points out that, until the pandemic, telecom operators, especially in countries like Germany, encouraged high video consumption through schemes like BingeOn. However, it is now intriguing that these very operators consider the increase in traffic problematic. This observation indicates a change in their perception and raises questions about their motivations and inconsistencies in their approach.

Overall, the analysis emphasises the negative impact of zero rating on internet access and digital rights, highlighting disparities between the global north and south. It also critiques operators for claiming fair share while implementing business models that contribute to the surge in traffic. The shifting perspectives of telecom operators further highlight the need to scrutinise their motives and actions. These insights underscore the importance of addressing the issue of zero rating, promoting responsible consumption and production, and reducing inequalities in global internet access.
