IGF Daily Summary for
Thursday, 1 December 2022
Approaches, arguments, and analysis continue to develop around the main themes of the IGF, ranging from AI governance and cybersecurity to internet fragmentation and digital development, among others.
In this issue of the IGF Daily, you can visit an online exhibition of AfroFuturism that combines traditional African motifs with science fiction and technology. You can also try to solve the IGF Crossword Puzzle on digital developments and internet governance.
Enjoy the concluding day of the IGF 2022!
Digital Watch team
Issues Discussed
Internet fragmentation
The term splinternet emerged in discussions on internet fragmentation. The session on balancing digital sovereignty and the splinternet focused on the impact of these processes on the internet infrastructure. So far, there have not been significant open pushes for major changes to the core internet protocol suite (TCP/IP). If an alternative to the current internet protocols were introduced and widely adopted, it would signal a major shift that would mark the end of the current global internet and lead to the emergence of new, parallel architectures.
Internet fragmentation could also be triggered, to some extent, by regulations related to cybersecurity and content policy, if such regulations create national obligations that are incompatible with the global nature of critical internet resources.
The risks of internet fragmentation are most likely to arise in controversies about content and data. The filtering and blocking of certain content in some jurisdictions, as well as different approaches to data sovereignty, will increase the risk of weakening the global internet.
Governance of artificial intelligence
Day 3 discussions on AI focused on two aspects: the role of AI in the Global South and AI certification.
Three sessions tackled AI in the Global South. The session on designing an AI ethical framework in the Global South brought into focus AI regulatory initiatives in Brazil, China, and Chile. Africa is lagging behind when it comes to the development of AI policies and regulations, with some notable exceptions, such as Mauritius’ national AI strategy. The AU’s Malabo Convention could serve as a building block for regulating the protection of personal data, one of the main sources for AI development; this regional instrument requires one more ratification to enter into force. The successful public campaign on net neutrality in India in 2015 was mentioned as an inspiration for grassroots campaigns in the Global South on questions of AI governance.
The session on the need for fundamental regulation for the Global South argued that the interests of developing countries could be better protected if AI is considered a digital public good.
Beyond regulatory issues, the development of AI technologies in Africa has been taking off. There are examples of homegrown AI technologies that cater to the 92 languages spoken in Ethiopia. But when it comes to AI systems developed by big tech companies, societies in the Global South are concerned about the risk of biased data and a lack of understanding of the ethical and cultural context in which these AI systems are deployed.
Discussion in the session Global AI governance for sustainable development argued that the potential AI-driven growth of productivity and the economy is not equivalent to sustainable development. Moreover, the benefits from AI won’t be fairly shared with developing countries, which will most likely experience a negative impact through the loss of jobs as industrial production becomes automated by AI.
Another problem is that most current AI development focuses on very specific sectors, such as agriculture, transport, or water systems, while very little attention is paid to the holistic impact of AI on society as a whole, including jobs and the environment.
The session on assurance and certification of emerging digital technologies argued that AI certification is the next step in applying AI ethical principles and policies to the use and deployment of AI platforms. Countries have started establishing AI institutes focusing on building certification programmes for different types of AI systems and other emerging technologies.
This emerging AI certification system faces many challenges, including the need to keep up with the fast pace of technological evolution and the shortage of skilled assessment professionals. As AI technology is deployed worldwide, AI certification should be internationalised to reflect the different ethical, cultural, and societal contexts that AI will strongly impact.
A quality safeguarding mechanism is key to building public confidence and security in emerging technologies. Due to the international nature of most digital service provision, best practices for the assurance and conformity assessment of digital services depend on global and regional cooperation.
Addressing cyberattacks
The cyberthreat landscape is increasingly complex, and good cyber defenders are needed. Cyber capacity development is now a priority on the international cooperation agenda. But at the national level, there is an overall lack of impetus from government institutions on cyber capacity building, a low number of cybersecurity courses at university level (sometimes with outdated materials), and an inability of recent graduates to get cybersecurity jobs because they lack experience.
A capacity development approach connecting industries and educational institutions ensures that there is no supply-demand mismatch. It was also noted that workforce development strategies should be country-specific. The need for cybersecurity personnel varies depending on the country’s industrialisation and digitalisation levels.
What we often neglect when a cyberattack occurs is its societal harm and impact. There is an increasing need to develop a harm methodology with quantitative and qualitative indicators to document the harm of cyberattacks on people, communities, and societies.
We need a taxonomy of cyber harm where all stakeholders can contribute to inform the next steps in developing effective legislation, push the private sector to increase security standards, and inform civil society how to help victims. Measuring harm needs to be part of a bigger project with all parties involved where silos are broken: governments introducing new legislation, the private sector creating new security standards, and civil society supporting victims.
Day 3 discussions also touched on the role of parliaments in addressing cyberattacks, noting that parliamentarians can act as a link between high-level conversations and the other stakeholders involved in addressing cyberthreats. Concerning the role of other stakeholders, civil society can collaborate with parliaments to ensure accountability and oversight. Civil society and the private sector were encouraged to see parliamentarians as a link to get their voices heard.
Meaningful connectivity and a safe internet
There’s so much more to meaningful connectivity than only internet access. Access to the internet won’t mean much if a user’s device is outdated or if a regular subscription is prohibitively expensive. Users who don’t speak English – widely considered the internet’s lingua franca – won’t find much value in an internet which rarely speaks their language.
Meaningful connectivity, which refers to all those aspects that users require to experience the internet in a valuable and empowering way, was a major reference point for today’s main session on connectivity and human rights.
Several solutions were proposed; the most frequently reiterated is the need to narrow – or close – the digital divide. Concerning people with disabilities, there is a need for stronger awareness of the importance of developing products and services that are fully inclusive.
Governments are no exception: Authorities developing e-government services, for instance, have plenty of privacy and data protection aspects to consider to foster meaningful connectivity.
The main session set the scene for other discussions on human rights, such as ensuring that online spaces are safe and inclusive while at the same time upholding and protecting people’s human rights.
Connectivity, safety, and free speech are all protected by international human rights principles and systems. Yet, there’s a significant gap in the implementation of these laws. For instance, internet shutdowns don’t happen without context. They happen just before elections, amid conflicts, when people are protesting on the streets, etc. The solution is to uphold human rights. It’s through respect for human rights that the internet can become safer and more connected.
Some regions are experiencing a surge in online violence, particularly gender-based violence. Journalists are also increasingly subjected to doxing (personal data inappropriately released online) and digital surveillance. Speaking of surveillance, experts say the COVID-19 pandemic provided an entry point for invasive government surveillance to become normalised.
It’s time to reassess the legality, necessity, and proportionality of the measures and technology introduced to fight the pandemic, and to take stock of the lessons learned. In this way, governments, businesses, civil society, and the entire world will be better prepared for the next global emergency.
Internet users who fear for their safety, including human rights defenders and vulnerable communities, often depend on encrypted communications. Encryption can keep people safe not only online but in the physical world. So how do we reconcile users’ need to use encryption to protect themselves with law enforcement’s need to access communications as part of investigations?
If we’re pitting privacy and security against each other, that’s a false binary, experts warn. The two are mutually reinforcing, and one cannot meaningfully exist without the other. If a platform introduces the slightest possibility of circumventing encryption, it loses both its security and its privacy features. They say there are other ways of identifying perpetrators, preventing crime, and keeping people safe.
Regulatory harmonisation
Data that flows freely across borders can foster innovation, competitiveness, and economic growth. But it also brings challenges, for instance, in terms of personal data protection or the protection of national economic interests. Discussing balances and trade-offs between digital sovereignty and the harmonisation of regulatory approaches and between business interests and human rights protection, the session on whether to regulate or not to regulate digital spaces pointed out several issues that need to be considered:
Countries do not have the same starting point when it comes to data regulation and developing countries are often put at a disadvantage.
There is a need for agile regulatory systems to allow for rapid technology development while providing consumer protection.
Active public participation is indispensable to achieving effective regulatory frameworks.
Another challenge related to data flows is access to digital evidence. Crime investigators and prosecutors depend on access to data that is frequently located in other jurisdictions or requires the involvement of private actors.
However, crime investigation is still primarily a national activity, with non-agile mechanisms for processing data evidence requests from other countries. Traditional mutual legal assistance treaties (MLATs) were designed with sovereignty at the forefront and do not fare well in situations where the only foreign element in the investigation is data location.
Countries are putting legal solutions to this issue in place, in many cases with unknown extraterritorial effects and insufficient interoperability mechanisms. Those countries that do not have legal frameworks for cross-border digital evidence usually resort to data localisation restrictions.
The solution to this issue should be an interoperable and efficient legal framework that protects the rights of individuals, such as the rights to privacy and due process.
Whole of society approaches to connecting the unconnected
Like the day before, the Day 3 sessions on development issues addressed connectivity gaps and proposed alternative ways of connecting disadvantaged communities. The Internet Backpack project was presented as a complementary solution, capable of providing sustainable connectivity across 95% of the Earth’s territory.
Spectrum allocation is seen as an essential element in promoting connectivity. However, the mobile telecoms sector usually comprises only a few players, and in rural communities little or no spectrum segmentation is available to serve small internet providers.
It is, therefore, necessary to ensure that policymakers recognise the value of small operators, such as community networks, and formulate timely policies to assist them. An infrastructure built by the community itself should not be seen as competing with big telecommunications operators but as benefiting the community. For example, in Ghana, the National IT Agency manages its spectrum allocations through so-called Enhanced Community Centres, digital hubs that provide last-mile connectivity without charge to villages.
Policymakers and regulators are encouraged to look outside traditional regulatory frameworks to avoid the exclusion of marginalised groups, concluded the session on Policy network: Meaningful access.
Whole of society responses to the lack of connectivity and other challenges of the digital age were highlighted in the session on Strengthening African voices in global digital policy. The role of communities of practice was noted, emphasising that they can ensure a stronger representation of African interests in global digital discussions.
While training is essential as a starting point, sustainable impact is created through institutions within the African Union and regional economic communities, national governments, and universities. Strong African diaspora communities, especially at universities worldwide, are seen as untapped potential.
Towards universal internet principles?
Discussed on Day 3 as well, the Declaration for the Future of the Internet (DFI) sparked another debate between the representatives of countries that have signed the declaration and those that have not. There are several reasons why countries might decide not to join the declaration – refraining from signing a document that one did not negotiate was the most frequently cited.
Initiators of the declaration underlined that the DFI was conceived as a shared positive vision for the future of the internet to counteract a rising trend of digital authoritarianism. The declaration says that civil society, the private sector, the technical community, academia, and other interested parties have a role to play in getting more states to follow these principles and holding states accountable for them.
In times of crisis, it’s even more important to adhere to commonly agreed rules for content and platform governance. A major contribution in this regard is the Declaration of principles for content and platform governance in times of crisis, launched by Access Now during the IGF 2022. The session recognised the challenge of ad hoc responses when a crisis escalates or when there is ongoing public and political pressure on platforms to react.