Tackling violent extremism online: New human rights challenges for states and businesses
19 Dec 2017 16:15h - 17:15h
Event report
The session was moderated by Ms Peggy Hicks, Office of the High Commissioner for Human Rights, who opened the panel by noting the interest in exploring ‘the interface between human rights and digital space’, in the quest to build upon existing human rights frameworks, address hate speech, and develop a robust plan of action.
Mr David Kaye, UN Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, first argued that there is a general problem with the term ‘extremism’ when addressing these issues. There is no clear definition of extremism in human rights law, and it is thus difficult to draw the line between extremist and non-extremist content. In certain countries, journalism or minority expression can, for instance, be associated with extremism. Kaye also referred to the Joint Declaration on Freedom of Expression issued in March 2017, which identifies human rights standards that have to be taken into account when dealing with disinformation and propaganda online. He also noted that major challenges to the rule of law occur everywhere, from western democratic countries to repressive countries around the world, and in this sense, they may best be addressed by the platforms themselves.
Ms Chinmayi Arun, Executive Director, Centre for Communication Governance, India, then argued that the crucial distinction between extremist content and legitimate political expression is drawn at national and local levels, as exemplified by recent events in India. Mentioning the role of platforms in controlling content, Arun insisted that more information must be made available showing how these automated tools are designed and what their actual impacts are.
Mr Brett Solomon, Executive Director of Access Now, argued that we need to know more about the impact of taking down content online. Otherwise, such measures could lead us to enable a massive monitoring regime, combined with a censorship infrastructure. Indeed, removing content requires both surveillance or monitoring, and censorship. Currently, take-down measures are mostly carried out without the necessary judicial oversight and appropriate regulation. The infrastructure designed by governments and private companies to remove content could have long-term consequences for our societies, especially in light of the rise of artificial intelligence. Transparency is also an important issue when it comes to removing content online. Access Now has developed a transparency reporting index in order to identify private companies’ practices in this respect.
Ms Fiona Asonga, CEO, Technology Service Providers Association of Kenya, then recounted her experience of dealing with extremist content online in Kenya. Asonga insisted that there needs to be a balance between take-down measures and the right to freedom of expression. The recent security and political situation in Kenya has led private actors to change their practices for dealing with violence and terrorism online. Businesses are now directly engaging with government, military, and intelligence agencies, as well as Internet platforms, in order to address extremism online.
Finally, Ms Alexandria Walden, Public Policy and Government Relations Counsel, Google, presented Google’s approach to dealing with problematic content online and freedom of expression. The scale of the content uploaded to Google services requires the company to develop creative ways to deal with this challenge. Although most users employ Google services for legitimate purposes, a few may also use them for nefarious ones. Regarding terrorism, Google has engaged with the industry to address this issue, in particular as part of the Global Internet Forum to Counter Terrorism and the Global Network Initiative (GNI). As one indicator, less than 1% of content was removed last year after being flagged as terrorist propaganda. Google relies on both automated systems and humans to identify and flag content online. To increase transparency, Google recently promised that in 2018 more details would be made available regarding the flagging process used to remove content.
A vibrant discussion then continued in small groups between the audience and the panellists, exploring the nuances of the points addressed.
By Clément Perarnaud