The interaction of platform content moderation and geopolitics
13 Nov 2020 12:20h - 13:50h
Event report
Ms Jyoti Panday (Researcher, Internet Governance Project, Georgia Institute of Technology) led the workshop and discussed how transnational social media companies are becoming more influential in public affairs even though they are not elected bodies. Overall, the workshop addressed the uneven enforcement of community standards related to content moderation, cases pursued by law enforcement, and the erosion of user trust caused by account suspensions, a lack of procedural transparency, and algorithmic bias.
Mr Pratik Sinha (Co-Founder, Alt News, India) claimed that countries like India, Pakistan, and Bangladesh are not a priority for companies like Facebook, Google, and Twitter. He criticised Facebook's decision, and its rationale, to refrain from fact-checking politicians on the grounds that they are already subject to ample public scrutiny; this reasoning does not hold in countries with limited press freedom. ‘If your view of politics is one of those sitting in Palo Alto or wherever these policymakers are sitting and there is not enough of local views coming in on how their decisions impact the democracy in different countries, then we are getting in the situation where countries like India are going from bad to worse.’
Ms Marianne Díaz (Public Policy Analyst, Derechos Digitales) echoed that Latin America is not a priority for platforms either, and that there are challenges with both ‘over-moderation’ and ‘under-moderation’. She complained that users in Latin America cannot flag content as misinformation, and cited the example of the ‘Election2020’ hashtag, which was used only for the US elections, as if no other elections were occurring around the world. Platforms lack the resources to implement fact-checking anywhere other than five key countries. Díaz believed it is mainly an economic issue because it is tied to the platforms' economic model. ‘The platforms are trying to look for a certain homeostasis in the platform, which requires that enough content is left there for people to engage, but also that some content is taken down, even though it should be protected by freedom of speech standards, just because it makes people uncomfortable.’
Panday noted that when platforms choose to remain ‘hands off’ and maintain that they cannot intervene, governments can step in and regulate them.
Ms Amélie Pia Heldt (Junior Researcher, Leibniz Institute for Media Research, Hans-Bredow-Institut) confirmed that there is a very US-centric approach to regulation, grounded in the US constitutional right to free speech, which Congress is either unable or unwilling to restrict. Europe, by comparison, is not constrained by the same constitutional framework and has speech-restricting laws derived from criminal law. Secondly, Heldt pointed out that private actors can become the functional equivalent of state actors in certain situations. She reminded the audience of the 2017 German Network Enforcement Act (NetzDG), which makes it mandatory for companies or platforms to maintain a tool allowing users to flag content that is forbidden by the law. However, she is sceptical about the effectiveness of the law. One positive result, she posited, is that platforms now have many more content moderators and people trained to understand German law and ascertain whether it has been infringed.
Mr Varun Reddy (Production Engineer, Facebook, Asia Pacific) focused on transparency around the enforcement of Facebook's content policies and standards, and how this works in the local context when it comes to both policy development and enforcement. Facebook developed its community standards together with experts worldwide in fields such as law, human rights, public safety, and technology. As an example of the localisation of content policy, Reddy pointed to the expansion of the hate speech policy in India to include caste, in addition to race, nationality, and ethnicity. Finally, he mentioned that Facebook relies heavily on AI and machine learning systems because they are quite good at detecting nudity and graphic violence. However, when it comes to hate speech, bullying, and harassment, humans analyse the content in question because context plays an important role.
Mr Tarleton Gillespie (Principal Researcher, Microsoft Research) made several observations about the current state of platforms. First, platforms are in their adolescent years, which is why we are experiencing issues with content moderation. Second, most of the platforms we are concerned with were created in the USA and began with the US political framework as their foundation, making them sensitive only to a limited set of contexts regarding violence and terrorism. Third, over the last 10 to 15 years, these platforms have grown in scale and complexity and have arrived at a kind of industrial approach to content moderation: tens of thousands of front-line moderators, identification software running constantly, and oversight by policy teams that mostly deal with rule changes, emerging problems, and public backlash.
The final remarks came from Mr Urvan Parfentyev (RAEC, ROCIT, Russia). ROCIT runs a hotline that addresses complaints about different types of content, and it co-operates with hotlines worldwide, including in Europe and the USA. Parfentyev pointed out that there are no common standards for social networks; instead, local standards are applied that sometimes contradict each other. He suggested thinking about the global governance of this issue, for example in the form of a UN convention covering basic issues such as how content has to be treated and basic requirements for Internet platforms in the realm of content regulation.