People vs machines: Collaborative content moderation
9 Nov 2020 18:20h - 19:50h
Event report
This session explored the complexities of content moderation at scale and the potential implications of content moderation for trust in the Internet. Mr Jan Gerlach (Wikimedia Foundation) and Ms Anna Mazgal (EU Policy Advisor, Wikimedia Foundation) explained that the conversation focused in particular on harmful or potentially illegal content, spanning misinformation, incitement to violence, and terrorist content.
Mr Robert Faris (Civil Society, Western European and Others Group [WEOG]) summarised a study on how English-language Wikipedia moderates harmful speech. The study concludes that Wikipedia does fairly well in removing the majority of harmful content from the platform, but does better on Article pages than on Talk and User pages. The study focused on three principal types of harmful speech: harassment, identity-based attacks, and physical threats or incitement to violence. The main focus was on how well a combination of humans and machines does in removing harmful content. An important part of the Wikipedia model is keeping community discussions alive, and the principal challenge is determining what is acceptable from a community standpoint. Automated tools are helpful and complement and support editor activities, but primarily in the Article space. The human component remains essential.
Also speaking about the relationship between machine treatment of content and human rights online in the era of content moderation was Ms Marwa Fatafta (Civil Society, Asia-Pacific Group). According to Fatafta, the central question of content governance is how to bring users’ fundamental rights into the debate and how to protect freedom of expression, assembly, and association. Access Now’s report on content governance provides recommendations on the issue for lawmakers, regulators, and company policymakers. Content governance can rest on state regulation, self-regulation by platforms, or co-regulation between the two. Governments risk rushing legislation that is disproportionate. Self-regulation by platforms is often unilateral and lacks transparency and remedies. Companies’ moderation remains opaque, especially regarding automated decision-making. Furthermore, local context is often disregarded by giant companies, resulting in false positives and arbitrary and discriminatory decisions. Unilateral and automated decisions also erode trust and deepen the imbalance of power between the platform and the user. She agreed with Faris that human review mechanisms are crucial for understanding context, especially in non-English settings.
That users play a large role in content moderation was echoed by Ms Mercedes Mateo Diaz (Intergovernmental Organization, Latin American and Caribbean Group [GRULAC]). The opposition of people versus machines raises the broader question of user behaviour. Diaz explained that research in the world of video games showed that 95% of antisocial behaviour comes from the 99% of ordinary users, and only 5% from the trolls (the remaining 1%). ‘We need to educate students to become digital, civil, global citizens’, she said, adding that ‘the combination between banning abusive players and giving them immediate feedback improved the behavior of about 92% of toxic players and this is significant’. Ensuring good content moderation means strengthening media and digital literacy and building skills like collaboration, empathy, creativity, ethics, global citizenship, and critical thinking. Human review mechanisms should rest with humans who have these skills. She agreed with Fatafta and Faris that platform rules need to be good, but content is ultimately the responsibility of the user, and we should not let machines define the relationships between humans in a virtual space.
Ethical engagement with information and valuable media content were highlighted by Ms Mira Milošević (Civil Society, WEOG). The global pandemic has shown that governments are quick to counter harmful content when disinformation spreads, but little investment has been made in what we would define as ethical, trustworthy, and credible content, especially from a journalism perspective. The increase in take-downs has prompted projects like #MissingVoices, which document wrongfully removed content. Journalism interacts with moderation at several levels, including moderation of single articles, content curation, and content monetisation. Monetisation via likes, shares, and subscribers is crucial in some countries, and the current architecture, together with advertising-based business models, shows increasingly negative trends. Digital rights activists, journalists, media organisations, and their staff are in many cases treated as terrorists for spreading what is sometimes defined as terrorist content. On the other hand, advertisers are shying away from serious journalistic content. ‘We need to look at the economic side and market power balance,’ Milošević said.
Panellists agreed that content moderation is about rethinking the collective online space; going forward, we should have clear quality standards, skills, and a new economic incentive model. A mix of technological and community-based approaches can be somewhat successful, as shown by Wikipedia. Faris noted that the Wikipedia volunteer-based model is not feasible for every platform, and that a balance is needed in sharing responsibility between governments, users, and, perhaps, platform owners.