Meta’s Oversight Board reviews Holocaust denial case on Instagram
The post, which denied the Holocaust and targeted specific audiences, remained on the platform despite policy updates.
Meta’s Oversight Board has initiated a review of a case involving Holocaust denial content on Instagram. The post, which denied the Holocaust and was targeted at specific audiences, stayed up despite updates to Meta’s content policies. It originally surfaced in September 2020 on an account with around 9,000 followers and garnered about 1,000 views.
On its transparency page, Meta admitted that the content had mistakenly been left up after the initial review; it later acknowledged that the post violated its hate speech policies and removed it. The Board is now seeking public input on the use of automation to combat hate speech and on the transparency of Meta’s reporting.
Although Meta has since removed the content, the Oversight Board’s recommendations could reshape how Meta employs automation for content moderation across its platforms.
Why does it matter?
This case underscores the persistent challenges online platforms face in enforcing their content policies. Despite policy updates, the fact that this Holocaust denial content remained on the platform for an extended period raises questions about the effectiveness of content moderation systems, particularly those that rely heavily on automation to detect and remove hate speech. In such situations, Meta’s Oversight Board serves as a crucial external check, allowing users to appeal content moderation decisions and seek a second opinion.