Meta's takedowns of content about the Israel-Hamas conflict rise sevenfold
Meta steps up content moderation amid the Israel-Hamas conflict to curb misinformation, disinformation, and hate speech.
Since the October 7 attacks by Hamas on Israel, Meta has removed over 700,000 pieces of content deemed to violate its established policies. According to the company, posts published on Facebook and Instagram by Hamas, which several Western governments have designated a terrorist organisation, contravene its Dangerous Organisations and Individuals Policy. Although the group itself is banned from posting on these platforms, Meta continues to remove hundreds of thousands of posts praising or supporting Hamas. In the three days following the October 7 attack, such removals approached 800,000, seven times the removal rate of the preceding two months.
Meta has also ramped up its defences with:
- A special operations centre staffed with linguists fluent in Arabic and Hebrew.
- Stronger measures to prevent the posting of violent content in the first place.
- An expansion of its Violence and Incitement Policy to prioritise the safety of hostages.
- Restricted access to specific hashtags associated with violent content.
- The blurring of victims' faces in images posted on its platforms.
Other platforms, such as X, took similar action in the wake of calls for greater platform moderation from the European Union, the Arab Centre for the Advancement of Social Media, and the Anti-Defamation League.
Conversely, the Association for Progressive Communications decried the approach as heavy-handed censorship of Palestinians' speech.
Why does it matter?
Meta’s swift and decisive action in this crisis stands out, given the pummelling the company took for its role in amplifying disinformation, misinformation, and hate speech during earlier offline conflicts.