Report by CCDH finds that X is failing to curtail online disinformation in the Israel-Hamas conflict
These findings underscore the urgent need for governments, tech companies, and civil society to counter the proliferation of misinformation and hate speech, particularly in the sensitive context of the Israel-Hamas conflict.
A recent investigation by the Center for Countering Digital Hate (CCDH) has found that X left 98% of reported misinformation and hate-fuelled posts related to the Israel-Hamas conflict online. The study, conducted on October 31, identified 200 posts promoting hate speech, including antisemitism, Islamophobia, and anti-Palestinian hate. A week later, 196 of those posts were still live on X, having collectively accumulated over 24 million views.
According to the CCDH report, these posts, all published after Hamas’s October 7 attacks on Israel, originated from 101 X accounts. Only one account was suspended, and two were ‘locked’, rendering them unable to post content until the reported posts were removed. Amid the criticism, X published a blog post announcing its continued efforts to tackle misinformation during the conflict, emphasising its Community Notes feature, which allows users to add contextual notes to posts and rate their helpfulness.
X has a particularly poor track record in tackling misinformation, especially since Elon Musk’s takeover, which brought deep cuts to content moderation staff. The platform currently has, by far, the fewest content moderators of any very large online platform (VLOP) and is largely inactive in taking down or regulating posts from its paid, or Twitter Blue, accounts. Its handling of the Israel-Hamas conflict has also drawn considerable criticism, exposing its inability to tackle misinformation during periods of heightened engagement and crisis.
In response to previous allegations by the CCDH, X has initiated legal action, claiming that the group’s past research involved unethical practices, including the use of unlawfully scraped Twitter data. The lawsuit is strategically positioned to counter criticism by ‘essentially saying that research is tortious interference’, said Imran Ahmed, CEO of CCDH.
Why does it matter?
The issue of misinformation and hate speech extends beyond X, with other social media platforms, such as TikTok and Instagram, also contributing to the spread of false claims and misrepresented content related to the Israel-Hamas conflict. Calls for heightened content moderation on social media platforms have emerged in response to the surge in misinformation and hate speech. However, the effectiveness of such measures remains uncertain, as the rapid dissemination of false information often outpaces fact-checking efforts and content removal policies.