Report reveals X’s persistent failure to remove ‘extreme hate speech’ posts
The study examined 300 flagged posts, all clear violations of X’s hate speech policies, and found that an alarming 86% remained on the platform a week after they were reported to moderators.
A recent report by the Center for Countering Digital Hate (CCDH) reveals that X (formerly known as Twitter) failed to remove the vast majority of posts flagged for ‘extreme hate speech.’ The report examined 300 posts, all of which violated X’s hate speech policies; 86% of them were still on the platform a week after they were reported to moderators. The flagged content included racist caricatures, Holocaust denial, and Nazi imagery such as the swastika.
X CEO Linda Yaccarino had previously claimed that the platform was now ‘healthier and safer,’ contradicting third-party researchers who argue that X has become more toxic since Elon Musk took over.
X defended its content moderation practices, accusing the CCDH of making false claims and of misrepresenting how much exposure the flagged posts received.
Why does it matter?
Musk’s strong reactions, including a lawsuit against the CCDH over a report released in June and threats of a defamation suit against the Anti-Defamation League, suggest that independent reports are unlikely to sway X’s content moderation policy. This is worrisome, as such reports are crucial for external oversight and can drive public demand for improvements. These conflicts also raise concerns about transparency and trust in the platform’s operations.