Unprecedented surge in hate speech against Palestinians on digital platforms
PICTURE: Faheem Ahamad/Pexels
As the genocidal violence in Gaza continued throughout 2024, a second layer of digital violence emerged in the form of relentless hateful and inflammatory online content targeting Palestinians.
This digital violence mirrored the physical violence on the ground, with both forms feeding into one another. Calls for increased bombardment and killing circulated widely online, while Israeli soldiers frequently uploaded videos of their acts of violence against Palestinians to social media, further fueling the harm and dehumanisation of Palestinians.
The surge in violent content in Hebrew on social media has been significant, though systematic hate speech against Palestinians predates October 2023. Between January and September 2023, 7amleh detected approximately 7 million instances of violent Hebrew content targeting Palestinians across various platforms, with Facebook alone accounting for 24.57%.
A 2021 report by Business for Social Responsibility (BSR) found that Meta lacked functional classifiers for hate speech and violent content in Hebrew, leading to under-enforcement of its policies. Although Meta claimed to be working on Hebrew-language classifiers in 2022, internal company documents that came to light in October 2023 showed that those classifiers were not effective.
Since October 2023, violent Hebrew content targeting Palestinians has surged, as revealed by 7amleh’s documentation, exposing the continued failure of Meta’s moderation efforts.
Against this history, Meta recently announced significant changes to its content moderation approach, stating that its filtering systems will now focus only on what it classifies as ‘high-severity’ violent content. This shift means that many forms of violent content previously moderated will now remain on its platforms.
While the full impact of this change is unclear, it appears likely to allow even more unchecked proliferation of violent content targeting Palestinians online.
While Meta’s changes are concerning, X remains the primary challenge when it comes to violent content against Palestinians. This is particularly alarming because, despite a smaller Israeli presence on X than on Meta’s platforms, the volume of violent content is significantly higher. Moreover, this large volume of violent posts originates from a relatively small number of users.
Meanwhile, at the local political level, Israeli authorities continue to detain and repress Palestinian citizens of Israel for merely posting or expressing solidarity with Palestinians in Gaza. In contrast, Israeli Knesset members, government ministers, and politicians openly lead hate and incitement campaigns against Palestinians without any legal consequences, further highlighting the double standards in enforcement both online and offline.
Over the last year, 12 482 041 pieces of content in Hebrew were identified as violent or hateful, equating to 23.6 instances per minute, with interactions totalling 187 226 176 across platforms.
This staggering volume reflects the systemic nature of online hate, exacerbated by significant real-world events, particularly the devastating genocide in Gaza. The tragedy not only intensified the conflict but also fuelled an alarming surge in incitement and hate speech, creating fertile ground for inflammatory rhetoric that seeks to dehumanise Palestinians and legitimise the Israeli attacks.
The Racism & Incitement Index 2024, published by 7amleh, the Arab Center for the Advancement of Social Media, provides a detailed examination of trends in hateful and violent Hebrew content targeting Palestinians on social media, focusing on Facebook and X.
Hateful and violent content includes statements or communications that create an imminent risk of discrimination, hostility, or violence by directly calling for action, expressing intent, advocating violence, hoping for or aspiring to harm, or conveying approval, encouragement, glorification of, or identification with acts of violence.
‘Violence’ is broadly defined as ‘actions that result in harm, injury, or damage to individuals, groups, or property’. It can also take the form of hate speech without any direct call for violence: any discriminatory or pejorative verbal or written discourse, communication, or content that expresses, encourages, inflames, or incites hatred against people or groups based on inherent characteristics or specific aspects of their identity such as gender, race, colour, nationality, religion, origin, or political opinion.
In doing so, hate speech creates an environment of violence and social, political, and cultural rifts.
The report categorises hate based on motives such as political, racial, religious, and gender-based biases. It reveals a disproportionate targeting of Palestinians, but also specific religious communities, particularly Muslims and Christians, alongside heightened hostility toward Jerusalemites and gloating over victims in Palestinian towns in Israel.
Temporal analysis shows spikes in hateful content during key moments of escalation in Gaza, indicating a direct correlation between violence on the ground and its reflection in digital discourse.
Through an in-depth analysis of temporal trends, content, and thematic focuses, the report unveils the pervasive impact of digital violence on marginalized communities.
By exposing these patterns, the index aims to empower stakeholders, policymakers, and advocacy groups with actionable insights to address online hate effectively, promote accountability, and advocate for safer digital spaces.
The data for this report was collected from multiple social media platforms using tailored methods to identify and analyse hateful and violent content. The approach varied by platform to ensure the most effective data-gathering process.
For X, a carefully selected list of hateful and violent keywords and hashtags was used to track and collect relevant posts. For Facebook, the process began with compiling a list of hundreds of various pages. Posts from these pages were gathered, and the dataset was expanded to include the comments associated with these posts, offering a broader view of interactions and discussions.
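The keyword-based collection step for X could be sketched roughly as follows. This is a minimal illustration only: the placeholder terms, post fields, and function names below are assumptions for the sketch, not the report’s actual (non-public) Hebrew keyword and hashtag list.

```python
# Illustrative placeholder terms only; the report's actual Hebrew keyword
# and hashtag list is not public and is not reproduced here.
KEYWORDS = ["flatten", "erase", "#no_mercy"]

def matches_keywords(text: str, keywords=KEYWORDS) -> bool:
    """Return True if the post text contains any tracked keyword or hashtag."""
    lowered = text.lower()
    return any(kw.lower() in lowered for kw in keywords)

def collect(posts, keywords=KEYWORDS):
    """Filter a stream of posts down to those matching the tracked terms."""
    return [p for p in posts if matches_keywords(p["text"], keywords)]

sample = [
    {"id": 1, "text": "Ordinary post about the weather"},
    {"id": 2, "text": "Post calling to erase a town #no_mercy"},
]
print([p["id"] for p in collect(sample)])  # → [2]
```

In practice this filtering would run against posts retrieved from the platform’s API; matching by substring, as here, trades precision for recall, which is one reason a classification stage follows.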
Once collected, the data was analysed using a combination of deep learning classifiers and large language models (LLMs). Our internal classifiers were used to determine whether a piece of text contained hate speech and to identify the motives of violence present.
Additionally, advanced LLMs, similar to GPT models, were employed to classify content into more nuanced categories beyond hate speech, providing deeper insights into the nature of the discourse.
The numbers presented in the report are the result of automated predictions. While our system is generally reliable, it operates with an estimated accuracy rate of 84%, meaning some margin of error exists.
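As a rough back-of-envelope illustration of what an 84% accuracy rate implies for the headline figure, under the simplifying assumption that errors are distributed uniformly across predictions:

```python
TOTAL_FLAGGED = 12_482_041  # items identified as violent or hateful (from the report)
ACCURACY = 0.84             # estimated accuracy of the automated predictions

# Simplifying assumption: errors are uniform across predictions, so roughly
# this many of the flagged items could be misclassified either way.
expected_errors = round(TOTAL_FLAGGED * (1 - ACCURACY))
print(f"{expected_errors:,}")  # roughly 2.0 million
```

The true error structure (false positives versus false negatives, variation by platform and category) is not captured by a single accuracy figure, so this is an order-of-magnitude illustration only.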
As of 28 January 2024, changes in X’s data access policies required us to adjust our data collection methods. Any fluctuations in the number of recorded cases since this date should not be interpreted as a decrease in hate speech or violent content but rather as a result of these policy changes.
Recommendations:
- Online platforms must enhance content moderation systems to accurately detect and remove violent Hebrew content, ensuring consistent enforcement across all languages, and
- Platforms should urgently address the proliferation of violent Hebrew content, implementing targeted measures to curb its spread and prevent further harm.
Meta and X must:
- conduct independent, publicly available human rights impact assessments to evaluate and mitigate platform-related harms
- allocate sufficient resources, including Hebrew-language expertise, to strengthen moderation and enforcement mechanisms
- establish clear, transparent, and timely mechanisms for addressing digital rights violations reported by civil society organisations
- engage in regular, structured dialogue with Palestinian civil society and affected communities to assess and mitigate platform-related harms
- protect user data privacy, ensuring that personal data is not weaponised against vulnerable populations, including Palestinians, and
- prevent the use of platform technologies in ways that facilitate war crimes, crimes against humanity, or genocide.