Oversight Board finds Facebook ‘From the River to the Sea’ content did not break Meta’s rules
The Meta Oversight Board reviews Meta’s content decisions to see if the company acted in line with its own policies, values and human rights commitments. The Board can choose to overturn or uphold Meta’s decision.
In reviewing three cases involving different pieces of Facebook content containing the phrase ‘From the River to the Sea’, the Board today found they did not break Meta’s rules on Hate Speech, Violence and Incitement, or Dangerous Organizations and Individuals.
Specifically, the three pieces of content contain contextual signs of solidarity with Palestinians – but no language calling for violence or exclusion. They also do not glorify or even refer to Hamas, an organisation designated as dangerous by Meta.
In upholding Meta’s decisions to keep up the content, the majority of the Board notes the phrase has multiple meanings and is used by people in various ways and with different intentions. A minority, however, believes that because the phrase appears in the 2017 Hamas charter and given the October 7 attacks, its use in a post should be presumed to constitute glorification of a designated entity, unless there are clear signals to the contrary.
These three cases highlight the tension between Meta’s value of voice – the need to protect freedom of expression, particularly political speech during conflict – and its values of safety and dignity, which protect people against intimidation, exclusion and violence.
The ongoing conflict that followed the Hamas terrorist attack of October 2023 and Israel’s subsequent military operations has led to protests globally and to accusations that both sides have violated international law.
Equally relevant, both to these cases and to wider use of ‘From the River to the Sea’ on Meta’s platforms, is the surge in antisemitism and Islamophobia.
These cases have again underscored the importance of data access to effectively assess Meta’s content moderation during conflicts, as well as the need for a method to track the amount of content attacking people based on a protected characteristic.
The Board’s recommendations urge Meta to ensure its new Content Library is an effective replacement for CrowdTangle and to fully implement a recommendation from the BSR Human Rights Due Diligence Report of Meta’s Impacts in Israel and Palestine.
About the Cases
In the first case, a Facebook user commented on a video posted by a different user. The video’s caption encourages others to ‘speak up’ and includes hashtags such as ‘#ceasefire’ and ‘#freepalestine’. The user’s comment includes the phrase ‘FromTheRiverToTheSea’ in hashtag form, additional hashtags such as ‘#DefundIsrael’ and heart emojis in the colors of the Palestinian flag.
Viewed about 3,000 times, the comment was reported by four users, but these reports were automatically closed because Meta’s automated systems did not prioritise them for human review.
The Facebook user in the second case posted what is likely a generated image of floating watermelon slices forming the words of the phrase, alongside ‘Palestine will be free’. Viewed about 8 million times, the post was reported by 937 users. Some of these reports were assessed by human moderators, who found the post did not break Meta’s rules.
For the third case, an administrator of a Facebook page reshared a post by a Canadian community organisation, in which the founding members declared support for the Palestinian people and condemned both their ‘senseless slaughter’ and the ‘Zionist Israeli occupiers’. With fewer than 1,000 views, this post was reported by one user, but the report was automatically closed.
In all three cases, users then appealed to Meta to remove the content, but the appeals were closed without human review following an assessment by one of the company’s automated tools. After Meta upheld its decisions to keep the content on Facebook, the users appealed to the Board.
The unprecedented attacks by Hamas on Israel in October 2023, which killed 1,200 people and saw 240 hostages taken, have been followed by a large-scale military response by Israel in Gaza that had killed over 39,000 people as of July 2024. Both sides have since been accused of violating international law and of committing war crimes and crimes against humanity.
This has generated worldwide debate, much of which has taken place on social media, including Facebook, Instagram and Threads.
Key Findings
The Board finds there is no indication that the comment or the two posts broke Meta’s Hate Speech rules: they do not attack Jewish or Israeli people with calls for violence or exclusion, nor do they attack a concept or institution associated with a protected characteristic in a way that could lead to imminent violence.
Instead, the three pieces of content contain contextual signals of solidarity with Palestinians, in the hashtags, visual representation or statements of support.
On other policies, the content does not break the Violence and Incitement rules, nor does it violate Meta’s Dangerous Organizations and Individuals policy: it contains no threats of violence or other physical harm and does not glorify Hamas or its actions.
In coming to its decision, the majority of the Board notes that the phrase ‘From the River to the Sea’ has multiple meanings. While it can be understood by some as encouraging and legitimising antisemitism and the violent elimination of Israel and its people, it is also often used as a political call for solidarity, equal rights and self-determination of the Palestinian people, and to end the war in Gaza.
Given this fact, and as these cases show, the standalone phrase cannot be understood as a call to violence against a group based on its protected characteristics, as advocacy for the exclusion of a particular group, or as support for a designated entity – Hamas.
The phrase’s use by this terrorist group, with its explicitly violent eliminationist intent and actions, does not make the phrase inherently hateful or violent, given the variety of people who use it in different ways.
It is vital that content posted on Meta’s platforms is analysed as a whole, with factors such as context and the identification of specific risks taken into account.
In the view of the majority of the Board, removing content could have aligned with Meta’s human rights responsibilities had the phrase been accompanied by statements or signals calling for exclusion or violence, or legitimising hate; such removal, however, would be based not on the phrase itself but on those other violating elements.
Because the phrase does not have a single meaning, a blanket ban on content that includes the phrase, a default rule towards removal of such content, or even using it as a signal to trigger enforcement or review, would hinder protected political speech in unacceptable ways.
In contrast, a minority of the Board finds that Meta should adopt a default rule presuming the phrase constitutes glorification of a designated organization, unless there are clear signals the user does not endorse Hamas or the October 7 attacks.
One piece of research commissioned by the Board for these cases relied on the CrowdTangle data analysis tool. Access to platform data is essential for the Board and other external stakeholders to assess the necessity and proportionality of Meta’s content moderation decisions during armed conflicts.
This is why the Board is concerned by Meta’s decision to shut down the tool while questions remain over whether the newer Meta Content Library is an adequate replacement.
Finally, the Board recognises that even with research tools, there is limited ability to effectively assess the extent of the surge in antisemitic, Islamophobic, and racist and hateful content on Meta’s platforms. The Board urges Meta to fully implement a recommendation previously issued by the BSR report to address this.
The Decision
The Oversight Board upholds Meta’s decisions to leave up the content in all three cases.
The Board recommends that Meta:
- Ensure that qualified researchers, civil society organisations and journalists, who previously had access to CrowdTangle, are onboarded to the new Meta Content Library within three weeks of submitting their application
- Ensure its Content Library is a suitable replacement for CrowdTangle, providing equal or greater functionality and data access
- Implement recommendations from the BSR report to develop a mechanism to track the prevalence of content attacking people based on specific protected characteristics (for example, antisemitic, Islamophobic and homophobic content)
The Board and external stakeholders will be in a better position to assess the necessity and proportionality of Meta’s content moderation decisions during ongoing armed conflicts should Meta continue to provide the Board and independent researchers with access to platform data.
In March 2024, Meta announced it would be shutting down CrowdTangle on August 14, 2024. The company explained it would instead focus its resources on ‘new research tools, Meta Content Library & Content Library API’.
While the Board commends Meta for developing new research tools and working to provide greater functionality, it is concerned by the company’s decision to shut down CrowdTangle before these new tools can effectively replace it.
According to an open letter sent by several organisations urging Meta not to discontinue CrowdTangle ‘during a key election year’, there are significant concerns about whether the Meta Content Library provides adequate data access for independent monitoring.
The European Commission has opened formal proceedings against Meta under the Digital Services Act concerning Facebook and Instagram, in part over the decision to shut down its ‘real-time public insights tool CrowdTangle without an adequate replacement’.
The Board echoes concerns raised by these organisations, individuals and the European Commission.
The Board does note that even with CrowdTangle, there are limits to the Board’s and the public’s ability to effectively assess the extent of the surge in antisemitic, Islamophobic, racist and other hateful content on Meta’s platforms, and where and when that surge may be most prominent.
Meta’s transparency reporting is not granular enough to evaluate the extent and nature of hateful content on its platforms.
One of the recommendations issued by BSR in its report (no. 16) was for the company to develop a mechanism to track the prevalence of content attacking people on the basis of specific protected characteristics (for example, antisemitic, Islamophobic or homophobic content).
In September 2023, one year after the BSR report was issued, Meta reported it was still assessing the feasibility of this recommendation.
The Board urges Meta to fully implement this recommendation as soon as possible.
The Oversight Board’s Decision
The Oversight Board upholds Meta’s decisions to leave up the content in all three cases.
Recommendations
1. Meta should ensure that qualified researchers, civil society organisations and journalists, who previously had access to CrowdTangle, are onboarded to the company’s new Content Library within three weeks of submitting their application.
The Board will consider this implemented when Meta provides the Board with a complete list of researchers and organisations that previously had access to CrowdTangle, along with the turnaround times for onboarding them to the Meta Content Library, at least 75% of which should be three weeks or less.
2. Meta should ensure the Meta Content Library is a suitable replacement for CrowdTangle, providing equal or greater functionality and data access.
The Board will consider this implemented when a survey of a representative sample of onboarded researchers, civil society organisations and journalists shows that at least 75% believe they are able to reasonably continue, reproduce or conduct new research of public interest, using the Meta Content Library.
3. Meta should implement recommendation no. 16 from the BSR report to develop a mechanism to track the prevalence of content attacking people on the basis of specific protected characteristics (for example, antisemitic, Islamophobic and homophobic content).
The Board will consider this recommendation implemented when Meta publishes the results of its first assessment of these metrics and issues a public commitment on how the company will continue to monitor and leverage those results.
Procedural Note
The Oversight Board’s decisions are made by panels of five members and approved by a majority vote of the full Board. Board decisions do not necessarily represent the views of all members.
For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology.
Memetica, a digital investigations group providing risk advisory and threat intelligence services to mitigate online harms, also provided research.
Linguistic expertise was provided by Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world.
Read the whole report here.
For more information about the Oversight Board on Facebook, see here.