A newly released report on Meta’s content moderation practices confirmed the company’s documented over-enforcement of Arabic content (e.g., erroneously taking down posts, restricting reach, and falsely flagging content as inappropriate) in May of last year. Commissioned by Meta and conducted by BSR, the long-awaited report provides evidence of disparate treatment of Arabic content compared to Hebrew, as well as “unintentional bias” against the Palestinian narrative. BSR’s findings confirm that Meta’s content moderation policies violated Palestinian human rights.
Last May, Palestinians in East Jerusalem turned to social media to document for the world a violent Israeli campaign to evict them from their homes. Meta’s platforms, and specifically Instagram, were the primary channels used to broadcast Israeli police brutality as well as assault and intimidation by Jewish settlers against residents of the Sheikh Jarrah neighborhood. In response, Meta deliberately censored hashtags, took down posts, and limited the distribution of content by Palestinians documenting Israeli violence. At the same time, it failed to address Hebrew-language calls for the slaughter of Palestinians.
After countless campaigns by civil society, notably the Stop Silencing Palestinian Voices campaign, Meta commissioned BSR to audit its actions and inactions during the May 2021 Sheikh Jarrah raids in Palestine.
According to the report, the company’s practices had an adverse impact on Palestinian human rights, including freedom of speech (from content over-enforcement), freedom from incitement (due to under-enforcement of content inciting violence), and non-discrimination. While there is no hard evidence of intentional bias, over-enforcement mostly targeted Palestinian content, as material reviewed “showed that proactive detection rates of potentially violating Arabic content were significantly higher than [those in] Hebrew.”
The report also attributes over-enforcement to errors in Palestinian Arabic classifiers and speculates that content may have been routed to moderators who do not understand the specific dialect. The report does not, however, address the role of the Israeli government, and particularly the Israeli Cyber Unit, in the censorship of Palestinian content, despite Meta complying with most of the Cyber Unit’s demands.
BSR’s report also noted that Meta’s Dangerous Organizations and Individuals (DOI) policy had a disproportionate focus on “individuals and organizations that have identified as Muslim,” clearly stating that the DOI policy has a higher impact on Palestinian and Arabic-speaking users. However, BSR did not share quantitative data on content moderation policies or on the number of requests received from the Israeli Cyber Unit, raising questions about Meta’s transparency.
Moreover, BSR’s report cites only unintentional bias in Meta’s policies and practices, even though Meta was clearly aware of the negative impact they have had on Palestinian content. Meta responded to the BSR report in a document that added, in a footnote, that “Meta’s publication of this response should not be construed as an admission, agreement with, or acceptance of any of the findings, conclusions, opinions or viewpoints identified by BSR, nor should the implementation of any suggested reforms be taken as admission of wrongdoing.”
BSR’s recommendations are a step in the right direction, as they echo previous demands put forth by human rights organizations. They urge Meta to review its content moderation policies, increase transparency, invest in Hebrew and Arabic content moderation resources, and clarify its legal obligations concerning the Foreign Terrorist Organizations and Specially Designated Global Terrorists lists.
We call on Meta to implement these recommendations within a transparent timeframe and to commit further by establishing co-design processes with civil society organizations.