Laying the Ground for Further Repression of Palestine’s Supporters
Seventy-three civil society organizations, including SMEX, have urged Meta to reverse its decision to designate the term “Zionist” as hate speech and to censor content containing it. Additionally, Action Network launched an open petition demanding that Meta stop suppressing Palestinians’ right to speak freely about “Zionism,” a political ideology that directly affects their survival, without fear of retaliation.
The statement expressed concern over Meta’s decision, which could place the term “Zionist” on the same level as “Jew” and “Israeli.” This constitutes a clear linguistic and cultural distortion, one that would further suppress freedom of expression and impose strict restrictions at a time when protected global political discourse is essential. Such measures could prevent millions of users from freely sharing their thoughts and opinions on the Palestinian issue, without reasonable justification.
Meta, the parent company of platforms like Instagram and Facebook, insists on conflating terms with distinct meanings and connotations. Unlike “Jew,” which refers to adherents of the Jewish religion, and “Israeli,” which applies to holders of an Israeli passport, the term “Zionist” carries a political and ideological connotation. Merging religious and political categories under the justification of combating hate speech therefore risks further repression and infringement upon users’ freedoms.
Regarding this matter, Metehan Durmaz, a policy analyst at SMEX, emphasizes the importance of recognizing the emotional and political significance of words when describing a specific individual or group. Many words employed in hate speech carry deep historical and emotional weight and are often used to achieve specific aims.
Durmaz notes that terms like “Zionist” do not inherently carry derogatory or harmful meanings. Instead, they serve as descriptors within environments marked by geopolitical controversy. This is not the first time Meta has protected an extremist ideology that does not reflect the beliefs of an entire minority. Durmaz suggests that before making such decisions, Meta should engage in transparent discussions with civil society groups, which clearly has not happened in this case.
If this decision is adopted, posts are expected to be deleted and accounts suspended under allegations of anti-Semitism and extremism. This would create barriers that suppress criticism and dissent and impede everyone’s ability to communicate effectively.
Mohamad Najem, Executive Director at SMEX, raised concerns about the ramifications of including the term “Zionist” in the protected category, as doing so would open the door to demands that other political ideologies receive similar protection. He questions why ideologies such as socialism, capitalism, or political Islam aren’t similarly shielded from criticism.
Najem warns that such a step would pose a significant obstacle to freedom of expression globally. He highlights the importance of the ongoing advocacy by numerous organizations and human rights activists worldwide to halt this outdated and recurring initiative, with progressive Jewish institutions in particular opposing the policy.
Not the First Time
Meta has been assessing whether to classify the term “Zionist” as hate speech in light of the Israeli aggression on the Gaza Strip that began in October 2023. A Meta spokesperson said in press statements that “given the escalating polarized public discourse surrounding events in the Middle East, we believe it is important to evaluate our guidelines for reviewing posts that utilize the term ‘Zionist’.”
Meta’s track record is filled with errors and misjudgments in moderating content related to Palestine. For instance, when users prompted Meta’s artificial intelligence sticker tool on WhatsApp to create a sticker containing Palestinian children, it depicted them holding rifles.
The company also bans and suspends widely followed accounts that share news from within Gaza. It has previously prohibited words like “Al-Aqsa” and flagged the term “Al Hamdoulillah” (Praise be to God) as extremist speech, triggering its “dangerous individuals and organizations” policy.
In October 2021, The Intercept published a report exposing Facebook algorithms that automatically censored posts containing specific words such as “martyr,” “Qassam,” “resistance,” and “Ayyash” (referring to the martyr Yahya Ayyash).
Meta continues to adhere to similar methods despite the significant violations resulting from its content moderation policies. This was acknowledged by Meta’s Oversight Board in March 2023, which expressed its intent to reevaluate how the Arabic word for “martyr” is handled, a word that has triggered more content removals on Meta’s platforms than any other single word or phrase. Thomas Hughes, Director of the Oversight Board Administration, emphasized the complexity of the issue, highlighting concerns about excessive control over content within Muslim and Arabic-speaking communities due to Meta’s moderation practices.
Back in September 2023, the Oversight Board was expected to issue a decision on how to address the word “martyr” under a new approach that takes users’ rights into consideration, but it has failed to deliver one.
Furthermore, a report issued by the Board in December concluded that Meta violated its own content rules by removing two posts about the war on Gaza from its platforms in October. One of these posts depicted the aftermath of an airstrike near Al-Shifa Hospital in Gaza, showing children who appeared to be wounded or killed.
Despite these instances, why does Meta persist in taking similar actions that potentially foster extremism and suppression?
In a report released last December, Human Rights Watch documented over 1,050 instances of content removal and blocking on Instagram and Facebook between October and November 2023, affecting Palestinian users and their supporters, including content addressing human rights violations. Of the 1,050 cases examined, 1,049 involved peaceful content advocating for Palestine that was unjustifiably removed or shadow banned. Only one case involved the removal of content supportive of Israel.
Meta has consistently relied on its “Dangerous Individuals and Organizations” policy, whose underlying list largely mirrors the United States’ designations of “terrorist organizations.” This policy has been used to restrict legitimate expression in hundreds of documented cases.
Meta’s Violation of Promised Standards
As a company that vows to uphold human rights law and the principles of the United Nations Global Compact while embracing the values of the European Union, Meta is expected to foster an open and inclusive online environment.
Freedom of expression is a cornerstone of democratic values, safeguarded by international conventions and enshrined in the First Amendment of the Constitution of the United States, the country where Meta is headquartered. Although these instruments are not legally binding on private entities, Meta has voluntarily pledged its commitment to broad human rights principles.
The European Union emphasizes the importance of safeguarding freedom of expression not only at the state level but also among private entities operating within its jurisdiction, including Meta. Consequently, reports of excessive moderation of Arabic content and of discussions about the Palestinian cause and its surrounding realities call into question Meta’s true dedication to the principles it claims to uphold.
The Human Rights Council previously affirmed in its resolution on the promotion and protection of human rights on the Internet that “the same rights that people enjoy offline must also be protected on the internet, especially the right to freedom of expression.” This assertion starkly contrasts with the current digital landscape.
Double Standards in Policy Implementation
Meta’s content moderation policies and biases violate the human rights of Palestinians, subjecting their speech to intensive monitoring that results in both censorship and self-censorship. Meta uses a customized system to gauge the likelihood that content is abusive. Since October 7, the company has adjusted its algorithms to automatically hide comments from users in Palestine: a comment is hidden if the system estimates at least a 25% probability that it contains offensive speech, whereas the threshold for the rest of the world is set at 80%.
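To illustrate how stark that asymmetry is, the following minimal Python sketch (an illustration only, with hypothetical scores and function names, not Meta’s actual system) compares how many comments from the same pool would be auto-hidden under a 25% threshold versus an 80% threshold:

```python
# Illustration only: hypothetical scores and function names, not Meta's real system.
# It compares how many comments from the same pool would be auto-hidden
# under the two reported thresholds.

# Hypothetical classifier scores: estimated probability (0.0-1.0)
# that a comment contains offensive speech.
comment_scores = [0.10, 0.20, 0.30, 0.45, 0.60, 0.75, 0.85, 0.95]

def hidden_comments(scores, threshold):
    """Return the comments that would be automatically hidden at a given threshold."""
    return [s for s in scores if s >= threshold]

# 25% threshold reportedly applied to users in Palestine.
hidden_palestine = hidden_comments(comment_scores, threshold=0.25)
# 80% threshold reportedly applied to users elsewhere.
hidden_elsewhere = hidden_comments(comment_scores, threshold=0.80)

print(f"Hidden at 25% threshold: {len(hidden_palestine)} of {len(comment_scores)}")  # 6 of 8
print(f"Hidden at 80% threshold: {len(hidden_elsewhere)} of {len(comment_scores)}")  # 2 of 8
```

Under this hypothetical pool of comments, the lower threshold hides three times as much content, which is the crux of the double-standards critique.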
According to the “Hashtag Palestine 2023” report published by the “7amleh” Center on January 17, 2024, based on data from the “Palestinian Observatory of Digital Rights Violations” (7or), 4,400 cases of violations were documented. These ranged from content removal and restrictions to hacked accounts, hate speech, and incitement to violence. Notably, 69% of these cases were documented after October 2023, coinciding with the onset of the Israeli aggression on the Gaza Strip.
Strikingly, the report reveals that nearly three million instances of violent, hateful, and inciting content in Hebrew were directed against Palestinians, yet none were addressed and no accounts were suspended as a result.
According to Durmaz, the shortage of Hebrew-speaking moderators, along with moderators unfamiliar with the various Arabic dialects, exacerbates this issue. The fundamental problem, however, lies within Meta’s policies themselves.
Durmaz adds that Meta’s explicit alignment with the political positions of the United States, particularly evident in its “Dangerous Individuals and Organizations” list and policy, sheds light on why Arabic content is restricted and removed disproportionately compared to other languages.
He emphasizes, “Most of the list, leaked in previous years, is made up of individuals and organizations classified as aggressive or dangerous by the United States. We are currently witnessing a US policy that disregards human rights and supports manifestations of genocide perpetrated by Israel, a stance clearly reflected in Meta’s content moderation practices.”
In September 2022, an independent report revealed the extent of Meta’s over-enforcement of moderation policies on Arabic content, in clear contrast with its leniency towards Hebrew content.
Public and Covert Collaboration
In January, a data researcher at Meta’s office in New York accused the company of tightening content moderation policies to restrict discussions of Gaza and support for the Palestinian narrative. In a video, she reported collecting 450 signatures from colleagues in half a day in support of her opposition to the company’s suppression of Palestinian employees’ voices. Meta swiftly deleted her post from its internal forum and restricted her access to its systems, prompting an investigation.
On October 17, an Israeli airstrike targeted Al-Ahli Hospital in Gaza, resulting in the martyrdom of over 500 victims, including doctors and hospital staff, most of them civilians. Despite the newsworthy nature of the content, Instagram and Facebook insisted on removing clips documenting the massacre, citing “nudity and sexual activity.” This highlights Meta’s inconsistency in applying its own “newsworthy content” policy, which occasionally permits the publication of content that would otherwise violate community standards.
In February 2023, Israeli Prime Minister Benjamin Netanyahu tasked National Security Minister Itamar Ben-Gvir with forming a special team to counter incitement on Palestinian social media platforms. Ben-Gvir’s team, supported by investigators and police in coordination with the Ministry of Justice, includes officials from the General Security Service (Shin Bet), the army, and the National Cybersecurity Authority.
The team is divided into three groups: one initiates investigations and prosecutes accused individuals, another monitors and gathers inflammatory content on Palestinian social networks, and the third proposes legal strategies for addressing online incitement. That these developments unfold within Meta’s platforms calls into question its claims that they are engineered to prioritize user rights and foster open discourse.
On May 15, 2021, amid escalating Israeli aggression on Gaza, Israeli Defense Minister Benny Gantz met with representatives from Facebook and TikTok. After news of the meeting leaked, Facebook faced criticism and subsequently met with representatives of the Palestinian Authority. Later that month, Facebook apologized to Palestinian Prime Minister Mohammad Shtayyeh over complaints regarding the “Israeli-Palestinian conflict” and established a specialized team to address content related to the subject.
During Israel’s forced displacement of Palestinian families in Jerusalem in 2021, significant censorship of Palestinian content occurred across the platforms of Meta (then still named Facebook). The suppression ranged from the unjustified, systematic flagging of keywords and posts, and warnings against their use, to the swift removal of Instagram Stories about the situation on the ground.
Meta is complicit in the tragedy and massacres suffered by Palestinians due to its delayed response, negligence, and insistence on silencing victims while perpetuating Israeli propaganda, all of which could have been prevented. This makes the company a partner in the ongoing genocide against the Palestinian people.
If Meta wishes to rectify its stance, it must first investigate Israeli content spread online, given the repeated attempts by Israeli parties to manipulate public opinion through paid advertisements and fake news. It should adopt neutral moderation policies that are not shaped by any country’s positions or interests, especially while conflict, war crimes, and massacres are taking place. In essence, Meta needs a comprehensive review of its content moderation policies to ensure they align with the values of freedom of expression outlined in the Universal Declaration of Human Rights and the principles of the European Union, values to which it has committed itself before the eyes of the international community.
We have seen Meta respond differently to other crises, such as the emergency measures it took at the outbreak of the war in Ukraine. These included hiding information on personal accounts located in the countries involved, such as follower counts, followed accounts, and mutual followers. This shows that Meta has the capacity to genuinely address such situations, in stark contrast with its handling of the Palestinian context.