In a long-overdue move, Meta finally published its 2024 annual human rights report in December 2025, releasing it just ahead of the holiday break, a timing choice that seems designed to curtail scrutiny from the press and civil society. The delay is particularly striking given that the company's previous human rights report appeared in September 2024, and it suggests a consistent pattern of contempt for human rights and for the company's commitments under the United Nations Guiding Principles on Business and Human Rights (UNGPs).
Much of the 2024 report's substance is already outdated, given the policy changes Mark Zuckerberg announced in January 2025. The content is superficial: it refers only vaguely to the UNGPs and the GNI Principles as its main human rights standards, and it draws sporadic, decontextualized examples from Meta's minimal efforts to mitigate human rights risks worldwide. The report covers a range of topics but provides descriptive rather than analytical information.
A significant portion of the report is devoted to AI and elections, the topics that dominated public discourse in 2024. The section on AI, however, reads more like an advertisement for Meta's new products, with only limited and generic mention of the safeguards put in place. As 2024 was the largest election year in history, Meta faced mounting pressure to adopt and implement more robust measures against election-related risks, including those stemming from the growing use of AI. Yet even if Meta did invest in measures to prevent the malicious use of its services and products during the 2024 elections, the following year it removed fact-checking entirely in the United States, most likely nullifying whatever improvements it had made.
In a particularly concerning detail, the elections section boasts about new communication channels established with government authorities and law enforcement agencies for content removal requests, including requests to remove content under Meta's own Community Standards. The report offers no further information about which authorities and agencies are involved.
These types of requests have been criticized by civil society and Meta’s own Oversight Board as encroaching on human rights. Unlike requests for content removal under national legislation, which are tracked in Meta’s own transparency reporting and have only local effect, requests under the company’s Community Standards result in the complete removal of the content from the platform on a global scale. Meta hasn’t fully complied with the Board’s recommendations around such requests.
One important section that merited far more detail is the one dedicated to responding to crises and conflicts. After years of civil society repeatedly insisting on the importance of attending to different dialects and linguistic nuances, Meta has finally acknowledged that Arabic cannot be treated as a single language. In the context of the conflict in Sudan, the company tested a new system that “can identify the particular dialect of Arabic used, and direct the content to the reviewer most likely to understand it.” But will Meta apply this approach in other contexts as well?
Meta’s Crisis Policy Protocol is repeatedly invoked as a universal remedy, even though it has time and again proven inadequate, especially in the context of Israel’s war on Gaza. The report mentions the war briefly under the “Middle East” section and calls it “a priority for Meta,” yet deems it worthy of only a single page.
Despite Meta’s claim that it focused on respecting freedom of expression, Palestinian voices were severely over-moderated throughout 2024, while Hebrew-language content inciting violence remained largely under-moderated, an issue the MENA Alliance for Digital Rights (MADR) has raised persistently. Similarly, SMEX and its partners urged Meta not to include the term “zionist” as a proxy term for hateful conduct. Yet the company celebrates this new policy throughout its 2024 report as if it were uncontroversial.
Syria also receives only a brief mention, as a case study on the contribution of trusted partners to shaping Meta’s crisis response efforts and policy implementation. Interestingly, in May 2025, the Oversight Board announced that it would examine the impact of Meta’s content moderation on freedom of expression in Syria, based on two cases, and invited people and organizations to submit public comments. SMEX submitted a public comment urging the Board to overturn Meta’s removal of the two posts.
Our submission highlighted how the removal of the two posts at issue reflects a recurring pattern: Meta’s persistent over-moderation of Arabic content and a systematic failure to account for the political and human rights realities of the West Asia and North Africa (WANA) region, as well as a simultaneous failure to understand coded hate speech such as that directed at Syria’s Druze communities and other minorities.
Trusted partners are frequently invoked throughout the report as evidence of Meta’s engagement with various stakeholders, including civil society. The company goes so far as to claim that it has improved the speed at which it responds to content reported through the Trusted Partner program; however, our experience at SMEX’s Digital Safety Helpdesk, and that of other helpdesks in the WANA region, suggests otherwise.
Once more, Meta opts for optics over substance. Long on generalities and short on specific examples of action taken to prevent human rights violations on its platforms, the 2024 Human Rights report is consistent with Meta’s lack of meaningful investment in the protection of human rights, both online and offline.
In 2026, Meta can no longer evade accountability by offering surface-level fixes to human rights issues. Meta bears clear responsibilities under international human rights law—responsibilities articulated in the UN Guiding Principles on Business and Human Rights. These principles require Meta and other tech companies to identify, prevent, and actively address human rights harms linked to their products instead of merely acknowledging them or providing cosmetic solutions.
Featured Image from AFP.