This research was conducted as part of the Masarouna project, a five-year program that mobilizes young people in the Middle East and North Africa to claim their sexual and reproductive health and rights (SRHR). Researchers and policy analysts at SMEX explored SRHR within the scope of digital rights in the West Asia and North Africa (WANA) region. They analyzed SRHR-related content moderation policies and practices on the social media platforms Facebook, Instagram, TikTok, X (formerly Twitter), and YouTube and provided recommendations based on their findings.
Executive summary
This research analyzes the content moderation policies and practices of Facebook, Instagram, TikTok, X (formerly Twitter), and YouTube regarding content related to sexual and reproductive health and rights (SRHR) in the West Asia and North Africa (WANA) region. The aim of this research is to establish, through policy analyses and interviews with SRHR-promoting organizations, how these platforms treat such content, both in theory and in practice, and what impact their policies have.
The data for this research, both qualitative and quantitative, was drawn from desk research, policy analyses, and interviews with ten organizations and activists from WANA who have direct experience dealing with content moderation of SRHR-related posts and ads. Using an adaptation of the Ranking Digital Rights methodology, the content moderation policies of five social media platforms were carefully analyzed and compared. In addition, the interviews conducted for this research show how these policies are applied in practice and what their implications are.
The findings of this research reveal a bleak picture. Platforms do not have SRHR-specific policies; instead, relevant rules are scattered across different documents, such as community guidelines and advertising policies. These rules typically fall under “adult” or “sexual” content, leaving room for vague and cursory regulation. Posts, ads, and accounts are often restricted or removed on vague grounds, sometimes with illogical or irrelevant explanations, even when the content is innocuous and far from explicit in any form. Ad rejection, in particular, has proven to be a challenge for SRHR advocates, who engage in lengthy and time-consuming appeal processes, making self-censorship a common practice. Finally, Arabic content is met with harsher restrictions than similar English-language content.
It is worth mentioning that no cases were documented where a platform took action due to third-party demands, particularly government demands to restrict content and accounts violating local laws. Hence, this report focuses only on restrictions and actions platforms take to enforce their own policies and rules.
The report’s first section provides an overview of platform content moderation practices and censorship of SRHR content in the region, including the types of restrictions organizations and activists face. The second section explores platforms’ strict moderation of advertising content on sexual and reproductive health from regional organizations. In the third section, the research looks into how platforms contradict their own policies by restricting these organizations’ or activists’ informative and artistic content. In the fourth section, we assess the appeal mechanisms provided by platforms and how they are failing users in the region. Finally, the fifth section documents the censorship of regional queer voices and LGBTQ+ organizations by platforms.
Key findings
- We documented restrictions on all platforms covered by this study (Facebook, Instagram, TikTok, X, and YouTube), as well as on LinkedIn, which is owned by Microsoft. Among Meta’s platforms, Instagram was responsible for the most restrictions, followed by Facebook. Five out of the 10 interviewees said they faced 15 to 20 restrictions on these two platforms in 2022 alone.
- Restrictions imposed on organizations and content creators posting about SRHR included content takedowns, account removals, ad rejections, and limited organic reach. Their content was restricted mainly for violating platform policies on “sexual” and “adult” content. The reasoning provided by platforms did not always make sense to interviewees, who shared cases of innocuous content, such as medical or educational information, being removed because it was deemed “sexual” or “adult” content.
- In their policies, all platforms make exceptions for the publication of “sexual,” “adult,” and nude content when it serves educational, medical, or artistic purposes. However, we documented several cases where platforms contradicted their own policies by removing SRHR-related content that was educational, artistic, or otherwise covered by these exceptions.
- Ad rejection, in particular, has proven to be a challenge for SRHR advocates, preventing them from widely disseminating their content and essential information. Four out of 10 organizations and activists we interviewed said they lost access to one or more of their ad accounts. Most rejected ads get approved after review. However, interviewees criticized the appeal process, describing it as time-consuming, inefficient, and unpredictable.
- According to our policy analysis of five platforms (Facebook, Instagram, TikTok, X, and YouTube), all platforms disclosed mechanisms for users to submit content moderation appeals. However, these mechanisms did not always cover all forms of content moderation actions. In the case of TikTok and X, very little information was provided on how they handle appeals. Meta was the most transparent about its process for reviewing appeals. Not a single platform disclosed the role of automation in reviewing appeals.
- Interviewees slammed platforms’ content moderation practices as “biased” against the WANA region and the Arabic language, arguing that platforms often fail to consider the context in which their content is published.
Recommendations to platforms
- Conduct human rights impact assessments. Platforms should conduct human rights impact assessments of how their rules and policy enforcement affect sexual and reproductive health and rights, as well as freedom of expression and access to information as enablers of SRHR. These assessments should investigate reports of bias against the region and its languages and dialects, along with double standards in the moderation of content dealing with female anatomy and sexual pleasure, which faces much more severe moderation than other content, particularly when it is in Arabic. The assessments should also consider the role and shortcomings of AI in moderating content in the languages spoken in the region.
- Clarify the reasoning for restricting “adult” and “sexual” content. Platforms should clearly specify which content, including advertising, is considered “sexual,” “adult,” or “inappropriate,” and they should refrain from applying vague and broad interpretations of these categories to content and ads. We documented several cases where platforms cited such policy violations to restrict rather innocuous content.
- Improve the training of content moderators in recognizing SRHR content and its local nuances. Platforms should train their content moderators to recognize SRHR-related content that is currently misclassified as “adult” or “sexual” content. In addition, given the nuance involved in these cases, a higher proportion of human review is required.
- Enforce and improve exceptions for educational, medical, scientific, and artistic content. Platforms should consistently enforce their own policies on granting exceptions for “nude,” “partially nude,” “adult,” or “sexual” content when posted for health, informative, artistic, and scientific reasons. They should also improve existing policies by steering away from vague language (e.g., “exceptions may be made…”) and granting more exceptions (for example, TikTok does not allow artistic exceptions).
- Review advertising policies on sexual and reproductive health and well-being. Platform advertising rules on these topics are very strict, banning, for example, advertisements that focus on “sexual pleasure” (Meta, TikTok) or that “display excessive visible skin.” While it is understandable that some of this content should be restricted from minors, blanket bans prevent the broader dissemination of essential information on SRHR. Platforms should also clarify how they enforce their advertising rules, particularly the role of automation in deciding whether an ad is rejected or taken down.
- Clarify the legal basis for local ad restrictions. Platforms restrict advertising related to sexual and reproductive health and well-being in specific markets, such as non-prescription contraceptives (X) and “birth control or fertility products” (YouTube) in most WANA markets. Platforms must specify the legal basis under which such ads are banned in each country and refrain from imposing generalized bans based on assumptions about what activities or content are acceptable in the region.
- Be transparent about restrictions on visibility. Platforms must be transparent about their practices of decreasing the visibility of content, ads, accounts, pages, etc. They should notify users when they take such actions, explain their reasons, and provide avenues to appeal these restrictions.
- Dedicate resources for fair and human rights-centered content moderation in the region. Platforms should improve their content moderation in Arabic and its dialects and in other languages spoken in WANA. They should ensure that their content moderation teams are adequately trained on human rights, linguistically diverse, and equipped with the contextual knowledge needed to make human rights-centered content moderation decisions. By deploying diverse training data sets, platforms should also ensure that their content moderation AI is adequately trained on the region’s languages, dialects, and contexts.
- Improve the content moderation appeal process. Platforms should improve their content moderation appeal mechanisms to cover all types of restrictions, including content removal, ad rejection, suspension of accounts, pages, and advertising accounts, and deliberate decisions to decrease the visibility of content and accounts. When a platform removes content using automation, users and advertisers should be able to appeal to a human moderator or team. Platforms should also specify their timeframes for notifying users of appeal decisions.
- Take action against users who abuse flagging mechanisms. Platforms should address the practice whereby groups of users mass-report content or accounts they want to silence. Repeat abusers should be barred from reporting or flagging content in the future.
You can read or download the complete report here.