As it happens, governments like Lebanon's are not the only ones censoring online expression. Internet companies, including Facebook, Twitter, and Google, regularly remove user content deemed to violate either their terms of service (TOS) agreements or national laws such as the U.S. Digital Millennium Copyright Act and hate speech statutes. The result is that corporations can end up determining what constitutes free expression arbitrarily, without due process or input from citizens.

In an attempt to tackle this issue, designer and technologist Ramzi Jaber and free expression advocate Jillian York teamed up in 2012 to create Onlinecensorship.org, after they both noticed posts disappearing from their friends' Facebook feeds. A 2014 Knight News Challenge grant supported further development of the idea and a new site, which launched last week.

Users of Facebook, Google, Twitter, Flickr, Instagram, and YouTube can report content that a platform has taken down. These reports are then aggregated, analyzed, and in some cases shared "to encourage companies to operate with greater transparency and accountability toward their users as they make decisions that regulate speech."

To learn more about the platform and how it can benefit users in our region, we spoke to Ms. York, who is also a member of SMEX’s Advisory Group. 

Q: What does online censorship mean for the average user, and why do they need to be aware of it?

A: The average user is probably unaware of the degree to which companies control their users’ speech…until they experience it firsthand. That’s why we created Online Censorship. We want to be able to show how much censorship is occurring on social media platforms and to help users understand why this matters.

Q: How does Onlinecensorship.org work, and what is the logic of the process?

A: Users who visit the site can access a range of information. We have a page that demonstrates how each site's appeals system works, as well as a number of articles and resources users can read.

The Submit Report section is the main focus of the site. It offers a series of questions that help us better understand what types of content are being censored, and why. We ask a number of different questions because we want to be able to conduct different analyses of the data we receive: to show, for example, what type of speech is commonly censored in a given geographic area, or how different companies approach hate speech.

Online censorship testing session held in Beirut, July 2015

Q: What, mainly, do you want users in the Arab region to share on the platform?

A: We want to hear from all kinds of users. We know about political censorship happening in the Arab region, but I imagine that the same censorship we hear about in the U.S. is happening here too. We're here to demand more transparency from companies, which have such a large stake in the places we treat as our modern public squares.

Q: How interested are the companies concerned in collaborating on a project like yours?

A: We’ve been in conversation with the companies for a while. While we don’t expect we can get them to be completely transparent, we know from experience that companies listen to NGOs and users when there’s critical mass. We hope to create that critical mass.

Q: What’s the ideal outcome? Can you mention an example as a prototype?

A: The ideal outcome is that we receive a robust set of data that allows us to really see into how these companies work. With that transparency, users will be better equipped to push back against policies they find overly restrictive.

Q: And finally, how does a user get started?

A: The process is easy: just visit the website and submit your report there.
