There is no doubt that the power Big Techs possess over society is “big.” It is ironic that a largely deregulated sector is now regulating societies, a task that properly belongs to the law. The power Big Techs accumulate enables them to impose their own standards and policies on the whole world. This fragmentation of the digital legal environment can cause substantial harm to fundamental human rights. So far, however, societies have struggled to deal with this issue.
The main regulatory concern regarding Big Techs has so far been the retention of data. Although data retention regulation is important to address privacy concerns, another source of Big Techs’ power remains uncontrolled: their community standards and the mechanisms used to enforce them. Community standards are a set of guidelines used by Big Techs to determine what content is permitted online (e.g. Facebook’s community standards). They are the instrument through which Big Techs regulate our interlinked virtual societies, by setting out how users may make use of their services. These services have become essential to day-to-day life, as natural and legal persons rely on them in almost everything they do.
The community standards and policies are set by Big Techs, i.e. companies whose primary interest is generating profit. This raises the question whether we want a for-profit company, rather than a representative public entity tasked with safeguarding the public interest, to regulate societies. Big Techs pursue profit and are concerned only with their own interests. They answer not to society but to their investors, who expect a return on their investments. This commercial interest will inevitably be reflected in the way they frame their goals and values on their digital platforms and in the manner in which they design and enforce their community standards.
Big Techs create their own set of rules, the so-called “community standards.” What are their implications? Do Big Techs have the capacity to monitor the enforcement of these standards, and if so, how is this monitoring done? This article addresses these questions in turn and discusses the main criticisms of the use of algorithms. Big Techs have made efforts to address these issues, but are those efforts enough? To ensure that they are, there should be guidelines for Big Techs to follow. Such guidelines would not only offer a solution to some of the problems related to the use of algorithms, but would also provide valuable guidance on how to frame the community standards.
Community Standards
Big Techs draft community standards to decide what is or is not allowed on their platforms. These standards touch upon fundamental rights such as the right to freedom of expression (including freedom of speech), the right to privacy, the principle of non-discrimination, and the right to freedom of thought, conscience and religion.
For example, YouTube’s community standards state that hate speech is not allowed on YouTube and that content promoting violence or hatred against individuals or groups on the basis of attributes such as race or religion will be removed. Facebook’s community standards likewise contain rules on “violence and incitement.” For instance, it is not allowed to post threats that could lead to the death of (or other forms of serious violence against) one or more targets. Such threats include, among other things, statements of intent to commit serious violence.
This leads to a set of questions: who drafts the community standards? Who decides what content is permitted online? Since these standards are set by a rather homogeneous group of people from Silicon Valley, they raise issues of democratic legitimacy.
Big Techs’ Capacity to Monitor the Enforcement of Community Standards
It is clear that Big Techs have the financial capacity to monitor the enforcement of their community standards. In 2019, for instance, Twitter’s annual revenue was 3.46 billion US dollars, YouTube’s 15.15 billion US dollars, and Facebook’s 70.7 billion US dollars.
However, the number of people employed to review posts, videos, and pictures flagged by artificial intelligence (the so-called “content moderators”) is low. Twitter, for example, has only around 1,500 content moderators. YouTube and Facebook have about 10,000 and 15,000 content moderators, respectively. Although YouTube and Facebook employ significantly more content moderators than Twitter, this is still not enough, given the enormous amount of content that has to be reviewed every day.
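A back-of-the-envelope calculation illustrates the mismatch in scale. The sketch below is purely illustrative: it assumes roughly 500 hours of video uploaded to YouTube every minute, a commonly reported figure, together with the approximate moderator count mentioned above.

```python
# Illustrative arithmetic only: the upload rate is an assumption
# (roughly 500 hours of new video per minute, a commonly reported figure),
# and the moderator count is the approximate figure cited above.
UPLOAD_HOURS_PER_MINUTE = 500   # assumption, for illustration
MODERATORS = 10_000             # approximate YouTube figure

hours_uploaded_per_day = UPLOAD_HOURS_PER_MINUTE * 60 * 24        # 720,000 hours
review_load_per_moderator = hours_uploaded_per_day / MODERATORS   # 72 hours

print(f"New video per day: {hours_uploaded_per_day:,} hours")
print(f"Hypothetical review load per moderator: {review_load_per_moderator:.0f} hours/day")
```

Even under these rough assumptions, each moderator would face roughly 72 hours of new video per day, which is why human review alone cannot keep pace.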
Algorithmic Moderation and Its Criticism
As mentioned before, Big Techs have a small number of content moderators. As a result, they increasingly rely on algorithms for content moderation. This is called “algorithmic moderation” and is defined as “systems that classify user-generated content based on either matching or prediction, leading to a decision and governance outcome”. For instance, YouTube announced that 98% of the videos removed for violent extremism are flagged by algorithms.
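To make the “matching or prediction” distinction in that definition concrete, the sketch below shows a minimal, hypothetical moderation routine: content is first matched against a database of known banned material (here by exact hash) and otherwise scored by a stand-in predictive classifier. The hash set, keyword list and threshold are invented for illustration; real systems rely on far more sophisticated techniques such as perceptual hashing and machine-learned classifiers.

```python
# Minimal, hypothetical sketch of "matching or prediction" moderation.
# All values below (hash database, keyword list, threshold) are invented.
import hashlib

KNOWN_BANNED_HASHES = {"placeholder-hash-of-known-banned-content"}
FLAG_THRESHOLD = 0.8  # assumed decision threshold

def toy_violation_score(text: str) -> float:
    """Stand-in for a trained classifier: returns a pseudo-probability."""
    risky_terms = {"threat", "attack"}  # illustrative only
    hits = sum(term in text.lower() for term in risky_terms)
    return min(1.0, hits / len(risky_terms))

def moderate(content: str) -> str:
    digest = hashlib.sha256(content.encode()).hexdigest()
    if digest in KNOWN_BANNED_HASHES:                    # matching route
        return "remove: matches known banned content"
    if toy_violation_score(content) >= FLAG_THRESHOLD:   # prediction route
        return "flag for human review"
    return "allow"

print(moderate("a harmless holiday video description"))  # -> allow
```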
Automated moderation systems are preferred over human reviewers for practical reasons: an enormous volume of content on digital platforms has to be checked daily. The ability of a supposedly “scientifically impartial” system to examine large amounts of data in a short time frame is one of the main advantages of artificial intelligence. Algorithmic moderation has, however, also attracted much criticism.
One of the much-discussed issues is the lack of transparency and accountability. The complexity of automated moderation systems makes them difficult to understand and to control. Many questions remain as to how and to what extent algorithmic moderation is used to flag and take down content on digital platforms (e.g. How are these automated systems trained? What kind of data do they use?). The specific features of these systems are deliberately left unclear, which is why they are often called “black boxes”. Moreover, the databases containing removed content are inaccessible to everyone, including critical experts and auditors. The same is true for legal advisors who would like to use the flagged content as evidence of crimes before international and national courts.
The possible discriminatory impact of algorithmic moderation on certain social groups has also been widely discussed. For example, automated moderation systems designed to identify violations of a digital platform’s standards on hate speech might unreasonably flag language used by a specific social group, making it more likely that content posted by that group will be taken down.
Another problem with algorithmic moderation is its inaccuracy and unreliability. For certain content, such as hate speech and extremism, context is crucial in deciding whether or not it should be taken down. Although freedom of speech is regarded as a universal value, universality has not been achieved in practice, since cultures differ from one place to another. In short, algorithmic moderation handles context far less well than people do: automated moderation systems cannot take into account the different meanings that similar content carries across groups and regions.
Efforts of Big Techs to Address the Issues Related to Algorithmic Moderation
It should be noted that some Big Techs have tried to mitigate the above-mentioned issues. For example, Facebook established the “Oversight Board” (“Board”) to address the transparency and accountability problem. The Board consists of 40 members from all over the world, who are selected by Facebook itself (art. 1.8 Charter of the Board: “To support the initial formation of the board, Facebook will select a group of co-chairs. The co-chairs and Facebook will then jointly select candidates for the remainder of the board seats”). It has the authority to review content on the basis of a request for review made by persons using Facebook’s services or by Facebook itself. The Board has the discretion to choose which requests it will review and decide upon (art. 2 Charter of the Board).
It remains to be seen how the Board, when delivering its decisions, will deal with inconsistencies between Facebook’s community standards and international human rights standards. For example, the Board cannot uphold or reverse Facebook’s content decisions where the underlying content has already been blocked following the receipt of a valid report of illegality (see the Bylaws of the Board).
One might therefore ask whether the Board will be able to review cases of state censorship violating the right to freedom of expression. It is also noteworthy that the Charter of the Board focuses on only one fundamental human right, namely freedom of expression. Yet other fundamental human rights might be affected by content moderation decisions as well. As a result, the Board might be hesitant to prioritize content cases that implicate severe violations of human rights other than the right to freedom of expression.
Another example is YouTube’s Community Guidelines Enforcement Report. This report provides quarterly information about the videos and comments removed by YouTube. For example, it discloses how many videos and comments were removed, how they were detected (e.g. automated flagging or human detection), and the reasons for their removal. Remarkably, the vast majority of removed videos and comments were flagged automatically.
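To make the structure of such reporting concrete, the sketch below shows what a single machine-readable entry of a quarterly enforcement report could look like, mirroring the categories the report discloses (volumes removed, detection source, removal reasons). The field names and all figures are invented placeholders, not YouTube’s actual schema or data.

```python
# Hypothetical, machine-readable transparency report entry.
# Field names and all numbers are invented placeholders for illustration.
import json

report_entry = {
    "quarter": "2020-Q1",
    "removed_videos": 1_234_567,     # placeholder count
    "removed_comments": 9_876_543,   # placeholder count
    "detection_source": {            # share of removals, placeholder values
        "automated_flagging": 0.93,
        "human_flagging": 0.07,
    },
    "removal_reasons": {             # placeholder breakdown
        "spam_or_misleading": 0.40,
        "child_safety": 0.25,
        "violent_or_graphic": 0.15,
        "other": 0.20,
    },
}

print(json.dumps(report_entry, indent=2))
```

Publishing such information in a structured, machine-readable form would also make it easier for independent researchers and oversight bodies to audit enforcement over time.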
Guidelines for Improvement
When drafting their community standards, Big Techs should take into account the UN Guiding Principles on Business and Human Rights (“UNGPs”). This is a set of guidelines for states and business enterprises to prevent, address and remedy human rights abuses committed in business operations. The objective of the UNGPs is to enhance “standards and practices with regard to business and human rights so as to achieve tangible results for affected individuals and communities,” thereby “also contributing to a socially sustainable globalization.”
One of the most important principles of the UNGPs with regard to business enterprises is principle 11: “Business enterprises should respect human rights. This means that they should avoid infringing on the human rights of others and should address adverse human rights impacts with which they are involved”. This principle also applies to Big Techs since they qualify as “business enterprises.”
Besides the 31 principles of the UNGPs, a new set of guidelines should be adopted, one specifically tailored to Big Techs. These guidelines should not only address some of the issues identified above in relation to algorithmic moderation, but should also provide recommendations for Big Techs on how to frame their community standards.
Regarding the transparency and accountability issues explained above, the new set of principles should include an obligation for Big Techs to publish their community standards, the specific features of their automated moderation systems, and transparency reports (e.g. YouTube’s Community Guidelines Enforcement Report). They should also be obliged to create some form of independent oversight mechanism (e.g. Facebook’s Oversight Board). As to the latter recommendation, the creation of a national or international oversight body, independent from Big Techs, is highly recommended; such a body would issue independent reports monitoring Big Techs’ mechanisms and procedures.
Concerning the adoption of community standards, the new set of guidelines should also provide for the participation of other societal actors, such as governments and civil society organizations, in order to address the democratic legitimacy concerns: Big Techs should not be the only ones with a say in drafting the community standards. The same applies to deciding what kind of data should be used to train the automated moderation systems, since the personal biases of the experts involved can be expected to find their way into the training data.
Conclusion
Big Techs have too much unfettered power over society and should not be entrusted alone with setting the standards on what is or is not allowed on their platforms. They are private, for-profit companies that do not sufficiently take the public interest into account and cannot be held democratically accountable for doing so.
Currently, the community standards are enforced mainly through algorithmic moderation. However, concerns about transparency and accountability, discriminatory impact, and accuracy and reliability have been raised in this regard. Big Techs have tried to mitigate these issues, but their efforts are neither effective nor efficient enough.
Big Techs should adhere to certain guidelines when drafting their community standards. Besides the UNGPs, a new set of principles should be adopted, supplementing them and specifically tailored to Big Techs. These guidelines would not only offer a solution to some of the problems identified in relation to algorithmic moderation, but would also provide valuable guidance on how to frame the community standards.
Since Big Techs have the financial capacity to monitor the enforcement of their community standards, they should also be incentivised to spend more money on personnel and oversight mechanisms. In addition to these incentives, sanctions could be imposed after a certain transitional period, such as the suspension of data processing: Big Techs’ biggest nightmare.
Eline Labey is a PhD student at the Vrije Universiteit Brussel (VUB), specializing in international criminal law and international human rights law. She earned her LLM in international law from the University of Cambridge.