Great expectations surround Facebook’s Oversight Board, an audacious experiment in platform self-regulation, with its first 20 members announced on May 6. It was a milestone in Facebook’s response to compelling calls for greater accountability and transparency. The board evokes constitutional law notions in various ways: its original conceptualization by Mark Zuckerberg as “almost like a Supreme Court;” its power to deliver “binding decisions,” as well as “policy advisory statements;” the institution-building language and framework of its charter and bylaws; and the fact that its initial membership encompasses six constitutional lawyers, including three specialists in American constitutional law, with two serving as co-chairs.
Such “constitutional features” exist against a backdrop of approaches to content moderation by Facebook and other platforms that so far have been “undergirded by American free speech norms.” Facebook’s creation of the board suggests a clear and conscious turn from such a U.S. constitutional-law paradigm towards an international human rights approach in content moderation by the world’s most powerful social media company. But the nature and degree of this shift depend on how the board, in delivering its decisions within the confines of its jurisdiction, will interpret the relationship between Facebook’s community standards and values, on the one hand, and international human rights standards, encompassing international treaty law and non-binding standards, on the other.
The shift was heralded by the four co-chairs of the board in their New York Times op-ed the day of the announcement. The writers affirmed the board’s “[commitment] to freedom of expression within the framework of international norms of human rights” and “without regard to the economic, political or reputational interests of the company.”
The same shift is reflected in the board’s basic documents. The charter states that “[when] reviewing decisions, the board will pay particular attention to the impact of removing content in light of human rights norms protecting free expression.” Under the bylaws: board members are required to “[participate] in training on … international human rights standards;” administration staff should have expertise in, among other things, human rights; Facebook’s response to board decisions and recommendations should “be guided by relevant human rights principles;” and the board’s annual report should include, at minimum, an “analysis of how the board’s decisions have considered or tracked the international human rights implicated by a case.” The preamble-like values that the board should use to interpret Facebook’s content policies, notably its community standards, further indicate that the company “looks to international human rights standards” in determining whether content “that would otherwise go against” the community standards should nonetheless be allowed.
Evolving Standards Over Time
The turn to international human rights standards at Facebook has not happened overnight. Over the past two years, such standards have become an explicit reference point for senior leadership at Facebook and the board. In August 2018, Vice President of Policy Richard Allan stated that the company “[looks] for guidance in documents like Article 19 of the International Covenant on Civil and Political Rights” (ICCPR). In September 2019, around the time that human rights language – including the statement that “all people are equal in dignity and rights,” with its obvious echoes of the Universal Declaration of Human Rights – was added to Facebook’s values, Zuckerberg wrote that enforcement of the community standards is “guided by international human rights standards.” In presenting the board’s first members to civil society representatives on a conference call on May 6, Director of Oversight Board Administration Thomas Hughes emphasized the specific relevance of the United Nations’ Guiding Principles on Business and Human Rights for the board’s operations, particularly Guiding Principle 31 on the “effectiveness criteria for non-judicial grievance mechanisms.”
These rhetorical references to international human rights standards are reinforced by key decisions and appointments. In May 2019, Facebook commissioned a human rights review of the proposed scope and structure for the board, as laid out in its draft charter. That may be a sign of the company’s desire to enhance the board’s human rights identity at a formative stage, though the review’s recommendations appear to have had limited influence on the final charter and bylaws. As for appointments, in July 2019, Miranda Sissons, formerly a senior associate at the International Center for Transitional Justice, was appointed as Facebook’s first director of human rights. In January 2020, Facebook appointed Hughes, who formerly served as executive director of the freedom of expression organization ARTICLE 19, to lead the board’s administration. According to the board’s website, six of its members, including former U.N. Special Rapporteur on the rights to freedom of peaceful assembly and of association Maina Kiai, have expertise in human rights or international human rights law.
The acknowledgement of international human rights standards is important, even necessary, for the board to work effectively. Global norms offer the board and Facebook a sense of global legitimacy, credibility, and appeal, especially outside the United States. (At the same time, international standards may undermine their appeal within the United States among those who prefer content moderation to be based on First Amendment standards.) The recognition of global norms in the board’s basic documents also helps offset shortfalls in the board’s global diversity: five of the board’s first 20 members are from the United States, even though about 90 percent of Facebook users are outside the U.S.; there are currently no members from Central Asia or North Africa; and, despite having 60 percent of the world’s population, Asia is severely underrepresented, with only four members so far.
Facebook’s business interests may be driving part of this shift. Embedding such standards in the board’s basic documents, Facebook’s values, and the statements of its leading figures suggests that some of the motivation lies in minimizing potential government regulation of social media platforms in the European Union and elsewhere. The rhetoric – and any commensurate action – also may help shore up public trust and the company’s reputation globally, after many well-founded criticisms.
The shift is also likely to have been spurred by pressure from global human rights actors, notably David Kaye, the U.N. special rapporteur on the promotion and protection of the right to freedom of opinion and expression, and leading civil society organizations, such as Access Now, ARTICLE 19, the Electronic Frontier Foundation, and the Association for Progressive Communications. Kaye has persuasively argued that companies should explicitly recognize “human rights law” as the “authoritative global standard for ensuring freedom of expression on their platforms,” adopt it as the benchmark “underlying their content moderation,” and “write that into their rules.”
Applying the Standards
How the board will apply the charter’s embrace of “human rights norms protecting free expression” as a basis for its decision-making depends on how it conceives the interrelationship, and reconciles the tensions, between such norms and Facebook’s community standards and values. In any case, the board’s jurisdiction is circumscribed by the bylaws. For instance, the board cannot review Facebook’s decisions on content that “has already been blocked, following the receipt of a valid report of illegality.” It also cannot consider content that is criminal or otherwise “unlawful in a jurisdiction with a connection to the content” and “where a board decision to allow the content on the platform could lead to criminal liability” or “adverse governmental action” for Facebook, its employees, the board’s administration, or its members.
This means that the board cannot consider cases that human rights activists care most deeply about: state censorship of content, particularly political speech, in flagrant violation of international human rights standards on freedom of expression.
It is significant that, under the charter, the board’s decision-making is to be guided by “human rights norms” in relation to only one right: freedom of expression. This emphasis on freedom of expression consolidates Facebook’s own focus on “voice and free expression” through its values and in the public statements of its senior leaders, notably Zuckerberg. Yet, although freedom of expression may be most obviously adversely impacted by content-moderation decisions, other internationally recognized human rights – including the right to life, the right to equality before the law, the right to privacy, freedom of assembly, and the right to an adequate standard of living – can be impacted too, as the 2019 review of the board indicates. The pre-eminence of freedom of expression in the charter creates a hierarchy of human rights for the board’s evaluation of content decisions, which is antithetical to an international human rights approach in principle. In practice, it also means that the board may be reluctant to prioritize cases in which rights other than freedom of expression are harmed, even if those harms are severe. At the same time, cases coming before the board will require it to examine the interplay between freedom of expression and other internationally recognized human rights and, in doing so, engage with international human rights standards on those other rights, too.
The board will need to address a number of hard questions in the coming years. What relative weight will it place on Facebook’s community standards and values, which encompass vague and imprecise notions such as “newsworthiness,” as against international human rights standards protecting freedom of expression, under which such notions would be considered problematic?
Another question is how the board will make decisions when there is an inconsistency between Facebook’s content policies and values, and international human rights standards on, say, hate speech and incitement to hostility, discrimination or violence. Will the board rely on international human rights law, specifically under Article 19 of the ICCPR, to overrule Facebook’s content-moderation decisions, even when they appear consistent with Facebook’s community standards? Will the board end up assessing whether Facebook’s content policies, particularly the community standards and values, meet the three-part test of legality, legitimacy, and necessity and proportionality, under Article 19 of the ICCPR? How much weight will the board give to Facebook’s freedom to build the “global community” it seeks?
Cumulative Effect?
Overall, what will be the cumulative effect on the company’s content policies and values of the board’s human-rights-based decisions and policy advisory statements? Will the board or Facebook itself seek changes to the bylaws in order to resolve emerging tensions? And to what extent will the board’s decisions and advisory statements, which will be published, impact the advocacy of human rights activists, the elaboration and interpretation of international human rights law by U.N. human rights bodies, and the judicial rulings of regional human rights courts and national courts? To what extent will the board shape international normative developments and discourse on freedom of expression, notwithstanding the fact that it is a self-regulatory body established by a private company?
What does seem clear is that international human rights standards have the potential to exert a critical influence on the board’s review of content-moderation decisions in areas of tension with Facebook’s content policies and values. Given that the board’s prior decisions will have “precedential value,” its early decisions addressing this tension will be crucial. The possibility that the board’s final decisions may include a policy advisory statement that “will be taken into consideration by Facebook to guide its future policy development” sets a clear pathway for international human rights standards to make a systemic and long-term difference, even within the jurisdictional limits mentioned above.
Yet, just as a human rights approach does not present a panacea to the challenges of content moderation generally, international standards on freedom of expression should not be assumed to offer straightforward solutions for the board’s review of cases. Rather, these standards will increase the complexity, and also the dynamism, of the board’s decision-making. Global human rights norms – from binding treaty provisions to soft-law recommendations of international human rights bodies – are diverse, nuanced, evolving, sometimes inconsistent, and contested, especially in the area of freedom of expression. The board will therefore need to gauge the normative value of the range of international human rights sources and make choices when there are tensions between them, such as between Article 20 of the ICCPR and Article 4 of the International Convention on the Elimination of All Forms of Racial Discrimination. The board also will need to determine how much relative weight to accord human rights norms deriving from regional systems, particularly regional courts.
The board’s review of Facebook’s content-moderation decisions should, at minimum, be guided by the U.N. Human Rights Committee’s jurisprudence and authoritative interpretation of Article 19 of the ICCPR, as well as key touchstone texts such as the Rabat Plan of Action and the detailed and relevant recommendations of the Special Rapporteur on freedom of opinion and expression. The board’s commitment to freedom of expression under international human rights norms should therefore be tested on the basis not only of how it interprets Facebook’s community standards and values in light of such norms, but also of how far it is willing to engage with the granularity of those norms in its review of cases.