On Wednesday, the Facebook Oversight Board (FOB) will release its decision on whether to uphold or reverse Facebook’s indefinite suspension of former President Donald Trump from the platform. The decision has been highly anticipated in the United States, not only because of the high-stakes outcome for the country’s most polarizing figure but also because it serves as a test case for private regulation of political speech. In April, the Board announced that it would extend the timeline for the decision due to the unusually high level of public interest and comment.
But the decision’s impacts will reach far beyond U.S. borders and must be understood in this broader context. Two features of the case are particularly salient. First, Trump was a government official when he was removed from the platform, but has since left office. The decision to de-platform him thus occurred when he was a head of state, but the decision to maintain his suspension comes while he is among the principal leaders of the opposition political party. Second, the decision to ban Trump was justified on the grounds that his posts were inciting violence, a situation generated in part by Facebook’s own underlying design decisions that amplified the reach of Trump’s posts for months before the platform decided to act.
The Trump decision presents a case study of the considerations faced by Facebook and other platforms as they consider moderation policies that apply across different types of users, physical locations, cultural contexts, and political systems – and as they design their systems for distributing and amplifying speech. Many expect the FOB to overturn Trump’s ban, with some preemptively decrying such a decision. But analyzing the merits of any decision – to maintain or lift the ban – requires assessing the impacts of the precedent set not only in the U.S. political system but around the world. It also requires understanding that user bans and content removals are only the tip of the content moderation iceberg – a fact that has important implications for evaluating the merits of Facebook’s own justification for its moderation policies.
This article addresses three issues we should focus on as we read tomorrow’s Oversight Board decision. First, it describes the real tensions between the rights to speak and to hear, the risks of real-world harm caused by certain speech, and the design decisions adopted by platforms to connect speakers and listeners. Second, it explains that while the impulse of both the Oversight Board and many of Facebook’s critics has been to look to human rights law or U.S. constitutional law to govern content decisions, attempting to ground decisions in international law on free expression or privacy rights, or in domestic regulation of speech, cannot resolve these tensions. Third, we must keep in mind the decision’s implications for government officials and political leaders around the world, for the listeners who access their speech, and for Facebook’s decisions about how to connect these groups in political and media environments that are very different from those in the United States.
The Decision to Ban Trump
First, some background on the case: Facebook removed two posts by then-President Donald Trump on Jan. 6, 2021. Both posts were messages to Trump’s supporters, posted during the Capitol Hill insurrection, urging rioters to go home but also repeating the lies regarding election security that prompted the attack; telling the rioters that he “love[d]” them and that they were “very special;” and calling on them to “remember this day forever!” Facebook also suspended Trump’s ability to post for 24 hours. The following day, Facebook announced that the suspension would last “indefinitely, and for at least the next two weeks.” (Twitter permanently suspended Trump’s account on Jan. 8.) Finally, on Jan. 21, Facebook announced that it would refer the now-former president’s suspension to its Oversight Board and that the suspension would remain in effect until the Board issued its decision.
In its initial statement on the decision to suspend Trump’s access, Facebook did not explicitly reference violations of its Community Standards. Instead, in announcing its initial decision to take down the two Trump posts, Facebook justified its decision on the grounds that, “on balance these posts contribute to, rather than diminish, the risk of ongoing violence” (which could be grounds for violation of its policy on violence and incitement, among others).
Similarly, Facebook justified its decision to extend the suspension of Trump’s account with a general reference to the ongoing, immediate threat of violence: “We believe the risks of allowing President Trump to continue to use our service during this period are simply too great, so we are extending the block we have placed on his Facebook and Instagram accounts indefinitely and for at least the next two weeks.” In a simultaneous post explaining the decision, Facebook CEO Mark Zuckerberg noted that the suspension would remain in effect “for at least the next two weeks until the peaceful transition of power is complete.”
The Broader Context
Facebook has previously hesitated to take down posts or suspend accounts of public figures, including heads of state, in situations with potential for deadly violence.
For example, Facebook was used to spread rampant disinformation and hate speech in Myanmar throughout intense episodes of genocidal violence in 2016-2018, including through official channels and the posts of government officials. The company was slow to take action despite widespread criticism and a U.N. investigation citing the platform’s role in stoking the genocide. Facebook finally suspended the accounts of army chief Min Aung Hlaing and several other military leaders in August 2018, nearly two years after the most intense episodes of violence. According to Facebook, this was the first time that the company had taken such action against a political or military leader. In its statement announcing the ban, the company acknowledged, “we were too slow to act.”
The company has likewise been reluctant to police, or incapable of policing, hate speech and incitement in Ethiopia, where ethnic tensions have periodically erupted into violence fueled by Facebook posts, including posts from political officials and opposition leaders. In India, Facebook has been used to spread anti-Muslim rhetoric and allegations by leading politicians, and Facebook workers allege that local Facebook officials specifically intervened to prevent the platform from banning Indian leaders engaged in hate speech.
Meanwhile, Facebook has been accused of acquiescing to pressure from governments to block access to the accounts or posts of opposition leaders. As early as 2014, the company blocked Russians’ access to an event page for a rally in support of opposition leader Alexei Navalny at the request of Russian prosecutors. Last year, Facebook blocked event pages of anti-lockdown rallies in the United States that violated government stay-at-home orders to combat COVID-19.
In other cases, Facebook has been criticized for failing to resist government cutoffs of the service designed to silence protestors or political opponents – although some of these decisions involved potential threats of real-world harm, and unique considerations come into play with each piece of content. Most recently, the Oversight Board swiftly overturned Facebook’s original decision to remove a video criticizing India’s Prime Minister Narendra Modi (though Facebook restored the content before the FOB issued its decision, attributing the takedown to an error). The Board decision came after public outcry over overt pressure from the Indian government to take down content criticizing the government’s coronavirus response, and a brief period in which “#ResignModi” was blocked in India (according to Facebook, the latter was also a mistake, not the result of government pressure). The Board noted in its decision on the video that “Facebook also declined to provide specific answers to the Board’s questions regarding possible communications from Indian authorities to restrict … content critical of the government.” Some governments have formalized systems for requesting that Facebook take down content they deem troublesome by developing Internet Referral Units to flag content that may violate terms of service; Facebook honors an increasing proportion of these requests.
Free Speech (on a Private Platform)
Free speech advocates have voiced concern about Facebook’s extraordinary control over public debate and urged respect for international human rights standards of free expression. Former United Nations Special Rapporteur on the right to freedom of opinion and expression David Kaye has urged social media companies to “recognize that the authoritative global standard for ensuring freedom of expression on their platforms is human rights law, not the varying laws of States or their own private interests, and they should re-evaluate their content standards accordingly.” Others have pointed out that speech and expression (and related rights to listen) are not the only rights at stake: rights to privacy, protections against hate speech, and safety interests are also implicated by content moderation policies. Some have recommended that the Oversight Board prioritize the protection of democracy as a core value distinct from free expression.
In its formation of the FOB, Facebook heeded the call to incorporate human rights into its structure, listing international human rights law as a third “body of law” – in addition to its self-generated “Community Standards” (essentially content moderation policies) and “values” – that should guide the Board’s decisions.
Other free speech advocates have suggested alternative sources of law to guide moderation of political speech. Citing U.S. First Amendment jurisprudence, the Knight Institute explained in its public comment on Facebook’s decision to block Trump’s account access:
Because of Facebook’s scale, Facebook’s decisions about which speech and speakers to allow on its platform determine not just which voices and ideas get heard on the platform, but which voices and ideas get heard at all. Against this background, Facebook should adopt a heavy presumption in favor of leaving political speech up, in keeping with the principle that ‘debate on public issues should be uninhibited, robust, and wide-open.’ New York Times v. Sullivan, 376 U.S. 254, 270 (1964)
But as many (including Facebook) have pointed out, neither body of free speech law, international or domestic, fits comfortably with private companies’ choices to remove content or to de-platform particular users. The International Covenant on Civil and Political Rights (ICCPR) enshrines a broad right of freedom of expression including the “freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice.” Moreover, restrictions on this right “shall only be such as are provided by law and are necessary … For respect of the rights or reputations of others; … For the protection of national security or of public order (ordre public), or of public health or morals.” But international human rights law such as the ICCPR is designed to bind States; it is likewise made and interpreted by States. It is not designed for application to the content regulation decisions of a private company. Likewise, the U.S. Constitution’s First Amendment protects the free expression rights of persons against unjustified government interference and is not applicable to companies like Facebook.
This mismatch between the sources of law and their intended application has two potential implications. First, and most obviously, there is the risk that human rights law may not be adequate or suited to grapple with the distinct problems of private restrictions on speech, leading to normatively undesirable or suboptimal outcomes in private content moderation. There is also a risk that the high-profile and impactful decisions taken by private companies on content restrictions – decisions that they publicly ground in human rights law, but which are also necessarily informed by profit maximization and other private interests – will in turn shape the interpretation and implementation of international human rights law.
As Sejal Parmar asked when Facebook announced the creation of its Oversight Board:
[T]o what extent will the board’s decisions and advisory statements, which will be published, impact the advocacy of human rights activists, the elaboration and interpretation of international human rights law by UN human rights bodies, and the judicial rulings of regional human rights courts and national courts? To what extent will the board shape international normative developments and discourse on freedom of expression, notwithstanding the fact that it is a self-regulatory body established by a private company?
Within evolving norms of free expression, the decision reached by the Board on Trump’s de-platforming may create a precedent not only for private content moderation by other platforms, but also for the norms of free expression applied by States to their citizens.
Content Regulation by Another Name
Perhaps the most severe mismatch between international human rights law as applied to States and private regulation of speech by tech giants is that the vast majority of content “regulation” on private platforms happens through choices that are not only non-transparent but also largely non-human. On all dominant social media platforms, algorithmic formulas decide to a large extent what content is seen, by whom, and in what context. This is true not only in takedown decisions – where automated removals of content have led to high-profile “errors” that the Oversight Board has urged Facebook to correct – but also in shaping the overall information space created on these platforms. Determining what is seen and by whom has a far greater impact on the ability of particular users to speak and be heard than the relatively limited instances of removals or bans, a fact that has led to endless articles advising content creators on how to “beat,” “outsmart,” or “make [Facebook’s algorithm] work for you.”
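To make the point concrete, consider a deliberately simplified sketch of ranking-driven distribution. Everything in it – the class, the scoring weights, the cutoff – is a hypothetical illustration of the general technique, not a description of Facebook’s actual systems:

```python
# Hypothetical sketch of engagement-driven feed ranking. All names, weights,
# and scores are illustrative assumptions, not Facebook's actual system.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    predicted_engagement: float   # modeled likelihood of clicks/shares/comments
    predicted_affinity: float     # modeled match to this particular user's interests


def rank_feed(candidates: list[Post], top_k: int = 10) -> list[Post]:
    """Order candidate posts by a composite score and keep only the top few.

    Posts below the cutoff are never "removed" -- they simply never reach
    this user, which is why ranking choices, not takedowns, do most of the
    work of deciding who gets heard.
    """
    def score(p: Post) -> float:
        # The weighting of engagement versus affinity is a design choice the
        # platform controls; changing these weights changes what speech gets reach.
        return 0.7 * p.predicted_engagement + 0.3 * p.predicted_affinity

    return sorted(candidates, key=score, reverse=True)[:top_k]
```

In a system of this shape, no human reviews the speech that is quietly filtered out; the scoring function itself determines the audience.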
In this context, the restriction of speech that occurs when Facebook or other platforms remove a particular post or ban a user entirely is overshadowed by the restriction, manipulation, and control of speech made by algorithmic design decisions long before any content reaches an audience. This fact holds particular salience in the context of the Trump ban. As the Knight Institute notes in its comment on the case:
Trump’s statements on and off social media in the days leading up to January 6 were certainly inflammatory and dangerous, but part of what made them so dangerous is that, for months before that day, many Americans had been exposed to staggering amounts of sensational misinformation about the election on Facebook’s platform, shunted into echo chambers by Facebook’s algorithms, and insulated from counter-speech by Facebook’s architecture. (emphasis added).
Algorithmic echo chambers on social media platforms fundamentally distinguish these contexts from government regulations of speech. While governments may broadly shape information environments through licensing and infrastructure investment, they cannot (thankfully, yet) tailor the messages sent and received by citizens with anything near the degree of precision that Facebook can and does. Rules designed to prevent States from interfering with the speech of their citizens, such as Article 19 of the ICCPR, do not grapple with the impacts of such control over information architecture.
This dynamic also gives the lie to Facebook’s own justification of its historically laissez faire approach to more overt forms of content moderation. Far from a single “marketplace of ideas” where harmful or false speech would be driven out by higher quality speech – like a subpar vendor banished from a market by choosy customers – Facebook instead operates countless individualized echo chambers in which it determines precisely what speech is heard by each consumer. A customer selects from the options that actually appear in the “marketplace” – and these options are controlled by Facebook’s algorithmic design.
In a recent, striking example of the real-world political impacts of Facebook’s algorithmic design choices, the company announced prior to the verdict in the trial of Derek Chauvin, “As we have done in emergency situations in the past, we may also limit the spread of content that our systems predict is likely to violate our Community Standards in the areas of hate speech, graphic violence, and violence and incitement.” This decision preemptively increased “friction” on certain types of content that the algorithm predicted might cause harm – in other words, it limited the capacity of certain speech to reach as wide an audience as it otherwise would have under Facebook’s default policies. As Evelyn Douek put it, “Facebook … turn[ed] down the dial on toxic content for a little while. Which raises some questions: Facebook has a toxic-content dial? If so, which level is it set at on a typical day?” Facebook’s claimed “marketplace” justification for hosting harmful content is incompatible with this reality in which it exercises absolute power to “turn down” – or up – the dial on such content.
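The “dial” metaphor maps onto a simple mechanism. The sketch below is a hypothetical illustration only – the function, the classifier threshold, and the dial values are assumptions, not Facebook’s actual implementation – but it shows how a platform can suppress the reach of likely-violating content during a declared emergency without removing anything:

```python
# Hypothetical sketch of a "break-glass" friction dial: temporarily
# down-weighting content a classifier predicts is likely to violate policy.
# Names, threshold, and dial values are assumptions for illustration only.

def apply_friction(base_score: float,
                   predicted_violation_prob: float,
                   friction_dial: float = 0.0) -> float:
    """Reduce a post's ranking score in proportion to the current dial setting.

    friction_dial = 0.0 reproduces the default ranking; raising it toward 1.0
    suppresses the reach of likely-violating content without taking it down.
    """
    if predicted_violation_prob > 0.5:  # classifier flags the post as risky
        return base_score * (1.0 - friction_dial * predicted_violation_prob)
    return base_score


# During a declared "emergency," the dial is turned up; afterward, back down.
normal_reach = apply_friction(base_score=1.0, predicted_violation_prob=0.8,
                              friction_dial=0.0)
emergency_reach = apply_friction(base_score=1.0, predicted_violation_prob=0.8,
                                 friction_dial=0.9)
# normal_reach == 1.0, emergency_reach == 0.28: same post, very different audience.
```

The policy question Douek raises is simply what value the dial takes on an ordinary day, and who decides.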
This is not to say that such algorithmic organization of content is inherently malign – indeed, it is absolutely necessary to sort through the unimaginable volume of content generated each day on the internet. But if, as Zuckerberg has claimed, “the long journey towards greater progress requires confronting ideas that challenge us,” then Facebook’s current echo-chambered design is a barrier, not an asset, to that progress.
When a Government Official Speaks
Arguments about Facebook’s regulation of speech – whether through algorithmic design or content and user moderation policies – form the backdrop of the more specific debate on how Facebook should regulate the speech of government officials, politicians, and candidates, including former President Trump. In this context, Facebook has generally argued for leaving posts up – even when they may violate Facebook policies – citing citizens’ rights to hear what their leaders are saying. Indeed, in 2019, Facebook publicly committed to leaving up posts from politicians, including elected and appointed officials as well as candidates, even if they violated the company’s Community Standards. The company reversed the policy in June 2020, announcing that it would affix labels to content from politicians that violated its policies and would remove hate speech and voter suppression content, even from politicians.
The Trump case highlights the contradictory considerations raised when government officials’ political speech on Facebook violates the company’s policies. In both its referral and its original decision to suspend his account, Facebook referenced the likelihood of Trump inciting violence while also noting the right of populations to access the statements of their leaders. The official rationale for referring the case to the FOB captures this tension:
We have taken the view that in open democracies people have a right to hear what their politicians are saying — the good, the bad and the ugly — so that they can be held to account. But it has never meant that politicians can say whatever they like. They remain subject to our policies banning the use of our platform to incite violence. It is these policies that were enforced when we took the decision to suspend President Trump’s access. (emphasis added).
These two factors give rise to competing conclusions:
Position 1: Government officials’ posts should be left up – perhaps even if violative of some standards – to give their constituents access to their statements. The Knight Institute alludes to this position in its submission on the Trump case, arguing:
The heavy presumption in favor of political speech is especially important with respect to political leaders, many of whom don’t have access to the kinds of alternative media platforms that a U.S. president does. Facebook should remove political leaders’ speech only as a last resort because access to that speech is vital to the public’s ability to understand and evaluate government policy, and to the public’s ability to hold political leaders accountable for their decisions.
Position 2: Government officials’ posts should be closely monitored for compliance with Facebook’s Community Standards – or perhaps held to higher standards given their increased capacity to influence public action including potential violence – and should be taken down if close to the line. Some have recommended that Facebook gauge its moderation decisions by a completely different standard when evaluating politicians’ statements: “Preserving democratic accountability, especially free and fair elections, should be the standard by which Facebook judges the expression of the politicians that use its platform.”
The Oversight Board has implicitly recognized at least a mild version of this latter position, stating in its decision to overrule Facebook’s takedown of a post with the potential to incite violence: “The user not appearing to be a state actor or a public figure or otherwise having particular influence over the conduct of others, was also significant.” Implicitly, then, the Board might apply a higher standard to the posts of public figures, including perhaps the president of the United States, compared to ordinary users when assessing whether speech with the potential to incite violence, or undermine democratic processes, will stay up or be taken down.
A separate consideration in balancing the response to speech that violates community standards is the availability of alternative speech outlets. Facebook’s dominance, along with that of a few other tech giants such as Twitter and Google, means that individual speakers banned from these platforms may have few other options for engaging in public speech. Government officials, on the other hand, often have independent access to powerful platforms to express their ideas, making their access to Facebook less crucial for either their right to free expression or their constituents’ right to access their statements. However, as illustrated by recent statements issued by Trump since his removal from most social media sites (and since leaving government office with its attendant public platform), substitute methods of communication may not have quite the same impact, especially for an opposition leader.
Finally, the appropriate response to standard-violating speech by political leaders may depend in part on the robustness of independent media in each particular political context. In contexts where independent media is curtailed, lacks a tradition of challenging official narratives, or is dominated by state-run outlets, the public’s right to access politicians’ direct statements, including controversial ones, takes on even greater importance in assessing government actions. Yet in those same contexts, the potential for leaders’ statements to incite violence may be greater, as they are less likely to be challenged, contextualized, or fact-checked by independent media.
Assessing the Oversight Board’s Decision
In its decision on whether to uphold or overturn Facebook’s ban on Trump, the Oversight Board must grapple with these competing considerations not just with regard to Trump himself, but as applied to government officials, politicians, and opposition leaders around the world. Since the Trump ban, Facebook has taken steps to ban other political leaders and parties with the aim of preventing offline harm and incitement to violence. For example, since the Feb. 1 coup in Myanmar, Facebook has expanded its existing ban on the Myanmar military, its leaders, and any “military-controlled state and media entities… as well as ads from military-linked commercial entities.” As in the Trump ban case, Facebook cited “emergency” conditions to explain its suspension of the relevant accounts. In the Oversight Board’s decision on the Trump ban, it must grapple with the precedent that would be set by overturning an emergency account suspension in the context of an attempted insurrection – or a military coup. If it upholds the ban, on the other hand, it must consider the precedent set by explaining restrictions on free expression with reference to “emergency” conditions, as determined by Facebook.
Likewise, in evaluating the Board’s ruling, the public should consider the impact not just on a particularly polarizing U.S. political figure, but also on what platforms are available to government leaders and opposition figures and the impact of that access on citizens’ rights to hear from these leaders. Given the considerations outlined above, including availability of alternative platforms and independent media, it may be that the right of the public to hear (as opposed to the right of an individual to speak) provides a more compelling lens through which to assess the merits of content moderation policies.
However, balancing these expression rights against real-world harms requires making moral choices not dictated by either international human rights law or First Amendment jurisprudence. We must recognize that these choices are already being made, through explicit content regulation and takedowns but also through decisions about the algorithms that determine the reach of speech. Both Facebook and the Oversight Board must grapple honestly with these moral questions. They – and we – cannot duck responsibility for these choices by pointing to human rights or constitutional laws that do not answer these thorny questions.
[Editor’s note: Readers may also be interested in Rebecca Hamilton’s De-platforming Following Capitol Insurrection Highlights Global Inequities Behind Content Moderation]