Earlier this year, Meta reinstated former President Donald Trump’s accounts on Facebook and Instagram, following his two-year suspension for praising rioters as they stormed the U.S. Capitol on Jan. 6, 2021. According to Meta, the risk to public safety from Trump had “sufficiently receded” to allow him back on its platforms, and it had introduced new guardrails to deter repeat offenses.
As with Twitter, where he was reinstated in November 2022, Trump has not yet posted on Facebook and Instagram. These platforms were key to his previous campaigns, however, and as election season heats up, it may be hard to resist their lure. Twitter under Elon Musk has moved away from robust content moderation. In contrast, Facebook, under pressure from its Oversight Board, has instituted a suite of safeguards meant to prevent a repeat of Trump’s 2020 tactics and to improve transparency about its content moderation processes. What has changed, and will it really help?
Proliferating Policies
Facebook relies on several overlapping policies to make content moderation decisions. Some (e.g., its Community Standards) have long been public, others have only recently come to light (e.g., cross-check), and still others have undergone significant changes (e.g., newsworthiness). It has also rolled out new policies in the past two years. Because this thicket of old, new, and revamped policies is scattered across separate blog posts, we have summarized the main ones relevant to the Trump reinstatement below.
Prevailing Policies
- Facebook’s Community Standards, which apply to all users, prohibit violence and incitement, hate speech, and bullying and harassment. Despite the plethora of Trump posts that seem to violate Facebook’s Community Standards (e.g., “when the looting starts, the shooting starts”), the company maintains that the former president violated the standards on only one occasion, when he fat-shamed an attendee at one of his rallies.
- Facebook’s Dangerous Individuals and Organizations policy prohibits “praise” and “support” of designated individuals and organizations and events that the company deems to be “violating.” This was the primary basis of the company’s decision to boot Trump off its platforms and was upheld by the Board on the theory that the Capitol attack was a “violating event.” The overall policy has long been criticized, including by the Oversight Board, due to the ambiguity of terms like “praise” and “support,” and the lack of clarity on how individuals or organizations are deemed to be “dangerous.”
- The Board’s decision on Trump’s suspension brought to light Facebook’s “cross-check” system, which diverts posts by high-reach users from the company’s normal content moderation system and shuttles them over to a team of senior officials. While it may make sense to have such a special system, it can result in overly deferential treatment for users who drive engagement on the platform and extended delays in removing posts (on average, more than five days). In response to the Oversight Board’s recommendations, Meta recently committed to taking immediate action on “potentially severely violating” content and to reducing the program’s backlog, but rejected several other important recommendations.
Revamped and New Policies
- Facebook’s “newsworthiness” exemption previously presumed that a public figure’s speech is inherently of public interest. It is now a balancing test that asks whether the public interest value of leaving content up outweighs the risk of harm in doing so, an approach the Oversight Board recently criticized because it is “vague and leaves significant discretion.”
- In June 2021, Meta issued a policy on public figures’ accounts during civil unrest, which recognized that, because of the influence such figures exercise, standard restrictions may be insufficient. For a public figure who violates its policies “in ways that incite or celebrate ongoing violent events or civil unrest,” Meta specified that it could restrict the account for up to two years.
- The 2021 policy promised that Meta would conduct a public safety risk assessment with experts (weighing factors such as instances of violence, restrictions on peaceful assembly, and other markers of global or civil unrest) to decide whether to lift or extend the restriction. In August 2022, the company announced a new crisis protocol to weigh the “risks of imminent harm both on and off of our platform,” which it used in letting Trump back on Facebook. Although the protocol is not public and it is not clear exactly which factors Meta will consider, they may resemble those listed in its civil unrest policy. In Trump’s case, they included “the conduct of the U.S. 2022 midterm elections” and “expert assessments on the current security environment.”
- Finally, under another new policy, when a public figure is reinstated, Meta may now impose penalties for content that does not violate its Community Standards “but that contributes to the sort of risk that led to the public figure’s initial suspension.” These penalties are largely similar to those that could apply for violations of Meta’s “more severe policies” under its Community Standards.
There are three main takeaways from this web of overlapping policies. First, the Oversight Board’s scrutiny and the bright spotlight on social media companies generally have obliged Meta to provide a fair amount of transparency about its processes. The company’s cross-check system, for example, was not public knowledge until it surfaced in the Board’s review of cases.
Second, many of Meta’s changes are procedurally oriented, seemingly designed to address the legality principle of the international human rights law framework the Oversight Board typically uses to evaluate Facebook’s content moderation decisions. There is no doubt that the Board has consistently, and rightly, pushed the company to take a rules-based approach (e.g., taking Facebook to task for imposing an indefinite suspension on Trump when its rules included no provision for such a penalty). Meta’s new policies also nod to the framework’s necessity and proportionality principles by articulating a sliding scale of penalties.
Third, despite all the new policies and the Oversight Board’s push for clearer and more accessible rules, Meta has just as much discretion as ever over how to respond, and in some cases has granted itself latitude to act even when no substantive rules are violated.
Content Moderation Policies in Practice
Imagine, if you will, a scenario in which Biden and Trump are again running against each other in 2024. As he did in 2020, and continues to do on his Truth Social platform, Trump shares a post casting doubt on the fairness of the upcoming election. Even if the post did not violate Meta’s Community Standards, under its new guardrails for public figures returning from a suspension, the company could limit the reach of Trump’s posts because they relate to the reason for his initial suspension. The same would be true if he promoted QAnon content. If Trump were undeterred and continued such posts, Meta could go further, restricting access to its advertising tools and even disabling his account.
All these decisions are discretionary and ultimately depend on how the company weighs the risks posed by Trump. The fact that they are untethered from Facebook’s Community Standards creates an additional layer of uncertainty about the basis for the decisions, although the risk of abuse is somewhat mitigated by the required link to past transgressions. In the case of Trump, his history of encouraging political violence may lead Meta to respond more forcefully and quickly than it did in the 2020 election season, at least if he continues to rely on the same narrative.
If Trump were to move away from the rigged election/QAnon narrative but violate the company’s policies in other ways “that incite or celebrate ongoing violent events or civil unrest,” Meta’s rubric for public figures in times of civil unrest would come into play. In deciding whether to impose penalties under that framework, the company would evaluate (1) the severity of the violation and the person’s history of violations; (2) their “potential influence over, and relationship to, the individuals engaged in violence”; and (3) “the severity of the violence and any related physical harm.”
But it is unclear how this content will be reviewed, particularly considering Meta’s cross-check policy. The operation of cross-check vis-à-vis Trump seems an obvious failure, though Meta’s new commitment to taking immediate action on “severely violating” content may alleviate some issues. And of course, this all hinges on Meta deciding that a particular situation amounts to “civil unrest” and that the content does not qualify for its “newsworthiness” exemption.
It is worth considering other applications of this “civil unrest” model. Facebook has been taken to task for failing to act against senior military officials in Myanmar who spread hate speech and incited violence against the Rohingya. The “civil unrest” rubric could be very useful in those types of situations, but it also needs to be accompanied by the allocation of sufficient resources to provide decision-makers at Meta with context and language expertise, a recommendation Meta recently committed to implementing for its cross-check policy.
What about contexts such as the summer of racial justice protests following the killing of George Floyd, which at times involved property damage? The Trump administration painted those protests as a threat to national security requiring the deployment of homeland security officers and the activation of counterterrorism measures. Would Meta consider itself entitled to shut down the accounts of protest leaders on the theory that they were celebrating “civil unrest”? Given that civil rights groups have long complained about asymmetries in the company’s enforcement that have disadvantaged minority communities, the scenario is one worth considering.
Ultimately, Meta, like other social media platforms, has struggled to articulate clear and accessible policies surrounding content moderation that are sufficiently flexible to respond to rapidly evolving threats. Its new and rejiggered policies (like the old ones) leave the company with copious discretion. Their efficacy will depend on how they are enforced. And on that score, Meta’s record leaves much to be desired.