Cross-published at Tech Policy Press.
Social media platforms’ recent decisions to reinstate former President Donald Trump’s accounts were based on empirical claims about the threat landscape for political violence in the United States. These are the kinds of assessments the companies could apply in other countries as well. Their evaluation of the threat in the United States may face its first test in the days and weeks ahead. We asked several experts to evaluate the companies’ assessments of the threat of political violence.
On Saturday, the former U.S. president indicated in a post on his social media network, Truth Social, that he anticipates he will be arrested on charges stemming from Manhattan District Attorney Alvin Bragg’s investigation into hush money payments to adult film star Stormy Daniels. Trump urged his followers to protest and “take our nation back!”
The timing of any potential indictment is unknown, but reports suggest a grand jury could decide as early as this week. So far, Trump’s call for protest has drawn only small crowds near his Mar-a-Lago residence in Florida and at the Manhattan Criminal Court in New York, though whether his arrest will spur larger crowds remains to be seen. On fringe sites such as The Donald, Gab and 4chan, there is talk of “civil war” and in some instances threats of violence, though some far-right activists are apparently too concerned about a supposed law enforcement “trap” to demonstrate.
Nevertheless, Trump’s call for protests “echoed his rhetoric before his supporters stormed the U.S. Capitol on Jan. 6, 2021,” reported The Washington Post. Multiple analyses, including that of the House Select Committee that investigated the January 6 attack on the Capitol, noted the importance of Trump’s appeals on social media to summon the crowds that day, and to propagate the false claims that motivated many to violence.
In recent weeks, Facebook and YouTube reinstated Trump’s accounts, citing a reduced risk of political violence; that risk had served as part of the rationale for their decisions to suspend him following the events of Jan. 6, 2021.
“We carefully evaluated the continued risk of real-world violence, balancing that with the importance of preserving the opportunity for voters to hear equally from major national candidates in the run up to an election,” said YouTube vice president of public policy Leslie Miller last Friday, a day before Trump’s latest call for protests.
“To assess whether the serious risk to public safety that existed in January 2021 has sufficiently receded, we have evaluated the current environment according to our Crisis Policy Protocol, which included looking at the conduct of the US 2022 midterm elections, and expert assessments on the current security environment,” wrote Meta president of global affairs Nick Clegg in January. “Our determination is that the risk has sufficiently receded,” and thus Trump was reinstated on Facebook.
(Shortly after acquiring the platform in November last year, Elon Musk reinstated Trump’s Twitter account after running a Twitter poll.)
If Trump is indicted, the criminal process that follows may represent the first true test of the platforms’ threat assessments. And that test may have implications beyond the narrow question of whether it was prudent to reinstate Trump’s accounts. It may indicate whether the platforms are prepared to take swift action against future demagogues, in the U.S. and abroad, who use their accounts to incite violence or propagate false claims about the result of an election.
To understand whether this more relaxed posture is consistent with independent analyses of domestic extremism and the potential for civil unrest, we put the following question to seven experts:
Is your assessment of the current threat of domestic extremist violence related to Donald Trump congruent with the assessment of these social media platforms?
Below, find responses from:
- Jacob Glick: Glick is Policy Counsel with the Institute for Constitutional Advocacy and Protection at the Georgetown University Law Center. He previously served as Investigative Counsel on the House Select Committee to Investigate the January 6th Attack on the U.S. Capitol, where he was a lead counsel on the Committee’s investigations into domestic extremism and social media’s role in the attempted insurrection.
- Donell Harvin, DrPH: Harvin is on the faculty at Georgetown University, where he teaches on the subjects of homeland security and terrorism. He is the former Executive Director for the Washington, DC Fusion Intelligence Center and oversaw it during the insurrection on January 6th. He met with and testified before the House Select Committee investigating January 6 on several occasions.
- Jared Holt: Holt is a Senior Research Manager at the Institute for Strategic Dialogue, working on topics of hate and extremism in the United States. Prior to joining ISD, he worked at the Atlantic Council’s DFRLab, Right Wing Watch and Media Matters for America.
- Tom Joscelyn: Joscelyn was a senior professional staff member on the House Select Committee to Investigate the January 6th Attack on the U.S. Capitol and has testified before Congress on more than 20 occasions.
- Mary B. McCord: McCord is Executive Director of the Institute for Constitutional Advocacy and Protection (ICAP) and a Visiting Professor of Law at Georgetown University Law Center. She served as legal counsel to the U.S. House of Representatives Task Force Capitol Security Review appointed by Speaker Nancy Pelosi after the January 6 attack.
- Candace Rondeaux: Rondeaux is director of the Future Frontlines program at New America, a professor of practice at the School of Politics and Global Studies, a senior fellow with the Center on the Future of War at Arizona State University, and the author of an investigative report into the role of alt-tech platforms such as Parler in the attack on the U.S. Capitol.
- Peter Simi: Simi is a Professor of Sociology at Chapman University. He has studied extremist groups and violence for the past 20 years, is coauthor of American Swastika: Inside the White Power Movement’s Hidden Spaces of Hate, and frequently serves as an expert legal consultant on criminal cases related to political extremism.
* * *
Jacob Glick
The Select Committee’s evidence showcased the crucial importance of mainstream social media networks in President Trump’s attempt to incite his supporters and topple American democracy. In deposition after deposition, witnesses described how they learned about the President’s summons to “be there, will be wild” as it circulated on Twitter and Facebook, and how it led many of them to decide to travel to D.C. for January 6th. We also collected testimony from employees inside Twitter and Meta that illustrated how these companies were blindsided by Trump’s willingness to embrace political violence and extremism. Ahead of January 6th, they hesitated to act against his brazenly authoritarian conduct and gave him an extraordinary amount of leeway as he railed against the results of the 2020 election.
By refusing to decisively confront pro-Trump extremism, these companies helped to enable the insurrection at the U.S. Capitol, and only belatedly acted to ban his accounts once the damage had already been done. Now, as Trump calls for protests ahead of his expected indictment, it’s clear that he is once again preparing to leverage his social media megaphone to incite his most fringe supporters. Over the weekend, his initial post echoed the bellicose language he deployed prior to the Capitol attack. He also made his first post on Facebook since his account was reactivated, in a clear signal that he plans to take advantage of Meta’s decision to allow him back on the platform.
This is the dangerous – and entirely predictable – result of the decision to re-platform Trump. Since the insurrection, the former president has only tightened his embrace of the Big Lie, violent conspiracies like QAnon, and even political violence itself. The evidence for this should have been plainly recognizable to major social media companies. Over the past year, we’ve seen Trump incite an attack against the FBI over its search of Mar-a-Lago, dismiss a brutal attack on Paul Pelosi that was fueled by the Big Lie, and amplify a message calling for his supporters to be “locked and loaded” ahead of the 2024 election. His verbal attacks on the LGBTQ+ community also illustrate his enduring symbiosis with violent extremist groups like the Proud Boys. All of this should make it obvious that Trump remains aware of his ability to rally his supporters to engage in intimidation and violence when it suits his political needs.
Despite these clear signals, major social media companies have decided to act as if the threat has passed. This places all Americans at great risk, despite these companies’ promises to keep Trump in check this time around. There is no reason to believe that the political considerations that convinced Meta, Twitter, and other companies to tiptoe around Trump in 2020 will be any different now, as he attempts to re-energize his followers with a sense of conspiracy and grievance. In failing to learn the lessons of January 6th, these companies have paved the way for Trump to launch another, even more embittered assault on our system of democratic self-government. Let’s hope that American democracy can survive their mistake.
Donell Harvin
The current threat of extremist violence associated with Trump is incongruent with the assessment of the social media platforms, but these companies find themselves in a complicated situation. Several important factors must be considered in discussing how social media companies engage in content moderation of the former President:
- The assessment that went into the decision to re-platform Trump likely unfolded over an extended period, and those engaged in the process could not have been expected to predict the recent events involving the former President. The question is: are they committed to reevaluating their decision should the need arise, and have they developed a fair and transparent mechanism to deplatform him for future incidents of incitement? Multiple studies have shown that deplatforming those who spread hate speech and violent rhetoric is highly effective in decreasing its spread.
- Trump supporters and those on the right often decry their deplatforming as a violation of “freedom of speech”; however, the First Amendment does not apply to these social media companies. Private entities can create user agreements and remove users and their posts without running afoul of the Constitution. Yet, while the companies may have legal, moral and ethical grounds for deplatforming, they may assess that doing so is inconsistent with their business model. Twitter has made the decision to replatform individuals, including the former president, who spread mis- and disinformation and espouse hateful and other unsavory views online. Since Elon Musk took over the platform, there has been a sharp rise in antisemitic, anti-minority, misogynistic, homophobic and anti-LGBTQ+ rhetoric.
- Trump’s latest calls for protests were made on his own platform, Truth Social, and not posted on his official accounts on other social media platforms. Social media companies would be hard-pressed to deplatform a national figure for views expressed on another platform, unless there is a clear violation of their own terms of service. This makes it difficult for social media companies to take action, while also providing them cover for failing to do so.
- The ability of social media companies to accurately assess the current extremist threat environment is questionable, and doing so is not necessarily their responsibility. The government is tasked with homeland security, and considering that multiple federal intelligence and law enforcement entities failed to recognize the threat that Trump’s supporters posed in the lead-up to January 6th, it is unreasonable to expect private companies to assume that responsibility or to be more successful at threat analysis. That said, social media companies play an outsized role in online extremist radicalization and should be held accountable for the role that their algorithms and lax content moderation play in the explosion of online violent extremism in this country.
- Lastly, OSINT (open-source intelligence) has become less reliable for detecting violent actors or predicting widespread violence. OSINT entails the collection and analysis of online content to determine whether individuals or groups pose a threat. That collection and analysis is resource intensive and, when performed by the government, fraught with legitimate civil rights and civil liberties concerns. Since January 6th, many domestic extremists and potential violent lone actors have abandoned (or been deplatformed from) the sites from which OSINT is routinely gleaned. These malign actors now share violent ideologies, rhetoric, memes and conspiracies on platforms and in encrypted chat rooms that perform little to no moderation, such as Reddit, 4chan, 8kun and online video games. Detecting violent intent through “leakage” from online actors across multiple platforms has become a daunting enterprise for the government, and it is not the responsibility of social media companies to police sites other than those they control.
Hate is a profitable enterprise in the U.S., and the reality is that the public should not expect social media companies to accurately assess and respond to evolving threats, especially when that response is inconsistent with their financial interests.
Jared Holt
The landscape around domestic extremist violence has changed in major ways since the Capitol riot and the legal, social, and political fallout that fell upon far-right groups supportive of Trump. There are also valid questions as to whether Trump can still wield influence over the spectrum of far-right movements the way he did in 2020, something I would argue he cannot, at least not to the degree he once did. Deplatforming Trump certainly played some role in those shifts, though I don’t know that kicking Trump off big platforms did a whole lot to actually change the trajectory of extremist and political violence in the United States. Banning Trump and the movements most visible on January 6 probably disrupted those organizing spaces enough to prevent further wreckage, but extremist movements adapted and overcame those hurdles, as they always do. It’s a fluid problem.
For Meta and Google, I think what ultimately matters here is Trump’s behavior, which I’d argue hasn’t changed at all since he lost the 2020 election. (I’m not going to pretend Elon Musk is interested in content moderation policy.) Trump is living his own Groundhog Day, waking up every morning and stirring up his most loyal followers with election denialism, hate, and conspiracy theories. Meta and Google might believe the broader cultural conditions have changed, but I can’t imagine any coherent argument that Trump will behave better once he starts using the platforms again. Trump’s behavior is a crucial part of assessing the risk here, especially considering he is the widely presumed front-runner for the Republican Party’s 2024 presidential nomination and that whatever influence he may have lost is theoretically still up for grabs in the years ahead.
Tom Joscelyn
On Dec. 19, 2020, then-President Trump announced via Twitter that there would be a “Big protest in D.C. on January 6th.” He added: “Be there, will be wild!” As demonstrated in the January 6th Select Committee’s hearings and final report, right-wing extremists from around the country read this tweet as a call to arms. Within hours, they began planning for violence on January 6th. Within days, the Proud Boys and others made plans to storm the U.S. Capitol. There is no material dispute over these facts. For the first time in American history, there was no peaceful transfer of power on January 6, 2021. Trump’s incendiary use of social media caused the violence we witnessed. So we should not be surprised if his social media posts incite violence once again.
Mary McCord
It’s hard to comprehend what the social media platforms were considering when they determined that the risk to public safety from Trump’s presence on their platforms had receded. Knowing his history of calling on his base whenever he feels threatened, and knowing he is the subject of multiple ongoing criminal investigations, one of which he has already used to publicly put a target on the backs of federal law enforcement, DOJ officials, and judges, the social media platforms had more than enough reason to continue their suspensions of Trump. His recent escalating calls to “TAKE OUR NATION BACK!” and “PROTEST, PROTEST, PROTEST!!!”, along with veiled threats against the Manhattan District Attorney, affirm that. The platforms should explain now how they will treat posts like those he has made on Truth Social over the last several days.
Candace Rondeaux
It appears America is facing another déjà vu moment as Donald Trump once again whistles his dogs onto the streets, and tech platform companies are set for yet another rude awakening. It’s clear the corrective lies with Congress, but few members are likely to be motivated to act in the run-up to the 2024 elections.
Peter Simi
No, these social media platforms once again seem to be putting their bottom line ahead of public safety and democracy. Their assessments lack credibility and, of course, transparency, so there is no way for experts or anyone else to evaluate how these companies made their determinations. What is clear about the threat landscape, however, is that threats against public officials are at all-time highs, and many of those threats are communicated on these very same platforms. The threat environment is not receding, as some social media officials claim; most experts I am aware of have grave concerns about the current threat level and about a rapidly intensifying threat landscape as we inch closer to the 2024 presidential election.