This essay is co-published with Tech Policy Press.
The now-disbanded House Select Committee on the January 6th Attack on the U.S. Capitol has received well-deserved attention for its painstaking reconstruction of former President Donald Trump’s central role in the multifaceted and ultimately violent campaign to block the peaceful transfer of presidential power following the 2020 election. But the committee decided to leave its investigative team’s important findings about social media on the cutting room floor.
Fortunately, some of the former committee staff investigators who dug into the intersection of social media and violent extremism have taken it upon themselves to discuss their work publicly. Dean Jackson, Meghan Conroy, and Alex Newhouse co-wrote an invaluable article published last week in Just Security and Tech Policy Press, and those three were joined by investigative counsel Jacob Glick for a podcast interview posted by Tech Policy Press. In Just Security, Glick and Georgetown’s Mary McCord wrote about the committee’s findings on violent extremism. Separately, Rolling Stone reported on what it described as a leaked 120-page summary of the investigators’ findings on social media, which the committee did not publicly release.
Members of the committee’s Purple Team — one of several color-coded investigative squads that did the unglamorous labor enabling a series of dramatically choreographed hearings — have addressed a number of the deleterious effects social media companies have had on U.S. democracy. Their work, however, also points to the limits of potential government responses to those damaging effects. I’ll deal with each in turn.
I. Identifying the Problem
Top insights from Purple Team investigators include the following assessments:
- Mainstream platforms such as Meta’s Facebook and Instagram, Google’s YouTube, and Elon Musk’s Twitter are not the sole, or necessarily even the primary, cause of political extremism. What they do is add fuel to the fire. Election denialism, QAnon conspiracy-mongering, and militia-minded thuggery stem from many sources. These include political figures like Trump and the most extreme fringe members of newly elected House Speaker Kevin McCarthy’s conference; Tucker Carlson and his fellow Fox primetime personalities; podcasters like Steve Bannon and Ben Shapiro; the prolific Breitbart disinformation machine; and Gab, 4chan, and other ultra-right platforms. Complementing this motley crew, mainstream social media venues, like Twitter, “were used to broadcast conspiracies and calls for violence directly to mass audiences,” the investigators found. Others, including Facebook, “were used to plot criminal activity in exclusive chat groups.” The major platforms were at worst aiders and abettors or facilitators, but not the initiators, of the fanaticism leading to the historic attack on the Capitol.
- To their credit, these mainstream platforms have sometimes tried to enforce content moderation policies banning hate speech, violent incitement, and misleading claims about elections and public health. But during the 2020 campaign and its aftermath, political considerations led the platforms’ top management to refrain from vigorously applying those policies. Contrary to the false conservative claim that Left Coast liberal bias shapes such behavior, the Purple Team found time and again that fear of right-wing backlash caused mainstream platforms to pull their content moderation punches. Facebook, for example, didn’t enforce its policy against election delegitimization “to avoid angering the political right, regardless of how dangerous its messages might become,” Purple Team investigators found.
- Another illustration of the fear of conservative backlash relates to Musk’s “Twitter Files,” a series of CEO-orchestrated leaks of internal documents that purport to unmask left-leaning bias within the platform’s former management. Those framing the Twitter Files in this fashion — especially documents related to Twitter’s post-insurrection suspension of Trump — “have it completely backward,” the Purple Team investigators wrote. “Platforms did not hold Trump to a higher standard by removing his account after January 6th. Rather, for years they wrote rules to avoid holding him and his supporters accountable; it took an attempted coup d’etat for them to change course.”
II. In Search of Remedies
In addition to these and other revelations, the members of the Purple Team addressed potential remedies, seeking to identify what can be done about the inadequacy of platform self-governance. This effort, however, points to some sobering realities.
Presently, the platforms weigh content moderation decisions behind a wall of secrecy. These decisions include how their human content reviewers — mostly modestly paid contract employees of third-party outsourcing companies — ought to interact with the automated systems that have to handle the bulk of moderation, given the enormous scale of social media traffic. “These decisions are too important to society to be left to corporations that lack democratic legitimacy,” the investigators wrote. After all, we’re talking about judgments affecting the political speech of many millions of Americans — and billions of people, if you consider the global arena.
But the hard truth is that, at least in the United States, these decisions have to be left to the corporations in question.
As the Purple Team members argued, Congress should pass legislation requiring social media companies to disclose more data about content moderation and the spread of harmful content. With that information, researchers, policymakers, and the public at large would be able to reach better-informed conclusions about the relationship between social media and extremism. The last Congress debated worthy transparency-enhancing legislation. But those measures made little progress even when Democrats held slim majorities in both houses, and the Republican leadership in the House of Representatives will almost certainly preclude passage for at least another two years.
Even if transparency requirements were enacted — and even if they were part of a broader enhancement of the Federal Trade Commission’s consumer protection authority, as I’ve proposed elsewhere — the government’s oversight would have to be limited to procedural issues, like data disclosure and ensuring that platforms fulfill the promises they make in their terms of service.
Under the First Amendment’s well-settled prohibition of government regulation of speech — which covers “expressive conduct” such as a social media company’s content moderation — neither the FTC nor any other official body may dictate private platform policies related to speech, let alone particular content decisions. Transparency obligations may create incentives for more responsible platform self-governance, but the First Amendment prevents Congress or regulators from dictating to Facebook, YouTube, or Twitter what content they may and may not host. (There is a difference, of course, between forbidden government censorship and government communication of a point of view, such as a suggestion that a particular post may be based on false information; in the latter case, platform executives retain full authority to exercise their judgment independently. Backers of the Twitter Files have confusingly conflated the two.)
To be sure, not everyone in a position of authority has thought carefully about the First Amendment and content moderation. In separate enactments, Republican lawmakers in Texas and Florida have defied free speech principles and tried to stop social media companies from removing certain politically oriented content. A three-judge panel of the U.S. Court of Appeals for the Fifth Circuit, in a brazenly partisan ruling, upheld the Texas law, while an Eleventh Circuit panel struck down most of the Florida law. The Supreme Court is widely expected to accept appeals of these rulings to resolve the circuit split.
This essay is not the place to get into the nuances distinguishing legitimate transparency legislation from unconstitutional government attempts to control platform decisions about digital speech. Suffice it to say that if the Supreme Court clarifies this distinction, it will also remind us, at least implicitly, that we cannot rely on government to temper extremism online. That will and should remain the responsibility of the platforms themselves — and, crucially, of the rest of us who are willing to commit to action and expression that promote democracy and counter extremism, online and off.