Editor’s Note: This article is part of Regulating Social Media Platforms: Government, Speech, and the Law, a symposium organized by Just Security, the NYU Stern Center for Business and Human Rights, and Tech Policy Press.
Lawmakers are right to worry about platforms’ power over public discourse and democracy. But legislative responses too often seek to empower the government to set new rules for online speech. Courts have rightly held that such laws violate the First Amendment. Attempted work-arounds like using Federal Communications Commission (FCC) authority or statutory immunities to target “lawful but awful” speech have similar problems.
For some lawmakers, this constitutional barrier is a bug. For the rest of us, it is decidedly a feature. The First Amendment is meant to protect us from the short-sightedness about state power that afflicted many Democrats during the Biden administration, and Republicans in this one.
Recent Political Context
Critics on the left and right often disagree about which speech is important for democracy. Social media posts about votes wrongly added to Biden’s tally in the 2020 election, for example, may strike some Republicans as important political discourse, but constitute dangerous disinformation in the eyes of many Democrats. These disagreements frequently center on speech that, while controversial and even dangerous, is clearly protected by the First Amendment.
Democratic and Republican lawmakers have also favored different legislative models to curb threats to democracy online. Democrats often seek to make platforms take down more content, while Republicans look for ways to make them leave it up. This divide is a major reason we have seen almost no new federal platform regulations despite years of Congressional hearings and posturing.
The legislative picture is different at the state level. There, both sides’ approaches have become laws. California’s Democratic-controlled statehouse passed legislation restricting election-related deepfakes, for example. Both New York and California enacted platform transparency mandates for content like disinformation and hate speech. Republican lawmakers in Texas, by contrast, adopted a “must-carry” law requiring viewpoint-neutral content moderation — meaning a platform could only remove racist content, for example, if it also removed anti-racist content. Florida sought to protect democratic debate by restricting platforms’ ability to moderate posts about political candidates. (These have run into substantial constitutional challenges, as discussed below.)
Shared Values and Fears
I believe some key underpinnings of these laws are genuinely bipartisan—among ordinary people, if not pundits and politicians.
First, few social media users actually want to see all the scams, pro-anorexia videos, racist screeds, and other Internet dross that most platforms weed out. People may disagree with some platform speech rules, of course. But wanting to avoid at least some lawful but awful speech is bipartisan.
Second, nearly everyone has reason to fear platforms’ power to suppress speech. Speakers of every political stripe have believed themselves unfairly silenced by platforms. Platforms also act as proxies for state power in ways that are almost embarrassingly obvious. Facebook and Instagram expanded their rules against Covid disinformation during the Biden administration, for example. But once President Donald Trump was re-elected, Mark Zuckerberg insisted that his prior stance was a product of bullying by Biden officials (who publicly said mean things about him, and threatened to ask Congress to change some laws). Meta is now, he says, committed to the anti-immigrant and anti-transgender policies preferred by Trump (who threatened to put Zuckerberg in jail and said Meta’s speech rule changes were “probably” a result).
The bottom line is that most Internet users want content moderation. At the same time, anyone can find themselves on the wrong side of platforms’ evolving rules, or simply mistrust tech companies as rulemakers for speech.
First Amendment Protections and Platform Regulation
Courts have blocked “democracy-protective” laws from both parties. A majority of Supreme Court justices rejected Texas’s and Florida’s must-carry rules for social media. Legislators could not, they explained, override Facebook’s editorial choices about its newsfeeds, or tell YouTube what videos must appear on its homepage.
First Amendment challenges to Democrat-backed state laws (including the New York and California laws mentioned above) have succeeded in lower courts. Using state power to burden lawful speech online is unconstitutional, courts have said, even under laws that attempt to avoid direct speech regulation by only requiring platforms to “mitigate” speech-based harms. These cases, mostly brought by platform trade associations, have unsurprisingly produced rulings emphasizing platforms’ First Amendment rights. But the laws affect Internet users, too. By regulating platforms’ choices about content, lawmakers may violate ordinary people’s First Amendment rights.
Using Section 230 Immunities to Regulate Lawful Speech
The same First Amendment standards that prevent state lawmakers from imposing new rules for lawful online speech also pose problems for a common federal approach: stripping platforms’ immunity for disfavored speech. A 2021 bill, for example, sought to amend a provision of the Communications Decency Act known as Section 230 to eliminate platform immunity for “medical misinformation” – leaving that term to be defined by the Secretary of Health and Human Services (HHS). (Currently, that’s Robert F. Kennedy, Jr.).
Nothing in the bill purported to make publishing HHS-designated misinformation illegal in the first place, though. The proposed law would have done nothing to change platforms’ actual liability for most medical misinformation – much of which is lawful. Immunities don’t matter in court if no one actually violated any laws. For the most part, the bill just would have created a mechanism for HHS to formally designate state-disapproved legal speech and made it more expensive for platforms to litigate frivolous claims about that speech.
Using FCC Authority to Regulate Lawful Speech
The United States does have one major legal model for regulating otherwise lawful speech on privately owned communications channels: the rules administered by the FCC for media like broadcast and cable.
FCC speech rules deserve special attention for two reasons. First, the Supreme Court has applied more lenient First Amendment scrutiny in this context. It has upheld laws requiring suppression of lawful speech, and also laws requiring carriage of speech, including election-related material. This special leeway for regulation is, the Court said, justified by the scarcity of broadcast spectrum and by cable carriers’ “bottleneck” control over speech. The Court firmly declined to extend that reasoning to the Internet in its seminal Reno v. ACLU ruling. The Internet, it said, has no scarcity of avenues for speaking. The “vast democratic forums of the Internet” are thus not “subject to the type of government supervision” applied to older media. The Court could change its mind about that, though, perhaps pointing to platform consolidation as grounds for doing so. It is only a matter of time before litigants tee up the Court’s next opportunity.
The second reason to pay close attention to the FCC is the Trump administration. Its position, and that of new FCC chair Brendan Carr, is that the Commission already has authority to regulate Internet platforms by interpreting Section 230. That assertion of authority is dubious for many reasons. But the administration spelled out how it planned to use such authority back in 2020. Among other things, it would interpret Section 230 to protect platforms that remove harassing, violent, or sexual content – presumably including materials about reproductive health or LGBTQ+ identity – but to provide no such protection when platforms remove disinformation or hate speech.
FCC-watchers expect Carr to advance this agenda soon. In the meantime, he has kept busy with other priorities – like threatening to revoke NBC’s license based on Kamala Harris’s Saturday Night Live appearance; to block a CBS merger because of Harris’s appearance on 60 Minutes; and to intervene in ABC’s contracts with affiliates because the network “contributed to the erosion in public trust” by allegedly defaming President Trump. He has also formally accused platforms of “censorship” for appending fact-checking labels from NewsGuard, a company operated by a former Wall Street Journal publisher.
* * *
Protecting democracy from threats created by Internet platforms is a laudable goal. But it is not worth the cost imposed by legislative attempts so far: empowering the government to control legal speech online. Lawmakers’ attempts to impose their own top-down speech rules are particularly unwarranted given the far more promising possibilities offered by user-controlled and decentralized content moderation systems. Twenty-five years ago, the Supreme Court wrote that “Technology expands the capacity to choose; and it denies the potential of this revolution if we assume the Government is best positioned to make these choices for us.” That remains true today.