Last month, Montana became the first U.S. state to pass a bill banning TikTok from operating within its borders. If Governor Greg Gianforte signs some version of the bill into law, it will become the first statewide ban in the country to take direct aim at the popular social media app, which various U.S. government officials have warned poses a serious national security threat. But while Montana may be the first to act, significant gaps remain in the public debate surrounding both the nature of the threat that TikTok presents and the constitutional questions that trying to regulate it might create.
On the security side, state and federal lawmakers have spoken with great concern about the seriousness of the threat posed by the Chinese-owned app. But they have remained unfortunately general in describing the exact nature of their concerns – officials often cite one or more worries about the proliferation of disinformation, the compromise of personal data, or threats to U.S. national security more broadly – and have done little to clarify how exactly the dangers TikTok poses distinguish it from the many other online platforms and data collection operations that channel disinformation and vast quantities of similar data to an open market accessible to Beijing or anyone else.
On the speech side, civil liberties advocates and scholars have, in contrast, been remarkably clear: “banning TikTok” would violate the First Amendment (see here, here, and here). Yet given the variety of forms that legislative or executive action against the platform might eventually take, the constitutional question is quite a bit more complicated than it initially appears or is currently acknowledged. The outcome of any court challenge to a ban will depend in key part on the factual strength of the government’s claims. As it stands, the failure of the current public debate to engage with this complex reality serves the interests of neither national security nor freedom of expression. This piece explains why.
The First Amendment Question Depends on Whose Speech is Regulated and Why
Start with the First Amendment. From the text of the Amendment alone, it is easy to imagine that it categorically bans any government rule that even arguably burdens “the freedom of speech.” But that is not how the First Amendment works. The speech universe has long been divided into “protected” and “unprotected” or “less protected” forms of speech. The Supreme Court has said that the government only needs to provide a modest justification when it regulates less protected forms of speech (such as defamation, incitement of violence, and commercial fraud). Even within the realm of protected speech, regulations that aim merely at the time, place, or manner of speaking – rather than the speech’s content – have regularly passed First Amendment muster, so long as the government can show that its interest in regulating the speech is significant, that the regulation is no more restrictive than necessary, and that a potential speaker has ample opportunity to convey the same message at some other time or place. In short, the availability of First Amendment protection may greatly depend on whose speech would be regulated, why, and how.
Three Models for Thinking About TikTok and the First Amendment
To understand how the First Amendment applies to TikTok, it might help to consider three theories about whose free speech rights are implicated: (1) TikTok’s rights as a platform; (2) the rights of TikTok’s U.S. users to speak on a major social media platform; and (3) the rights of TikTok’s U.S. users to access content available on the platform.
Option 1: TikTok’s Rights as a Platform
As the Supreme Court has made clear in many contexts, corporations have speech rights just like (or almost like) individuals. It’s easy to imagine TikTok’s lawyers arguing that their client has the same right to speak in the United States as the New York Times, Verizon, or any other U.S. company. But two important issues make this claim problematic. The first is that foreign individuals and corporations outside the United States may not have any cognizable rights under the First Amendment. It is in part for that reason that the Court has generally treated cases involving the restriction of foreign speech from outside the United States as implicating the First Amendment rights of U.S. listeners, rather than foreign speakers (more on that below).
Foreign nationals inside the United States of course have all kinds of constitutional rights, and one might argue that to the extent TikTok is operating within the territorial United States, it should enjoy the same First Amendment protections that any other U.S. publisher or speaker enjoys. But that brings us to the second issue: whether social media apps are indeed speakers or publishers at all. That debate is at the heart of the giant, looming question that overhangs all social media regulation in the United States at the moment – namely, what is the status of social media companies for First Amendment purposes? The Supreme Court hasn’t weighed in yet, but if it treats social media companies as publishers or speakers, they may well enjoy roughly the same First Amendment rights available to other corporate speakers in the United States. On the other hand, if the Court considers them more like, say, common carriers or conduits – an equally live possibility – then the government would have much more room to regulate. Or, the Court could conclude, social media companies are publishers for some purposes and conduits for others. But this simply re-poses the question: what is TikTok’s status here?
Option 2: TikTok’s U.S. Users’ Right to Speak on a Major Public Social Media Platform
The doctrinal hurdles to TikTok itself asserting a First Amendment claim make it far more tempting, then, to revert to Option 2: TikTok’s U.S. users’ right to speak on a major public social media platform (of which TikTok, with more than 150 million U.S. users, is surely one). Here, the theory is far more intuitively attractive: TikTok (as Twitter once was, or as Instagram perhaps remains) is the modern-day equivalent of the town square, a traditional public forum in First Amendment terms, in which any government restriction on the content of speech must be justified by a compelling government interest and must be narrowly tailored to be the least speech-restrictive means of achieving that interest. It was some version of this town square idea that a federal district court in California embraced when it stopped the Trump administration from banning WeChat, a Chinese-owned instant messaging and social media app widely used by the Chinese-speaking and Chinese American community in the United States. But the district court’s opinion in the WeChat case offered little discussion, and few citations, to support its vague public-forum analogy. The court instead rested its willingness to temporarily suspend the effect of the proposed ban almost entirely on its factual finding that WeChat was “effectively the only means of communication” with friends and relatives in China among the community that used it – not only “because China bans other apps,” but also because, as the court repeatedly emphasized, “Chinese speakers with limited English proficiency” have no other options. The situation for TikTok’s U.S. users – including millions of teens who are equally active and adept at using closely analogous and widely popular U.S.-owned platforms like Instagram – hardly seems the same.
In any case, TikTok and platforms like it are not in any traditional sense a “public” forum, since they are not owned or maintained by the government; they are privately owned services that other courts may yet conclude are not any kind of “forum” at all, but are rather, again, private speakers or publishers, common carriers, or some combination of the two – a question whose answer has potentially conclusive implications for whether U.S. users have any “right” to speak on them at all. We don’t mean for a moment to suggest that U.S. users’ speech rights are not implicated here. A law banning TikTok could indeed limit U.S. users’ speech on something that functions much like a public forum. Montana’s law is especially vulnerable under almost any First Amendment analysis, aimed as it is at a single platform and expressly identifying the content on the platform it finds objectionable, rather than focusing solely on the manner in which the platform collects and secures data (more on that below). But where lawmakers design a ban that is not aimed squarely at a platform’s content, and where ample alternative channels for the same speech remain on other platforms, existing doctrine offers no guarantee that the First Amendment would be offended.
Option 3: TikTok’s U.S. Users’ First Amendment Right to Access TikTok Content
A third theory goes as follows: TikTok’s U.S. users’ First Amendment rights are separately burdened when they are deprived of the ability to access the content otherwise available on the platform. Indeed, because protecting a diverse “marketplace of ideas” sufficient to sustain democratic governance has long been thought to be one of the core purposes of the First Amendment, the Supreme Court has repeatedly recognized the First Amendment right of listeners to access information in that marketplace. More than half a century ago, the Court struck down a federal law barring the mail delivery of “communist propaganda” from abroad unless the intended recipient specifically asked the postal service to deliver it. The Court held that it was wrong for the government to interfere with the mail and attempt “to control the flow of ideas to the public.” This right of access was also part of the Court’s rationale in a more recent decision striking down a sweeping North Carolina law that barred convicted sex offenders from “accessing” any “commercial social networking website” with child members. In that case, the Court wrote at length about the indisputable importance of the government’s interest in passing the law in the first place, mainly to protect children from sex offenders. But a ban of “such staggering reach” – involving a law that could be read to block access to everything from Facebook to WebMD to Amazon – was not remotely tailored enough to survive any degree of constitutional scrutiny from the Court.
The Court reached the right answer in both of those right-of-access cases. But it’s easy to imagine a “TikTok ban” written much more carefully than Montana’s initial version, with a far more targeted scope. Imagine, for instance, a time, place, and manner regulation that effectively precludes U.S. users from accessing foreign-owned platforms on U.S.-based mobile devices for so long as those platforms offer insufficient safeguards to prevent, for example, the geolocation and facial recognition data of U.S. users from being shared with foreign adversaries. In that form, a “ban” starts to look far less problematic, and far more like the longstanding regulations on foreign speech that already operate in the United States. For better or worse, the First Amendment rights of U.S. listeners have never before posed an obstacle to federal restrictions on foreign involvement (through speech or otherwise) in federal elections, or even to federal regulations surrounding the distribution of “foreign political propaganda.” As it stands, U.S. copyright law already precludes a foreign broadcaster from directing copyright-infringing performances into the United States from abroad. The Court has never suggested that such restrictions run afoul of U.S. listeners’ right to access that information. If the government were actually able to demonstrate a genuine threat to U.S. national security, and if the content available on TikTok could also be accessed elsewhere, it is not obvious how a court would reconcile the speech and security interests on all sides.
The Need for Deeper Engagement
All of this uncertainty highlights why it is important for the public conversation to engage more deeply with the questions of both security and speech surrounding a TikTok ban. It is entirely right to assume in the first instance that an outright government ban of a major social media platform violates the First Amendment. There is no question that any proposed government restriction on the operation of a social media platform available in the United States raises First Amendment concerns, especially when a ban targets a single platform by name. But the issue, should it eventually make its way to court – like so many of the current questions in the online speech space – will inevitably present a novel question of constitutional law. Suggesting otherwise makes it more likely that speech advocates will be unprepared for the serious litigation battle they are sure to face if and when any TikTok-specific (or, as is even more likely, non-TikTok-specific) regulation is enacted. The illusion that an issue is settled or its resolution entirely certain likewise tends to relieve scholars of the burden of helping judges think through the genuine range of interests and issues at play. (“If the question is easy, why write an amicus brief or article about it?”)
That over-simplification also disserves security policymakers, who need to understand the full landscape of arguments and litigation risks that potential legislation or administrative action is likely to face. Blanket statements that a TikTok ban would violate the First Amendment suggest to policymakers that there is little point in setting forth in detail – for litigation, or even in their own minds – the specific, evidence-backed reasons why the government’s interest is compelling, or how that interest might most narrowly be achieved. (“If the case is a sure loser,” they may assume, “why even go to the trouble?”)
Perhaps most important, the expectation that the courts will step in to correct any constitutional failings may make lawmakers and policymakers more likely to take action they fear is constitutionally defective. Legislators get to “take the political win,” show constituents they are acting to address a problem, and then wait for the courts to sort it out. As one Senator put it in a different context, “the Court will clean it up” later. But depending on the actual terms of any federal action, it is simply unclear that the courts will.
The current information ecosystem is new, the global threats are complicated, and the facts on the ground are changing quickly. There is reason to worry about the First Amendment implications of many of the proposed online speech regulations circulating these days. But that is not the only worry. We also worry that the confidence that comes with deep expertise – whether in legal training, security experience, or technical know-how – can promote a false sense of certainty. That confidence may prove more of a hindrance than a help when trying to solve problems that require collaboration across disciplines.
The views expressed in this article are those of the authors and do not necessarily reflect the official policy or position of the U.S. Government.