On Tuesday, Meta CEO Mark Zuckerberg announced plans to “eliminate fact-checkers” across Facebook, WhatsApp, and Instagram, signaling a major shift in how the company handles misinformation. He also outlined plans to adopt a user-driven “community notes” system, inspired by Elon Musk’s approach on X. Zuckerberg stated that Meta would overhaul its approach to content moderation by removing certain guidelines and raising the threshold for removing prohibited content. Additionally, the company plans to relocate its trust and safety operations to Texas, where, as Zuckerberg put it, “there’s less concern about the bias of our teams.” Zuckerberg concluded by proclaiming that “we’re going to work with President Trump to push back on governments around the world.” Meta’s policy changes follow its $1 million donation to President-elect Donald Trump’s inaugural fund, the addition of Trump ally Dana White to its board, and its appointment of Joel Kaplan, a former Republican operative, as chief global affairs officer, signaling a deliberate strategic shift toward the incoming administration.

Meta’s new policies include reducing prohibitions on hate speech, harassment, and misinformation, effectively allowing more controversial and potentially harmful content to remain on its platforms. For instance, posts targeting marginalized groups such as transgender individuals or immigrants may no longer qualify for removal under the new, narrower criteria unless they incite direct violence. Similarly, Meta’s reduced intervention in combating gender-based hate speech and disinformation campaigns disproportionately affects women and other vulnerable populations who already face systemic biases online. These changes reflect a broader trend of deregulation, aligning with Zuckerberg’s vision of prioritizing free expression over community safety—a shift that critics warn will deepen existing inequities and exacerbate the normalization of harmful rhetoric.

Zuckerberg’s announcement deserves closer scrutiny than can be achieved simply through reading the transcript of his remarks. Actually watching his speech reveals a chilling display of rhetoric and intent. Three distinct, but interconnected, themes emerge from his vision: dismantling content moderation and fact-checking, distorting language, and, ultimately, destabilizing democracy. 

Dismantling Content Moderation and Fact-Checking

The consequences of dismantling content moderation are well-documented. U.S. users of X have begun to experience what an online platform looks like when it retrenches content moderation and relies on users to fact-check content. Yet, what has unfolded on X in recent months pales in comparison to what Zuckerberg has announced. The best parallel is not X today, but Facebook’s role in Myanmar from 2013-2017, where the absence of meaningful moderation fueled genocide against the Rohingya.

Facebook entered Myanmar against the backdrop of a society emerging from three decades of dictatorship, where there was little tradition of a free press, and where persecution of the minority Muslim population, the Rohingya, was endemic. Failing to invest in local language competency, Facebook launched (as it has done in other Majority World countries) without anything approaching meaningful content moderation systems in place. The result, predictably enough, was online incitement to genocide against the Rohingya. Facebook had been relying on its users in Myanmar to flag problematic content. As I wrote in a 2020 article recounting the period:

“Such a flagging system operates on the assumption that offensive content is created and tolerated by only a minority of an online community… Unfortunately, persecution of the Rohingya is tolerated across all parts of Burmese society. In other words, expecting Burmese users to alert Facebook to incitement against the Rohingya in Myanmar would be like relying on the majority Hutu population to call out incitement against the Tutsi in 1994 Rwanda.”

Today, the example is revealing for two reasons. First, outsourcing platform governance to users is highly problematic. In Myanmar, the platform was relying on users to flag hateful content. Moving forward, Meta is relying on users to flag false content. But in both cases, the structure of the underlying problem remains: The flagging system reproduces whatever norms exist in a community of users. To Zuckerberg and Musk (and, presumably, President-elect Trump), this is a feature. For any marginalized group in a society, it is a bug. If most users on Meta believe that a Black Lives Matter protest is the work of Antifa, few will flag posts asserting those lies as false content.

The second point to absorb from the Myanmar example is that online and offline spaces are co-constitutive. One direction of that relationship is already well appreciated: harmful online activity often migrates into offline contexts. Less acknowledged is the degree to which offline contexts simultaneously shape online activity, such that the relationship between online and offline is effectively a giant looping mechanism. This matters in the United States at this moment, because changes in content moderation policies online are playing out against an offline context that is increasingly hostile to marginalized communities.

Distorting Language

A close bedfellow of dismantling fact-checking is Zuckerberg’s turn towards distorting language and even meaning itself. In his speech, he repeatedly and consistently replaces the term “content moderation” with the word “censorship.” (See, e.g., “The problem is that the filters make mistakes, and they take down a lot of content that they shouldn’t. So, by dialing them back, we’re going to dramatically reduce the amount of censorship on our platforms.”)

Content moderation, in the social media context, refers to the (automated or human) process of reviewing content to ascertain whether it aligns with an existing set of policy standards set by the social media company and, if it doesn’t, to remove the content. It is generally understood as an imperfect process developed to stop online platforms from turning into cesspools of vile content reflecting the worst parts of human societies. Censorship refers to the suppression of speech that, when done by the government, can violate the U.S. Constitution or international human rights law.

Technically, both content moderation and censorship can do the same thing: suppress speech. But, just as two different instruments playing the same note have a different quality, so too the term content moderation has a different quality than the word censorship. Replacing “content moderation” with “censorship” degrades our understanding of both terms.

Destabilizing Democracy

The erosion of content moderation and the distortion of language are not just theoretical concerns; they strike at the heart of democratic governance. Access to accurate, reliable information is a cornerstone of democracy. It enables the public to hold leaders accountable and to make informed decisions. This is why a free press has always been an indispensable requirement of sustaining a system of democratic governance. Unless the public has access to accurate information about what its representatives are doing, and about the impact of those decisions on the society they live in, governmental accountability becomes impossible, even if free and fair elections continue to be held.

Over the past decade, the public’s reliance on online platforms for information has grown exponentially. However, the infrastructure enabling this access has increasingly consolidated in the hands of a few ultra-wealthy individuals, raising serious concerns about accountability and democratic resilience. This development has not gone unnoticed, and many actors have worked hard to develop systems of accountability to mitigate the risks inherent in this structure. (Others, like Ethan Zuckerman, argued presciently for the need to develop entirely different structures.) Yet, by and large, Big Tech has been effective in shaping the discourse in favor of accountability mechanisms that have been as much about reputation management as anything else.

The business and human rights framework, developed in response to the reality that international human rights law was established to hold states, not corporations, responsible for human rights violations, has helped improve corporate behavior in many spheres, including on social media platforms. But it is inherently limited by its non-binding nature and, in the hands of sophisticated Big Tech players, has been deployed cynically in the cases where it has been needed most.

After facing widespread condemnation in the Western media over the way its platform enabled incitement to genocide in Myanmar, Meta commissioned a human rights impact assessment, carefully framed to understand the “mistakes” and “shortcomings” of its actions in Myanmar but laser-focused on a “forward-looking analysis.” For years, this became the playbook for responding to any and all criticism: Apologize for “mistakes” and promise to do better going forward.

Communities harmed by social media have long recognized that Meta’s approach to accountability is largely performative—a public relations exercise contingent on the C-suite’s assessment of whether such actions will mitigate reputational damage that could affect profitability. Yesterday’s announcement gave a broader segment of the American public the opportunity to come to this realization as well. Fact-checking and content moderation were policies Zuckerberg championed when the political climate made the reputational—and financial—costs of inaction greater than the benefits. With the Trump administration about to return to power, that calculus has now shifted. The lesson is that users everywhere have become reliant on an information ecosystem over which they have almost no control and which is structured to avoid democratic accountability.


Image: The logos of the applications WhatsApp, Messenger, Instagram, and Facebook, belonging to the company Meta, are displayed on the screen of an iPhone in front of a Meta logo on February 03, 2022 in Paris, France. (Photo illustration by Chesnot/Getty Images)