Editor’s Note: This article is part of Regulating Social Media Platforms: Government, Speech, and the Law, a symposium organized by Just Security, the NYU Stern Center for Business and Human Rights, and Tech Policy Press.
Elections have consequences.
In July 2019, the European Parliament elected Ursula von der Leyen to become president of the European Commission. Under her leadership, between 2019 and 2024, the Commission successfully shepherded through a landmark package of regulations to rein in Big Tech, among them the Digital Services Act (DSA), the Digital Markets Act (DMA), and the Artificial Intelligence Act (AI Act). At the time, many in Europe hoped that the laws would become a model for other jurisdictions along the lines of the General Data Protection Regulation (GDPR), the EU law designed to safeguard people’s information online. But the DSA is unlikely to serve as the global standard for platform governance, nor is the United States likely to import any lessons from the DSA. That was true before the most recent elections, and even more so now.
Shifting Political Winds
For over three years during the Biden administration, the United States and the EU engaged in constructive dialogue through a high-level Trade and Technology Council (TTC). Although both sides agreed on AI principles, formal discussion of other digital platform regulation was off the table.
U.S. President Joe Biden signed executive orders on AI, got major firms to embrace an AI voluntary code of conduct, and accelerated the development of generative AI, microchips, and infrastructure. The U.S. Justice Department and Federal Trade Commission filed antitrust lawsuits against major platforms, including Apple and Google, while the EU brought competition cases against the same large players under the DMA.
But no federal legislation was enacted in the United States to rein in Big Tech, although several states passed laws with opposing goals, either to require — or to curtail — online content moderation.
Then the political winds shifted. Republicans, led by House Judiciary Chairman Jim Jordan (R-OH), attacked platforms, academics, and NGOs, alleging they conspired with the Biden administration to suppress conservative voices in violation of the First Amendment. Academics and NGOs were subpoenaed and sued, chilling election disinformation research and silencing researchers and foundations.
Elections followed on both sides of the Atlantic.
Anti-EU populist parties gained seats in the EU Parliament and in many European capitals. Then last summer, a startling report on the Future of European Competitiveness by former European Central Bank President Mario Draghi warned European lawmakers that their propensity for over-regulation was thwarting development of a robust and innovative tech sector.
Von der Leyen, who won reelection in July 2024, took Draghi’s message to heart. Her new Commission pivoted sharply from the aggressive regulatory agenda of her first term and championed European competitiveness, security, and tech sovereignty. She also moved to slash burdensome regulations. Her September mission letter to Henna Virkkunen, the Commission’s new executive vice president for tech sovereignty, security and democracy, emphasized strengthening security and competitiveness; it barely mentioned DSA enforcement.
To improve European competitiveness, von der Leyen has pushed omnibus packages to streamline regulations and reduce red tape. Among the targets is the newly minted AI Act, which Virkkunen argues should be made more innovation-friendly. The DSA was not mentioned.
The Trump Effect
Meanwhile, President Donald Trump has repeatedly lashed out at the EU. He threatened tariff reprisals for EU enforcement actions against American Big Tech companies. In a Feb. 25 directive, he attacked European legislation, including the DMA and the DSA, warning that “regulations that dictate how American companies interact with consumers in the European Union” will be scrutinized by the U.S. government. The Trump administration is targeting EU and British rules that cause American companies to develop their products “in ways that undermine free speech or foster censorship,” in the White House’s words.
And in a blistering speech at the Munich Security Conference in February, Vice President JD Vance accused European leaders of suppressing free speech and censoring far-right voices. He railed against the DSA’s moderation provisions and code of practice targeting the spread of disinformation and election interference.
At home, the Trump administration immediately dismantled U.S. efforts to fight foreign election interference by firing the Cybersecurity and Infrastructure Security Agency staff that had alerted the public to foreign operations, and shuttering an FBI task force responsible for stopping foreign interference in U.S. elections. On Feb. 26, Jordan subpoenaed Alphabet, Amazon, Apple, Meta, Microsoft, Rumble, TikTok, and X Corp, seeking each company’s communications with foreign governments regarding compliance with foreign laws, regulations, and judicial orders that in his words “censor speech.”
Finally, Brendan Carr, the new chairman of the Federal Communications Commission (FCC), piled on in a Feb. 26 letter to the heads of American Big Tech companies, asserting that the DSA is positioned to “thwart freedom of speech and diversity of opinion both within the United States and worldwide.” He asked the companies to brief him on how they plan to reconcile the DSA with America’s free speech principles.
What It Means for Europe
All of this pressure coming from the Trump administration raises the possibility that the European Commission will relax its enforcement of the DSA, but so far, that hasn’t been the case. For now, major tech platforms are cooperating with the processes and timetables of the DSA, submitting their required risk assessments and mitigation reports, updates on codes of conduct and practice, and outside audits. Compliance on researcher access to data, however, remains spotty.
Although Brussels may soften some rules to improve European competitiveness, it is unlikely to touch the DSA. Improving online social media safety, especially for children, still enjoys public support, as does protecting election integrity across Europe. Plus, most DSA obligations fall on the roughly 25 designated very large platforms, most of them American, and not on smaller European firms.
Time will tell whether the Trump administration’s actions on Ukraine or Vance’s private, disparaging remarks about Europe, which were made public by the Signalgate scandal, will intensify Brussels’ commitment to digital sovereignty and its resolve to enforce the DSA within the borders of the EU.
As expected, earlier this year, the Commission converted its 2022 Code of Practice on Disinformation, which had been “voluntary,” into a mandatory Code of Conduct. The commitments by the very large platforms would now be evaluated as evidence of risk mitigation in their compliance audits.
But with global deregulatory winds at their backs, in their recent Code submissions, major platforms have withdrawn commitments on political advertising and fact checking. All platforms unsubscribed from the commitment to develop, fund, and collaborate with an independent third-party body to enable data access for researchers.
Meta recently announced plans to end its independent fact-checking program in the United States in favor of a crowd-sourced, “community notes” system, which it is likely to deploy later in the EU. While fact checking is not specifically required under the DSA, the Commission is investigating Meta’s change to ensure that the switch does not impair the company’s effectiveness in addressing disinformation.
In 2024, the Commission initiated high-profile enforcement proceedings against Meta (for content moderation transparency, protection of minors, dark patterns, and its mechanism for flagging illegal content), X (for dark patterns for verified accounts, ad transparency, and data access for researchers), and TikTok (for addictive design, harmful content, and other matters), among other companies. TikTok withdrew its TikTok Lite Rewards program, which awarded users points for performing certain tasks on the platform, such as watching videos and liking content, after the Commission objected to its targeting of children.
It is too early to say whether the Commission will initiate new enforcement proceedings or just digest its current caseload.
The DSA’s Effects Abroad
Some tech reformers hoped that European regulations could push tech companies to reform their practices beyond Europe, but that does not appear to be happening. Rather than extending the regulatory benefits of the DSA to non-EU markets, many tech companies have opted to geo-fence their content moderation efforts; that is, they offer one version consistent with EU law in Europe and different versions outside that geographical boundary. They are doing only what is necessary under the DSA, and limiting any user rights and disclosures to the EU market.
And wholesale adoption of the DSA by nations, apart from EU members and, perhaps, EU accession states, is unlikely.
The DSA is a highly complex regulatory framework, primarily designed to advance a European single digital market by respecting, but largely preempting, conflicting national laws. And, in any event, leading democracies had already begun to regulate online platforms while the DSA was being negotiated.
That said, as governments update existing online safety rules, countries such as Australia are considering concepts that are central to both the DSA and the UK Online Safety Act (UK OSA) frameworks.
For example, Australia already has an Online Safety Act. Enacted in 2021, it differs from the DSA in its approach by focusing on specific online harms, especially child safety, in addition to illegal content, and is more co-regulatory than the DSA. The Feb. 4 Statutory Review of the Australian Online Safety Act recommended revisions that would bring it more in line with the United Kingdom’s OSA by imposing a “duty of care” on platforms serving a large number of Australians and on smaller platforms with a greater risk of harm. Platforms in scope would have to assess risk and mitigate identified harms. Other recommendations include adopting researcher access to platform data, similar to the DSA requirement. Whether the Review’s recommendations will translate into legislation in Australia will largely depend upon the outcome of national elections in May.
The UK OSA and the DSA were developed concurrently and rely on similar concepts such as proactive risk assessments, mitigation, and transparency. They differ both in the scope of regulation (the DSA covers a broader range of services, including illegal products, while the OSA covers both illegal and harmful content) and in enforcement philosophy (Ofcom, the lead UK OSA regulator, promotes working with platforms to improve outcomes, while the DSA takes a more adversarial stance). Ofcom has expressed interest in adding a provision requiring researcher access to platform data akin to DSA Article 40.
In the United States, many large platforms have dialed back their content moderation programs and reduced their trust and safety staff. After buying Twitter, Elon Musk fired his election integrity staff and a third of his trust and safety team, and withdrew from the EU voluntary Code of Practice. Under the banner of protecting “free speech,” Musk has opened X’s floodgates to hate speech, violence, disinformation, and consumer fraud.
Even prior to the Musk acquisition, Twitter had begun to experiment with community notes, a crowdsourced system to add context and accurate information to posts. Twitter (now X) deploys a bridging algorithm designed to minimize gaming the system by only publishing notes that are rated “helpful” by contributors who previously disagreed on ratings. But when a community note on a Musk post surfaced citing polls contradicting Trump on Ukrainian President Volodymyr Zelensky’s popularity, Musk declared that Community Notes “is being gamed by governments and the media,” and vowed to “fix” it.
In January, in addition to terminating fact checking in the U.S., Meta’s Mark Zuckerberg ominously announced major changes to the company’s terms of service. These changes effectively allow more hate speech, misogyny, racism, and anti-LGBTQ content on Facebook, Instagram, and Threads. Zuckerberg argued that there were too many false positives resulting in too much censorship. He also threatened to complain to the White House if the EU persisted in heavily taxing U.S. tech companies and censoring speech.
Looking Ahead
Geopolitical winds are gusting to the right with deregulatory fervor. Even the European Commission is now pushing tech innovation, competition, and national sovereignty. Security has replaced online safety as a core concept. And DSA enforcement continues its plodding path in the EU, off the front page.
As for the United States, it has always differed from Europe in its approach to online content regulation. During the U.S.-EU dialogue that took place under Biden, European platform regulation was largely off the table. Now, with Congress and the White House increasing their attacks on European content moderation, and Trump threatening Europe (and the United Kingdom) for enforcing — within their respective jurisdictions — their rules against American Big Tech, there is little positive on the horizon for the DSA in the United States.
So what U.S. platform governance measures are possible before midterm congressional elections in 2026?
Past congressional efforts to pass platform legislation, including privacy protections, have failed. However, there is growing bipartisan support to protect children from online abuse and to criminalize non-consensual publication of sexually explicit images, support that might be strong enough to withstand today’s deregulatory push.
The Trump administration’s first-term assault on Section 230 of the Communications Decency Act may advance in his second term as leaders in both parties push to remove the legal safe harbor from liability that platforms enjoy for moderating user-generated content online. A draft bipartisan bill, spearheaded by Senators Lindsey Graham (R-S.C.) and Dick Durbin (D-Ill.), would sunset Section 230 by 2027 unless tech platforms agree to new (unspecified) rules. And FCC Chairman Brendan Carr has fervently argued that Section 230 protections should be available only in limited cases where platforms remove user-generated content that is illegal.
The truth is that elections have consequences, and we are seeing that play out on both sides of the Atlantic.