(Editor’s Note: This article is part of our series, “Tech Policy under Trump 2.0.”)

In January, U.S. President Donald Trump tasked his advisors with developing, by July 2025, an AI Action Plan, a roadmap intended to “sustain and enhance America’s AI dominance.” This call to action mirrors the early days of nuclear energy — a transformative technology with world-changing potential but also grave risks. Much like the nuclear industry was derailed by public backlash following disasters such as Three Mile Island and Chernobyl, AI could face a similar crisis of confidence unless policymakers take proactive steps to prevent a large-scale incident.

A single large-scale AI disaster—be it in cybersecurity, critical infrastructure, or biotechnology—could undermine public trust, stall innovation, and leave the United States trailing global competitors. Recent reports indicate plans to cut the government’s AI capacity by dismantling the AI Safety Institute. But this would be a self-inflicted wound—not only for safety, but for progress. If Washington fails to anticipate and mitigate major AI risks, the United States risks falling behind in the fallout from what could become AI’s Chernobyl moment.

When Innovation Meets Catastrophe: Lessons from Nuclear Energy

For many Americans, AI’s transformative promise today echoes the optimism around nuclear power in the early 1970s, when more than half of the public supported its expansion. Yet the 1979 accident at Three Mile Island — a partial reactor meltdown — shattered that optimism, and support for nuclear energy dropped precipitously: by 1984, nearly two-thirds of Americans opposed its expansion. Statistical analysis suggests that the Three Mile Island incident was associated with a 72 percent decline in nuclear reactor construction globally. After the deadlier 1986 Chernobyl accident, countries were more than 90 percent less likely to build nuclear power plants than they had been before.

Just as many nations envisioned a renaissance for nuclear energy, the 2011 Fukushima disaster in Japan triggered renewed public skepticism and policy reversals. Fukushima — the only nuclear disaster besides Chernobyl to ever reach the highest classification on the International Nuclear and Radiological Event Scale — caused public support for nuclear energy to plummet around the world. The Japanese government halted all plans for new nuclear reactors. Germany shut down all 17 of its nuclear power generation facilities, ultimately leading to increased dependence on Russian fossil fuels, compromising both its energy security and climate goals. The world is still paying the opportunity cost today: Limited access to clean, reliable nuclear power remains a critical bottleneck for AI development and other energy-intensive innovations.

While AI systems have not yet caused significant and widespread harm, risks may not be far away. Experts warn of looming threats that could likewise trigger consequential backlash. AI-generated code, for example, may contain hidden vulnerabilities that hackers can exploit, amplifying society’s exposure to cyberattacks as software development is increasingly automated. Future AI systems may exceed human experts in the chemical and biological sciences, equipping a much broader pool of actors with the know-how to create devastating bioweapons. Leading AI companies—including OpenAI, xAI, and Anthropic—explicitly cite bioweapon creation as a potential risk, with OpenAI recently stating that its “models are on the cusp of being able to meaningfully help novices create known biological threats.” At today’s blistering pace of advancement, driven by Stargate-sized investment and algorithmic breakthroughs that improve AI’s ability to reason, private sector action alone will likely be insufficient to manage such risks. If these dangers materialize, public opinion could turn sharply against AI, prompting a wave of restrictive regulations that stifle innovation.

A serious AI incident would deepen already-growing public opposition to the technology. Despite optimism from the tech sector, the majority of Americans are more concerned than excited about AI, and a large-scale incident could amplify those concerns. As with nuclear energy, this could shatter the social license for AI innovation and instead galvanize public support for overly burdensome regulation, curtailing the technology’s considerable potential benefits.

The Stakes of Maintaining America’s AI Edge

Losing momentum in AI innovation would have profound implications. A stalling AI ecosystem would delay life-saving applications like drug discovery, diminish economic growth from AI tools, and threaten national security. A slowdown triggered by an incident could also allow China to leapfrog the United States in AI’s economic and military applications. Moreover, such a setback would discredit the United States’ balanced approach to AI development worldwide, opening the door to competing approaches. Withdrawing from AI security leadership would cede the future of AI governance to others — either the European Union’s overly burdensome regulations or China’s use of AI to enhance social control and restrict freedom of expression.

Preventing AI’s Chernobyl moment is about more than simply maintaining America’s technological and economic leadership. It is also about safeguarding democratic values on the global stage. China has already deployed AI technology for mass surveillance — and exported that surveillance technology to over 83 countries. The U.S. AI Action Plan is right to focus on U.S. innovation and leadership; doing so is necessary both for defending U.S. security at home and for promoting democratic values abroad.

Securing AI’s Future through Expertise and Agility—Not Red Tape

The path forward demands precision. Navigating the fine line between vigilance and overregulation takes agility and expertise. The United States cannot let speculative fears trigger heavy-handed regulations that would cripple U.S. AI innovation. Yet it also cannot dismiss the possibility of serious — even catastrophic — risks simply because they are uncertain. The solution lies in staying nimble — encouraging innovation while detecting emerging threats early.

Smart, lightweight oversight can help policymakers spot emerging dangers while readying government and industry to act when needed. This approach won’t burden AI companies with excessive red tape. But it requires something crucial: a government with the expertise and resources to truly understand AI’s rapidly advancing capabilities.

The reportedly planned firing of 500 employees of the National Institute of Standards and Technology (NIST) would gut that expertise. Since its creation last year, NIST’s U.S. AI Safety Institute has become the center of AI expertise in the U.S. government — drawing in leading technical talent from AI companies and universities — and will remain crucial for assessing emerging risks in collaboration with the private sector. Evaluations of the most advanced models, like those in development by the U.S. AI Safety Institute in partnership with Scale AI, are a critical first step. Similarly, close partnerships between U.S. national security agencies and leading AI companies will be essential for testing how advances in AI might affect the cyber, biological, nuclear, and radiological capabilities of foreign adversaries.

Still, evaluations alone are insufficient. As AI systems move from labs into high-stakes deployment contexts, the federal government must establish a central reporting system for AI-related incidents—akin to how cybersecurity breaches are tracked. Such a system would allow the government to maintain visibility and update its approach to evaluations, where appropriate. It would also give policymakers the data to craft targeted safeguards — replacing blanket regulations that could strangle innovation with mitigations tailored to specific risks.

No single company can handle major AI incidents alone — whether it is a large-scale system failure or a sophisticated attack. Just as the U.S. government coordinates with industry to respond to national cyber incidents, it must forge similar partnerships to address AI risks that could impact national security. The Department of Homeland Security’s Artificial Intelligence Safety and Security Board offers a starting point, but the United States needs a comprehensive framework for public-private coordination that clearly defines responsibilities and enables rapid response. If an AI crisis hits, government and industry must be ready to act together on the basis of established responsibilities and response plans, rather than improvising in the moment. These precautions would give policymakers an early warning system for AI risks while creating clear protocols for action — all without hampering innovation or imposing heavy-handed regulation on AI companies.

The stakes are high: Without smart and targeted oversight now, the United States risks a Chernobyl moment — a public backlash that could cripple AI development in the same way successive, preventable accidents stalled nuclear energy for generations. By acting thoughtfully today, policymakers can protect both public safety and technological progress, avoiding the false choice between innovation and security.

It would be easy to speed through AI development in accordance with the Silicon Valley mantra “move fast and break things.” But for a technology as powerful as AI, breaking things is the surest way to put the brakes on progress.

IMAGE: President Donald J. Trump speaks about infrastructure and artificial intelligence to reporters with Larry Ellison, chairman and chief technology officer of Oracle Corporation; Masayoshi Son, SoftBank Group CEO; and Sam Altman, OpenAI CEO, in the Roosevelt Room at the White House on Tuesday, Jan. 21, 2025, in Washington, DC. (Photo by Jabin Botsford/The Washington Post via Getty Images)