Even prior to the release of DeepSeek-R1, a contentious policy debate was taking shape: should open-source AI be regulated? In the United States, this debate has been dominated by two competing perspectives. One emphasizes geopolitical risk and global power dynamics, with a focus on Chinese misuse of U.S. open-source AI. The other is rooted in ideological values — innovation, transparency, and democracy — championed by the open-source community. U.S. policymakers face the formidable task of reconciling these seemingly competing priorities.

If policymakers wish to balance geopolitical and ideological considerations, export controls on open-source models are not the solution. Such attempts to partially limit access to publicly available information would likely prove porous and ineffective, while disrupting innovation and diminishing American influence. A more effective alternative would be to assess the risk inherent in each model and determine an appropriate mode of release accordingly, rather than trying to prevent specific actors from accessing public information.

Geopolitics of Open-Source AI  

To date, open-source AI has been notably excluded from several AI policies. The Biden administration’s Framework for AI Diffusion, which introduced export controls on model weights, explicitly targeted only closed (proprietary) models. One reason for this exclusion is exemplified by the intense backlash against California Senate Bill 1047 (SB-1047) for its inclusion of open models. Although SB-1047 was ultimately vetoed by Governor Gavin Newsom, the rise of DeepSeek is now resurfacing the debate over open-source regulation. Senator Josh Hawley recently introduced the “Decoupling America’s Artificial Intelligence Capabilities from China Act,” which, if passed, would ban both the export of AI models to China and the import of all models from China — including open-source models.

The open-source community and certain members of industry have argued fervently against any regulation and were particularly vocal in response to SB-1047. Their most common arguments can be distilled into three ideological benefits of open-source AI. The first is that open-source AI accelerates innovation. The second is that it mitigates the concentration of power, a theory Meta has leveraged for its slogan “Open-Source AI: available to all, not just the few.” Finally, a popular argument is that open-sourcing promotes transparency and therefore helps ensure model safety.

However, these benefits, while aligned with core U.S. values, warrant closer scrutiny, since open-sourcing is not the only way to achieve them. Open-source AI likely accelerates certain forms of innovation, particularly adaptation and iteration — at least 100,000 models have been built on Meta’s Llama foundation model series — but it may be less relevant to advancing frontier capabilities. Historically, frontier capabilities have been advanced by closed models backed by significant computational resources. DeepSeek-R1 and DeepSeek-V3, open models that achieved cutting-edge performance through algorithmic ingenuity and limited compute, defy this trend. However, DeepSeek did not build on another open model; it instead leveraged OpenAI’s o1 for inspiration and training, raising questions about whether open models are core drivers of frontier capabilities.

Regarding transparency, external audits and model reports can also help ensure safety. Anthropic’s model constitution, informed by a representative cross-section of Americans, illustrates an alternative method of gathering wider input on model capabilities and values. Similarly, ensuring that decisions about model safety and public release are not made exclusively by a handful of tech executives could prevent concentration of power. Initiatives like Anthropic’s Responsible Scaling Policy, if applied transparently and with sufficient external oversight, could help achieve this.

Policymakers must grapple with these ideological benefits and balance them with three geopolitical considerations.

First, there is the potential misuse of U.S. open models. Here, marginal risk is determined by the availability of similar capabilities and the resources of potential actors. If comparable open models already exist, or if actors have access to advanced closed models, the marginal risk of releasing another may be low. Conversely, if a model enables new dangerous capabilities, such as biological weapons design, it could pose a significant risk. This risk analysis is not the same for all potential adversaries: while state actors like China may already have access to powerful closed models, smaller or non-state actors may rely more heavily on open-source AI.

Second, open models may harbor security vulnerabilities. Businesses, agencies, and individuals using open models containing malicious backdoors could be exposed to remote access or intelligence collection. Reports indicate that 10 percent of U.S. businesses using open-source AI tools discovered malicious code in their systems — and those are just the ones that detected it. If U.S. critical infrastructure systems were manipulated by adversarial actors via open-source technologies, the consequences could be devastating.

Third, some argue the United States must compete for dominance in the global open-source AI ecosystem. According to this perspective, open-source primacy is a strategic tool for cultivating economic dependencies and extending soft power. Chinese models’ strict adherence to state censorship raises concerns about their influence in Global South countries, though U.S. models have also demonstrated similar reluctance to comment on politically sensitive topics.

However, this perspective requires caution. Open-source AI is but one instrument of national influence, alongside trade, investment, alliances, and cultural diplomacy. Preoccupation with open-source AI as a source of national power could also foment an unfettered and reckless arms race. If the United States completely sidelines safety, it not only forfeits strategic advantage — models must be reliable in critical moments — but also increases the chances of unintended destabilizing incidents.

Policy Recommendations: A Model-by-Model Approach

Export controls on open models, while seemingly straightforward, offer an imperfect solution to these competing considerations. They would impose broad access restrictions, requiring developers to implement know-your-customer (KYC) protocols and destroying the core of open access. Doing so would likely stifle domestic innovation and erode U.S. leadership in open-source AI without effectively mitigating foreign risks. Moreover, such controls do little to address domestic misuse and are particularly vulnerable to circumvention through digital theft or intermediaries — challenges already evident in semiconductor export controls.

A better alternative would involve targeted risk assessments. Developers could be required to evaluate each model’s risks, considering existing capabilities and mode of release. Higher-risk models, such as those with potential for bioweapon design, might be restricted to vetted researchers who need access to all model components for their work. Conversely, models with lower risk profiles would remain broadly accessible. This is broadly similar to Meta’s new Frontier AI Framework, but it would be coupled with independent oversight, through industry bodies or government entities, to mitigate conflicts of interest.

This model-by-model approach would balance innovation with security more effectively than blanket export controls. Since less risky models would be widely available, the open-source community would remain free to innovate. It better mitigates both international and domestic misuse risk by addressing the inherent dangers of specific models rather than attempting to restrict access based on the identity of potential users.

The debate over open-source AI regulation sits at the intersection of innovation, security, and great power competition. Rather than viewing open-source governance as a binary choice between complete openness and restriction, policymakers should pursue targeted interventions that preserve the benefits of open-source development while addressing geopolitical considerations. Such an approach can help maintain U.S. technological leadership while fostering responsible innovation in AI development.

IMAGE: Visualization of U.S.-China tech competition (via Getty Images)