On Nov. 1, Reuters reported that Chinese researchers, including ones affiliated with the People’s Liberation Army (PLA), used one of Meta’s Llama models for military purposes last year. The news drew a quick and robust reaction from many, including U.S. policymakers, arguing for further restrictions on open source AI. Michael McCaul, Chairman of the House Foreign Affairs Committee, said the recently proposed ENFORCE Act, a bill that could effectively prohibit American AI developers from releasing open-weight models, was necessary to “keep American AI out of China’s hands.”
Unlike models such as OpenAI’s ChatGPT or Anthropic’s Claude, the Llama family of language models is “open-weight,” meaning that the weights, the numbers that define a model’s functionality, are available for anyone to download for free online. Other well-known open source AI providers include Mistral, based in France, and Falcon, developed in the United Arab Emirates. For years, debate has raged over the strategic benefits and risks of two AI ecosystems: one based primarily on proprietary, closed-source AI systems and one supportive of open source.
While public access to open-weight models does represent a real tradeoff among control, security, and innovation, as the Llama example underscores, the story is more complicated. Critics of open source models fail to recognize the key role these models will play in advancing U.S. security interests in the long term. Rather than focusing on the risks of open source AI, policymakers should ask whether the world should rely on U.S.-developed AI or on the increasingly capable open source models coming out of China.
The Risks of Open Source AI
Those who are more skeptical of open source AI argue that the best way to mitigate AI’s negative impacts and security risks is to develop new regulations and restrict its global distribution. Threat actors can modify open models and remove critical safety features, creating new security risks. Moreover, because open models can be run on anyone’s hardware, the original developer cannot monitor their usage for dangerous or harmful applications in the ways that closed model providers can (at least in theory). It is for this reason that closed-source AI companies are investing vast resources to prevent the theft or export of their model weights.
From this perspective, open source is viewed as a way to simply hand American frontier AI technology to the Chinese and other competitors for free. Instead, it is often argued that the United States should lead in AI by supporting an ecosystem of large, closed-source AI developers, limiting AI exports, and imposing additional regulations on AI developers that build or work with open source models.
If the inner workings of these closed-source models are concealed from the public, it is alleged, they might be more secure, even though Chinese firms can readily replicate a closed model’s capabilities by training on its outputs (“model distillation”). Indeed, some of China’s most capable models, such as Alibaba’s Qwen, have told users that they are Claude, a closed-source model built by San Francisco-based Anthropic, and firms including OpenAI have begun to block access to their models in China to mitigate this risk. Yet an AI security policy built on these principles would quickly find it necessary to block U.S.-developed AI from reaching the world or, eventually, anyone at all.
Why Open Source AI Still Matters
In contrast, proponents of open source AI point to decades of research showing the large and tangible benefits that accompany open source ecosystems, from greater innovation to improved security. Openly released software is a cornerstone of America’s current dominance in the global technology ecosystem, with nearly every digital service, app, and website enabled in some way by open source software. Open-weight AI translates many of the benefits of open source software to the world of AI.
Due to the high costs of training cutting-edge AI, individuals, researchers, and businesses alike find it more efficient to build on each other’s work: this reduces costs, encourages innovation, and improves security. Already, the widespread availability of open models has created new competitive ecosystems, enabling smaller players to compete on the global stage. The National Telecommunications and Information Administration (NTIA) recently acknowledged these benefits in a comprehensive report, issuing policy recommendations that embrace openness. For national security, too, research has suggested that a strong open source AI ecosystem would benefit the United States Department of Defense (DoD) and U.S. national security by increasing “supplier diversity, sustainment, cybersecurity, and innovation.”
Yet, perhaps even more importantly, for many countries open source AI represents the only opportunity to engage with the technology, given the prohibitively high costs of developing and training frontier models. As AI becomes increasingly integrated into the world’s digital infrastructure, the importance of open source AI will grow, too, as it is likely to be a key building block in driving AI’s global diffusion and adoption. The country that develops these building blocks will be rewarded with strategic security advantages.
Countering China in the Open
China is one country that has realized this. In the face of growing restrictions on its technology ecosystem spearheaded by the United States, China has moved to decouple itself from American technology and is investing heavily in open source development across the entire technology sector, from chips and semiconductors to operating systems to AI models. Far from lagging behind, Chinese models have improved rapidly, with some ranking among the top ten in the world on the LMSYS Chatbot Arena, a popular AI model evaluation benchmark. China seeks to become the globe’s AI provider, and its open source models have seen growing usage in many countries, especially in the Global South. Chinese regulations, too, appear broadly supportive of open source AI development. Regardless of the policies pursued by the United States and its allies, China has made clear its intention to compete and win in the global open source AI arena.
At a time when China has numerous domestic models that match and, in some cases, surpass models developed and released by U.S. companies, the PLA researchers’ use of Llama 1, a model released in early 2023 and ancient by AI standards, should not be interpreted as a serious threat. While it is understandable to be concerned about China’s use of an American open source AI model, it is also important that U.S. policymakers not overreact. Efforts to limit the ability of U.S. open source AI to compete in the global market would be disastrous for the country’s national security, limit the innovative potential of the U.S. tech industry, and hand a significant portion of the global AI market to U.S. adversaries for free.