On Oct. 24, the White House publicly released its long-awaited National Security Memorandum (NSM) on AI, mandated by the Biden administration’s October 2023 Executive Order on AI. The “Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence” and corresponding fact sheet provide the first comprehensive strategy for governing AI use in national security systems, notably in defense and intelligence agencies.

In response to this pivotal policy development, we asked leading experts whether and to what extent the memo addresses key concerns, what the next administration should do with respect to implementation, how U.S. efforts to govern national security uses of AI compare to other countries, and what more work still needs to be done.

Suresh Venkatasubramanian, Professor of Data Science and Computer Science at Brown University; former Assistant Director for Science and Justice in the White House Office of Science and Technology Policy in the Biden-Harris administration: 

The much-awaited National Security Memorandum (NSM) on AI was released yesterday by the White House. The national security apparatus was charged with producing a counterpart to the Executive Order on AI, and this document will serve as its guide for approaching the use (and misuse) of AI.

National security has always been a tricky “out” for AI policy. The White House Blueprint for an AI Bill of Rights – which I co-authored – had within it an exception for national security applications, and the lines between what’s a national security use case and what’s a domestic use case have always been blurred: at the U.S. border, for one, and in surveillance scenarios involving putative terrorist activity.

So it was always going to be a question of whether any meaningful restrictions would be placed on the use of AI in national security settings. This memorandum’s answer is: “kind of, maybe.” There’s a lot of pretty strong language around the responsible use of AI, the importance of preserving civil liberties and civil rights, and the importance of these efforts as a signal for continued U.S. support of democracy. This is encouraging, but there are enough exceptions to the provisions that time will tell whether these are real edge cases or holes big enough to drive an AI-generated nuclear bomb through.

I found the companion document – the framework – more promising. Firstly, as a policy tactic, it’s a good idea to have the NSM refer to the framework. The NSM can’t be easily changed, but there’s a much easier interagency process to update the framework. This means that policies can adapt and evolve as the technology and the use cases do.

I was encouraged to see even the idea of prohibited use cases in the framework, and to see already-fraught technology like emotion detection systems on that list. Again, there are caveats (“lawful and justified reasons”), but even flagging this list of use cases as serious enough to be avoided is a positive sign. I was also happy to see strong alignment with the Office of Management and Budget (OMB) memo, including a similar idea of high-impact use cases that require heightened scrutiny, scrutiny that includes civil rights protections.

Ultimately, it’s hard to be anything but skeptical about claims that the national security apparatus will use AI responsibly, but these documents go a lot further and are more forceful about protections for people than what is strictly required. I give the Biden administration credit for that and remain at least somewhat hopeful that this marks a new era in the responsible use of AI.

Ashley Deeks, Class of 1948 Scholarly Research Professor at the University of Virginia Law School; former White House associate counsel and deputy legal adviser to the National Security Council: 

I give the Biden administration significant credit for wrestling seriously with the challenges that national security AI poses to democratic values.  The NSM contains a range of valuable requirements, from requiring Defense, State, and the Office of the Director of National Intelligence (ODNI) to evaluate the feasibility of co-developing and co-deploying AI systems with close allies, to requiring NSA to develop the capacity to conduct classified testing of AI models’ ability to generate offensive cyber operations, to recognizing that whistleblowing protections may play an important role in detecting problems with AI use.  I had two primary reactions, based on my forthcoming book that frames the use of AI inside the national security agencies as a “double black box.” That is, these algorithms themselves are black boxes, and now we will be using them inside national security operations that are themselves shrouded in secrecy.

First, the country’s key goal should be to ensure that our national security agencies develop and use AI in ways that are legal, accountable, and effective, and that require the agencies to justify that use to some set of actors. There’s a lot to like in the NSM on the legality front, including a requirement that covered agencies develop and use AI in a manner “consistent with United States law and policies, democratic values, and international law” (4.2(b)). There’s attention to accountability, too, in that the NSM treats lack of accountability as a risk (4.2(c)). And the NSM drafters clearly want the U.S. government’s use of AI to be effective, as evidenced by the NSM’s interest in streamlining acquisition, engaging in robust testing, and sharing risk information across agencies.

But it is especially important when operating in secrecy to justify decisions to actors who are somewhat removed from the fray.  Knowing as outsiders that the Executive must justify its decisions would increase our confidence that the use of AI is lawful, effective, and accountable.  The NSM does not appear to contemplate external oversight.  Although the NSM requires agencies to provide several reports to the National Security Adviser or the President, there is no requirement to report any information to Congress.  Perhaps entities created by the NSM, such as agency AI Governance Boards or the Chief AI Officer Coordination Group, will suffice as fora in which decisionmakers will have to justify their policy, technical, and legal decisions to each other.  But Congress should seek to understand how the Executive is implementing this NSM – as should our close allies.

Second, section 4.2(e) envisions a “Framework to Advance AI Governance and Risk Management in National Security” (AI Framework) approved by the NSC Deputies.  Substantively, the Framework will provide guidance on “high impact” AI activities, including “AI whose output serves as a principal basis for a decision or action that could exacerbate or create significant risks to national security, international norms, . . . or other democratic values.”  I previously have argued that Congress should enact a statute to regulate “high risk national security AI” or, in the alternative, that the Executive should create an interagency process akin to those that reportedly exist for high risk activities such as targeted killings or offensive cyber operations.  The NSM suggests that the Framework will contain “high impact” guidance that will apply to each covered agency independently, but it does not seem to envision an interagency process for assessing “high impact” operations.  Indeed, 4.2(e)(ii)(I) envisions a “waiver process for high-impact AI use,” presumably from robust mitigation measures, based on the need to use AI to “preserve and advance critical agency missions and operations.”  Those developing the AI Framework should consider requiring “high impact” operations generally to be reviewed in an NSC-led interagency process and to set a high bar for when covered agencies may exercise waivers to avoid such review.

Brianna Rosen, Senior Fellow, Just Security; Strategy and Policy Fellow, University of Oxford; former White House National Security Council:

Yesterday marked a watershed moment in U.S. policy on national security uses of AI. After a year of intensive interagency effort, the public release of the National Security Memorandum (NSM) provides a blueprint for how the White House plans to strategically leverage the opportunities – and mitigate the risks – of the increasing integration of AI into national security systems. The Memorandum outlines three main lines of effort: securing American leadership in AI through people, hardware, and power; harnessing AI to transform the full spectrum of military and intelligence operations; and strengthening partnerships and international norms around global AI governance.

Significantly, the Memorandum envisions the first-ever government-wide Framework on AI Risk Management, which should be sufficiently flexible to adapt to new capabilities or legal issues as they emerge, and updated regularly. The Framework will outline “minimum risk management practices” for “high impact” AI activities, described as “AI whose output serves as a principal basis for a decision or action that could exacerbate or create significant risks to national security, international norms, human rights, civil rights, civil liberties, privacy, safety, or other democratic values.” These minimum risk practices include mechanisms for assessing data quality, testing and evaluation practices, mitigation of biases and discrimination, ongoing monitoring, human training, and oversight requirements. The Memorandum goes further in specifically calling for additional safeguards to be applied by members of the military and intelligence communities – a key provision given that AI is already being used in intelligence collection and analysis, military targeting, and national security decision-making.

Whether these protections are sufficiently robust, particularly within the Intelligence Community (IC), remains to be seen. For all its many references to principles, democratic values, human rights, and law, the Memorandum also contains carefully worded carve-outs. Notably, the Memorandum tasks “covered agencies” (the IC) with developing and implementing “waiver processes for high-impact AI” where risk mitigation measures must be balanced with “the need to utilize AI to preserve and advance critical agency missions and operations.” In other words, the executive branch is not tying its hands – the policy guidance for high-risk AI activities may be set aside in favor of national security demands. That is not surprising, but what is notably absent here is transparency and accountability, for example, in the form of a mandatory external oversight board. As with other compartmented programs, if the IC is tasked with both overseeing the waiver process and grading its own homework, that’s a problem. One encouraging countervailing point is that the Memorandum explicitly calls for whistleblower protections surrounding AI development and use, but this is hardly a substitute for proper oversight.

Regardless of such potential shortcomings, the Memorandum’s establishment of interagency processes for national security uses of AI is a crucial first step toward establishing more robust forms of governance. Until now, the U.S. national security community has been following a fragmented patchwork of rules, procedures, and organizational processes for integrating AI into their respective missions, while also potentially relying on different data streams and technologies. That presents a huge interoperability challenge, for one, but it also presents legal and ethical concerns – particularly since not all agencies are acting with the same level of transparency and accountability. The Memorandum makes important strides toward addressing this issue, but additional safeguards are still needed.

Finally, while much of the focus has been on the Memorandum, U.S. National Security Advisor Jake Sullivan’s remarks on it at the National Defense University are just as revealing. More than a call for restraint, the National Security Memorandum on AI is a call to action – one that still envisions a world in which the United States must “win the competition for the 21st century.” That kind of framing will not help build bridges to China – and the rest of the world – to transform aspirational notions of global AI governance into reality. Nor does it allow much room for humility, or for reflecting on that other fateful speech at the National Defense University more than a decade ago, when we similarly believed we could bend new technologies to our will without sacrificing fundamental values.

Thompson Chengeta, Professor of International Law and Artificial Intelligence Technologies at Liverpool John Moores University; GC REAIM Commissioner: 

I will focus on Section 5 of the Memorandum, which addresses the United States’ role in promoting international governance of AI within the military and national security sectors. In this section, the Memorandum highlights the U.S. commitment to “fostering a stable, responsible, and globally beneficial international AI governance landscape” and references its historical leadership in guiding the global governance of emerging technologies. Specifically, it states that “the United States Government shall advance international agreements, collaborations, and other substantive and norm-setting initiatives” on AI in military and national security contexts.

The Memorandum also underscores current U.S. contributions to norm-building in military AI, such as the “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy” (hereafter, the Political Declaration). For the United States to lead in this area, it must prioritize a balanced consideration of all relevant branches of international law, use terminology aligned with international legal standards, situate national security measures within the broader framework of universal human security, and spearhead efforts to establish legally binding treaties on military AI governance. For example, the Memorandum references domestic and international law, but not specific branches that are pertinent to the global governance of military AI, including the jus ad bellum, international humanitarian law (IHL), international human rights law (IHRL), international criminal law (ICL), and international environmental law (IEL). Instead, it specifically endorses the Political Declaration, which focuses exclusively on IHL.

It is essential for the United States to align its terminology with established international law obligations. The Memorandum correctly points out that “AI use may lead to unlawful discrimination and harmful bias, resulting in, for instance, inappropriate surveillance and profiling, among other harms.” Yet, in the Political Declaration – and elsewhere – the U.S. notes that “a principled approach to the military use of AI should…minimize unintended bias and accidents” and that “states should take proactive steps to minimize unintended bias in military AI capabilities.” However, under IHRL, the obligation is to eliminate discrimination, not merely minimize it. Furthermore, the focus should not be on whether discrimination is intended or unintended but on whether specific AI technologies may result in unlawful discrimination.

A core principle the United States should uphold in its AI and national security policy is that national security must be rooted within the broader frameworks of universal human security and international security. This approach is essential not only because IHRL obligations apply extraterritorially but also because true national security is more robustly safeguarded when it is aligned with global and universal human security.

The U.S. Memorandum references several non-binding initiatives, like the Political Declaration, which play a valuable role in establishing an incremental path toward global governance of military AI. However, these non-binding norms should not be seen as a final solution. There exists a significant legal gap in the international regulation of military AI—a gap that only legally binding treaties can fill. To fulfill the leadership role outlined in the Memorandum, the United States should spearhead efforts to develop binding international treaties on military AI. This could take the form of a Framework Convention on AI in the Military Domain, complemented by specific binding protocols addressing particular military AI technologies – such as autonomous weapon systems – or applications intended for national security.

The U.S. Memorandum represents a positive step toward establishing a national governance framework for military AI. Other states would do well to consider adopting similar practices. However, as noted, it is essential to incorporate certain international law principles that are often overlooked in this area.

Keith Dear, Managing Director of the Centre for Cognitive and Advanced Technologies at Fujitsu; Former Expert Advisor to the U.K. Prime Minister on Defence Modernisation and the Integrated Review:

Read from across the pond in the United Kingdom, the U.S. National Security Memorandum on Artificial Intelligence should be a wake-up call. It is a document that makes clear the intention that “The United States must lead the world in the responsible application of AI to appropriate national security functions.” The document prioritizes AI and offers a clear plan to lead in it. This should be reassuring, as well as alarming, to U.S. allies.

First, the reassurance. Russian President Vladimir Putin was right to say, “Whoever leads in AI will rule the world,” even if it is vanishingly unlikely it will be Russia. The Memo on AI should reassure U.S. allies because by far the worst outcome for democratic nations across the world is that the Chinese Communist Party (CCP) – or another autocracy – wins the race to Artificial General Intelligence and Superintelligence (AGI, ASI), and “rules the world.”

A CCP AGI/ASI would be able to devise and implement strategies that are constantly ahead of, and unanticipated by, the CCP’s opponents, domestic and foreign. If this same CCP AGI or ASI were capable of recursive self-improvement, it would be something close to a doomsday scenario for democracies, which might never be able to match the intelligence that could be deployed by China.

The United States’ articulation of a clear plan in the National Security Memorandum will mean that democracies and other U.S. allies and partners are shielded from such an outcome, much as the U.S. nuclear umbrella shields much of the world from autocratic nuclear blackmail.

Why alarm? Like the nuclear umbrella, access to U.S. AGI/ASI protection will come at a cost to those that accept it and rely on it. And that cost is likely to be much higher.

If the U.S. plan succeeds, its power will be such that the relative power of the United Kingdom and all other allies will diminish dramatically. Such alliances would be less equal. The United Kingdom would be much more dependent on the United States. The U.S. memo talks of promoting “equitable access to AI’s benefits” for allies and partners, not “equal access.” What might this mean?

To answer, consider U.S. military equipment sales to allies. Users of U.S. technology don’t get the fully capable version the U.S. military operates itself. The United Kingdom, for example, helped build the F-35 as a Tier 1 partner and gets “more access to critical information than lower-tier partners.” But that is not the same as “equal access” to the United States.

Access to the unprecedented economic and military advantage that AGI and ASI would give the United States will come at a cost. Negotiating with the United States will be like playing AlphaZero at chess or Go: nations will be constantly outmanoeuvred. Similarly, U.S. companies, likely given privileged access to U.S. models, will be more innovative and effective in the design and execution of plans than their non-U.S. rivals.

In short, America’s relative power in the international system will so outstrip everyone else’s that future global prosperity and security will be determined in Washington. That is not nearly as worrying as having it determined by the CCP in Beijing, but it is alarming all the same. No nations have constantly aligned interests, no matter how close the alliance. Allies should be reassured that there is less risk to them from AGI now than prior to the National Security Memorandum’s publication. But they must not miss that the cost of free-riding on the United States for their security is about to become vastly higher.

Faiza Patel, Senior Director, Brennan Center’s Liberty and National Security Program:

The just-released national security memorandum and accompanying framework document are the last in the suite of the Biden administration’s efforts to grapple with AI. These documents fill the gaps left by earlier guidance on AI issued by the Office of Management and Budget, which did not cover national security systems.

The administration is to be commended for taking on the thorny issues posed by national security AI, which have not been addressed by other major initiatives, such as the EU AI Act passed earlier this year. It is also notable that these rules—and the remarks made by National Security Adviser Jake Sullivan on their release—acknowledge the pressing need to tackle the risks these uses of AI pose to human and civil rights and democratic values. As civil society groups, including the Brennan Center where I work, have long argued: biased AI is hardly going to be effective or engender trust. The rules also make clear that a broad range of systems come under the umbrella of national security. In addition to weapons systems deployed overseas, domestically oriented intelligence and law enforcement programs as well as immigration matters are very much part of the national security AI ecosystem.

Included in the framework document is a list of so-called prohibited uses. While these may seem like a big deal, most are very narrowly scoped. Take, for example, the prohibition on using AI to intentionally target a person based “solely” on their exercise of constitutional rights, such as free speech: agencies can typically point to some factor in addition to speech as the reason for targeting an individual. More interesting are the rules around high-impact uses of AI. The framework both lists high-impact use cases and tells agencies to develop their own inventory of these. The “rights-impacting” uses of AI identified by OMB (e.g., facial recognition, social media monitoring) are covered to the extent they occur within the U.S., impact U.S. persons, or bear on immigration or other entry to the U.S. Watchlisting and designating a person as a national security threat, as well as decisions relating to refugee status and asylum, also come within the ambit of these rules. For all of these, additional safeguards are prescribed. I haven’t studied these in detail, but overall they resemble those set out by the OMB, with a welcome addition requiring agencies to assess the quality of data used in developing and testing the AI and its fitness for the tool’s intended purpose.

Unfortunately, all the good work that these documents mandate will occur almost entirely behind closed doors. There are very few ways for the public to gain insight into how the governance structures and mechanisms for assessing and mitigating risks are actually operating.

To start, we don’t know which programs will fall under the rubric of the national security memorandum and which ones will come under the rules published by the OMB earlier this year. It’s difficult for civil society even to begin to assess guardrails if we don’t know which guardrails are supposed to apply to a particular program.

Like programs covered by the OMB memo, even “high impact” national security AI systems can obtain waivers from safeguards through the agency’s chief AI officer. These are reported to the national security adviser and the total number of waivers must be published. A potentially important provision requires civil rights/civil liberties and privacy officials to include information on all waivers (scope, justification, supporting evidence) in their regular annual reports. I’m taking a wait and see approach to how much information will emerge through this process. In the past, these mechanisms have often produced sanitized reports with little detail (perhaps due to vetting by agency leadership) and an overbroad reading of the requirement that reports must be consistent with appropriate protection of sources and methods might constrain them further.

Overall, the national security memorandum and the framework rely heavily on internal oversight mechanisms to serve as guardians of our rights. Agencies are required to have officials or offices that oversee privacy, civil liberties, transparency, and safety related to agency AI use, and to issue annual reports. But it is unclear that such offices will be able to perform this critical role. At DHS, for example, which has both an Office for Civil Rights and Civil Liberties and a Privacy Office, weak authorities and lack of leadership support have meant that oversight offices have consistently failed to check abuses such as racial and religious profiling, targeting Americans for surveillance based on their political beliefs, and compiling intelligence reports on journalists. I would have liked to see more about how these offices will be set up to perform this role (e.g., by ensuring that some of the AI technical personnel that the memorandum encourages agencies to hire are dedicated to oversight functions).

External oversight is conspicuously absent from both these documents and the administration’s statements on national security AI. The Privacy and Civil Liberties Oversight Board is examining national security AI issues within its remit. It could, as Patrick Toomey and I argued in these pages, serve as a template or even a forum to provide this type of oversight.

I’ll end by naming the elephant in the room. While many of us will spend the next days poring over the details of these documents, there is no certainty that they will remain in force come the new year and a new administration.

Justin Hendrix, CEO and Editor of Tech Policy Press: 

The memo’s drafters should be lauded for emphasizing the need to protect “human rights, civil rights, civil liberties, privacy, and safety” while harnessing AI in the context of national security systems and activities. However, while repetition of phrases and terms is common in such documents, it is notable that this particular combination of terms appears seven times in the document, almost as if it is a boilerplate refrain. Independently, these terms are sprinkled even more liberally across the memo’s sections: “human rights” appears twenty-two times; “civil rights,” sixteen; “civil liberties,” eighteen; “privacy,” twenty. “Safety” appears a reassuring forty-five times, and the reader encounters the phrase “democratic values” sixteen times.

Amidst all of these appeals to principle, the reader might be forgiven for forgetting that the underlying purpose of this document is to prepare bureaucracies to carry out the work of developing and integrating artificial intelligence into every aspect of the nation’s military, intelligence, security, and law enforcement operations. It is about improving the nation’s surveillance and investigative capabilities, securing its diplomatic and military advantages, and improving the lethality of its soldiers and weapons. But don’t worry, this document seems to say: we’re the good guys!

This subtext is most apparent in how the memo frames the role of the United States in defining the terms of technology adoption. “Throughout its history, the United States has played an essential role in shaping the international order to enable the safe, secure, and trustworthy global adoption of new technologies while also protecting democratic values,” the reader is reminded. That’s true to a certain extent, but it’s also true that the past two decades have witnessed the U.S. government abdicate its role in the face of transformative technologies, allowing Silicon Valley companies to experiment on U.S. citizens and the populations of nearly every country on Earth with minimal accountability. Now, the United States appears poised to do the same with AI, as we witness the rapid deployment of AI systems with known risks of bias, misinformation, and potential for misuse.

There are other interesting artifacts worth pondering. The document notes that “Recent innovations have spurred not only an increase in AI use throughout society, but also a paradigm shift within the AI field — one that has occurred mostly outside of Government.” It says the government “must urgently consider how this current AI paradigm specifically could transform the national security mission.” This might be read as an admission that the government is, in many ways, now reliant on the private sector — and, in reality, a handful of companies — for any advantage it may have in AI. These companies will participate in “voluntary” testing programs, and the government will pursue “partnerships” with industry. The government appears to recognize it is not in control; it is a facilitator, a coordinator, and a customer, but the awesome power of AI is not something it fundamentally commands, no matter how many governance boards this document may spawn.

Patrick Toomey, Deputy Director, American Civil Liberties Union’s National Security Project:

Despite taking some valuable steps, the Biden administration’s new National Security Memo on AI does not go nearly far enough to protect individuals from harmful and unaccountable AI systems. The use of AI to automate and expand national security activities poses some of the greatest dangers to people’s lives and civil liberties, both in the United States and abroad. Agencies are increasingly exploring the use of AI in deciding who to surveil, who to stop and search at the airport, who to add to government watchlists, and even who is a military target. Unfortunately, the new policy falls short in areas essential to meaningful accountability, leaving glaring gaps with respect to independent oversight, transparency, notice, and redress.

But first, it’s worth acknowledging what’s good in the memo. The new policy adopts a number of commonsense measures and nods to important democratic principles. For example, it requires national security agencies to better track and assess their AI systems for risks and bias, to appoint a Chief AI Officer, and to update their internal guidance on the testing and deployment of AI. The framework that accompanies the memo also prohibits some dangerous AI uses, such as using AI to make “final determinations” about a person’s immigration status or to unlawfully burden the right to free speech. But the prohibited uses are drafted narrowly and often qualified in ways that dilute their impact. For instance, the prohibition on using AI to target or monitor individuals applies only if the targeting is based “solely on” the person’s exercise of First Amendment rights. As we have long seen with both DHS and FBI intelligence activities, these kinds of restrictions are easily circumvented by pointing to some additional basis or purpose, however vague or unfounded.

Most troublingly, the new guidance lacks robust oversight and transparency requirements, allowing national security agencies to continue policing themselves when it comes to AI. As we have repeatedly seen before, this is a recipe for dangerous technologies to proliferate in secret. The new rules contain little in the way of independent oversight. Rather, they allow national security agencies to decide for themselves—behind closed doors—how to mitigate the risks posed by systems that have immense consequences for people’s lives and rights. For example, neither the memo nor the framework mentions the Privacy and Civil Liberties Oversight Board, which plays an essential role in overseeing AI in counterterrorism activities. And while there are a handful of transparency measures, they do not incorporate many of the hard-learned transparency lessons that emerged from the U.S. government’s vast, secret expansion of surveillance in the decades after 9/11. If national security systems are not subject to meaningful oversight and transparency requirements, the serious threat they pose to civil rights and civil liberties cannot be properly addressed by Congress, the courts, or the public.

Finally, the memo is silent on two other mechanisms essential to accountability: notice and redress for individuals. The memo appears to take the view that AI systems used for national security purposes need not provide any baseline protections for individual notice and redress, in contrast to other federal uses of AI covered by recent guidance issued by the Office of Management and Budget (OMB). While the content, timing, and exact mechanics of notice and redress may depend on context, they do not disappear entirely simply because national security matters are concerned. This is a huge hole, one that will leave people harmed by AI systems with virtually no information and little recourse in the courts or otherwise.

If developing national security AI systems is an urgent priority for the country, then adopting strong rights and privacy safeguards is just as urgent. Without transparency, independent oversight, and built-in mechanisms for individuals to obtain accountability when AI systems err or fail, the policy’s safeguards will be inadequate and will place civil rights and civil liberties at risk.

Brandon Pugh, Policy Director, Cybersecurity and Emerging Threats at R Street Institute; Non-resident Fellow, Army Cyber Institute at the U.S. Military Academy at West Point:

The National Security Memorandum (NSM) on AI correctly recognizes that leveraging AI in the national security arena is critical. While AI has been leveraged by the national security community to some degree for years, it is positive to see this push continue and evolve to encompass both defensive and offensive applications. At the same time, efforts to secure America’s leadership position on AI are timely given the increasing desire of adversaries to surpass us and potentially leverage the technology in nefarious ways.

A common theme throughout the NSM is adhering to guardrails and following transparency, evaluation, and risk management requirements. This stems from concerns about how malicious actors might target the technology and how it could be misused contrary to U.S. values. However, the technology also presents an opportunity to combat these very concerns, such as by using it to defend against adversaries and to protect the values some fear might be undermined. In fact, there are already examples of both.

Many of the requirements will be coordinated and acted on by a non-national security entity under the Commerce Department. There is a role for these activities, but we must ensure an appropriate balance is maintained with the need to innovate and lead on AI in a responsible manner. Competitors, including China, will not respect U.S. limitations or guardrails, so we need to be careful not to go too far in restricting efforts and industry in the United States, while still ensuring we are using AI responsibly.

For instance, the NSM envisions an accompanying “Framework to Advance AI Governance and Risk Management in National Security” that identifies prohibited uses of AI and high-impact use cases that require stricter oversight and due diligence (section 4.2(e)). These classifications should be reviewed initially and assessed continuously to ensure potential national security uses are not unduly limited, especially since the technology is rapidly evolving and it is hard to appreciate all of its advantages and disadvantages at the present time. However, the framework will intentionally be kept separate from the NSM to make changes easier.

An overarching question is how actions like the October 2023 Executive Order (EO) and this NSM would fare under a new administration and Congress, especially if there is a party shift. Should there be a Republican administration, calls to repeal the EO have already been made. Republican concerns about the EO and other efforts to regulate AI have largely centered on requirements that are too burdensome, permit agency overreach, or hamper innovation. The precise way a future administration will handle AI is unclear, but there seemingly are points of agreement between the parties, including the central role of the private sector in AI development, the need to leverage AI for national security, and the potential for misuse.

Bill Drexel, Fellow, Technology and National Security Program at the Center for a New American Security:

The National Security Memorandum solidifies the government’s intentions to promote domestic compute capacity, government adoption of AI tools, and cybersecurity and counterintelligence to keep the U.S. private sector’s AI development safe from adversaries that would seek to siphon IP and AI models. All of these measures are welcome and critical to maintaining the U.S. edge over China, especially in frontier model development. But the Memorandum also replicates one of the major issues that plagues the United States’ approach to AI internationally: a tendency to talk a great deal about norms, standards, agreements, and risks rather than focusing on actually building out AI ecosystems in other countries, especially in the Global South. China, meanwhile, is trying to bill itself as the leader of global AI growth for developing economies, with a clear emphasis on practically building skills, infrastructure, and systems on the ground in often-overlooked places. This approach could ultimately win China more influence in norms and standard-setting for AI. Finally, the Memorandum is heavily focused on boosting resource-intensive frontier model development, despite the uncertain returns of that approach for defense, which some have criticized as having more to do with tech hype than substance. Indeed, many of the most transformative uses of AI for defense to date are largely unrelated to frontier models, with a long runway ahead for further progress that the Memorandum does little to support. More balance between frontier development and other AI tools for national security would be very welcome.

IMAGE: U.S. Vice President Kamala Harris delivers remarks with President Joe Biden about their administration’s work to regulate artificial intelligence during an event in the East Room of the White House on October 30, 2023 in Washington, DC. (Photo by Chip Somodevilla/Getty Images)