At the heart of national security decisions lies a paradox: these decisions are among the most consequential a government can make, yet often among the least transparent and democratic. The “black box” of national security decision-making — sustained by extensive classification and marked by the genuine difficulty of overseeing executive action — has expanded in the United States as executive power continues to grow. Over the past two decades, this expansion has significantly eroded the constitutional checks and balances on which we rely to superintend presidential authority. Although Congress at times works hard to be a faithful surrogate, thin staffing, limited expertise, and politics complicate those efforts. Meanwhile, the courts largely defer to the executive on these matters. As a result, it is increasingly hard to confirm that the executive is acting consistently with public law values such as legality, accountability, and the requirement to justify decisions.

Rapid advances in AI are compounding this trend. Defense and intelligence agencies, including the National Security Agency, the CIA, and the Departments of Defense and Homeland Security, have begun to deploy AI in decision-making processes and operations. For example, the Department of Defense is using AI-powered computer vision tools to identify threatening activities — and ultimately potential targets — in thousands of hours of drone footage. Cyber operations are increasingly driven by AI, raising the possibility that autonomous U.S. and foreign cyber tools could clash and escalate attacks into an armed conflict — without any affirmative human decision to do so. It will not be long before AI takes a seat in the Situation Room. AI systems, however, are often “black boxes” themselves: users and even programmers generally cannot interpret the algorithms’ internal processes, making it very hard for users to understand why or how a system reached the recommendation it did.

Adding black box AI to the existing black box of national security means that U.S. national security decisions are being made inside a double black box. The range of actors seeking to check the executive in the national security space will thus have an increasingly hard time doing so, and even those making the national security decisions cannot fully understand how they are made. As I argue in my forthcoming book, it will be crucial to rely heavily both on the traditional set of actors who check and balance classified executive policy-making – Congress, the courts, executive branch lawyers, inspectors general, and whistleblowers – and on alternatives to these traditional surrogates to ensure that the U.S. government complies with public law values. These actors will need to be creative, while recognizing that there is no silver bullet for the double black box problem.

Making Existing Oversight Harder

If overseeing classified activities is largely about ensuring that the government acts lawfully, effectively, accountably, and with adequate justification, AI will compound our existing military and intelligence oversight problems. Difficulties in identifying unlawful executive branch activities will be amplified by the challenges of understanding whether the use of a particular AI tool complies with the law. The difficulty of obtaining enough information to evaluate whether a particular national security policy choice is optimal will be compounded by the difficulty of understanding the quality of the AI recommendations and predictions informing that choice. The difficulty of deciding who is at fault for an illegal or ill-advised policy choice will be amplified by the challenge of determining who is responsible for significant algorithmic errors. And the opacity and inexplicability of AI algorithms will make it even easier for the executive branch to do what classification already lets it do: avoid defending its national security decisions. How can democratic publics be confident that they should trust both these systems and the people relying on them to act responsibly in their names? Difficult national security questions become even harder when they implicate AI tools.

To express concerns about classified executive activities generally – or about the “double black box” in particular – is not to argue that executive officials act in bad faith or incompetently. But oversight is still imperative: history is rife with examples of the government drifting off course, especially when operating in secret. Secrecy can facilitate sloppiness, groupthink, and other pathologies. We have to assume that national security officials suffer from the same cognitive biases that we all experience, biases that can be heightened in operating environments infused with secrecy. Giving access and voice to external actors with different perspectives, missions, and ambitions helps counter those biases.

Chipping Away at the Double Black Box 

The use of national security AI is inevitable; indeed, it has already arrived. As the military and intelligence communities embed AI systems more deeply in their operations, they need a healthy set of checks on those systems. Ensuring executive compliance with public law values will require diligence by a wide range of actors. The United States will need to rely on its traditional surrogates, especially the congressional committees that oversee the military, the intelligence community, and the diplomatic corps. Congress could, for instance, enact a framework statute to regulate high-risk national security AI, using the covert action statute as a model. Even if Congress cannot agree on a framework statute, it should enact legislation requiring the executive to report to it about its use of national security AI. There are many models for national security reporting to Congress, including reporting on offensive cyber operations, 48-hour and six-month War Powers reports, annual reports on the use of the Foreign Intelligence Surveillance Act, and notification requirements for changes to the legal framework governing the use of military force. Or Congress could create a Joint Committee on Artificial Intelligence with a subcommittee focused on national security-related AI. The Committee, which should have a permanent staff, could serve as an in-house think tank, providing standing expertise on AI technologies and their societal impacts.

Executive branch lawyers, too, must play an important role: they can provide critical early policy and legal input in assessing whether to use a machine learning algorithm at all in a given context, and what kinds of data and parameters that algorithm should use or avoid. Whether or not Congress enacts a framework AI statute, the executive should create an interagency process for reviewing high-risk AI tools. It could model that process on the ones Presidents Barack Obama and Joe Biden established to govern targeted killings and, reportedly, offensive cyber operations.

The executive will also need to rely on non-traditional secrecy surrogates, including foreign allies, states and localities, and U.S. technology companies. These actors have specific expertise about new threats and targets. They have access to information or infrastructure that the federal government needs to execute its national security mission. And they often have legal or political commitments not to reveal that information. Foreign allies, for example, are likely to have some visibility into the types of tools the U.S. military is developing, acquiring, or deploying. To the extent that the United States wants or needs to work with these allies to accomplish its military, counterterrorism, or intelligence goals, these partners have leverage over, and may serve to constrain, U.S. actions. Even the public has a role to play. By articulating its views about unclassified AI systems that may have classified analogues behind the curtain, the public can influence the types of AI tools that “common use” companies such as Google, Apple, and OpenAI develop, which in turn may affect the types of tools that national security officials can and do choose to employ.

Conclusion

This essay merely scratches the surface of the types of actors who should be engaged in reducing the size and opacity of the double black box and of the kinds of approaches they could take, ideas that my book will detail in greater depth. The executive needs buy-in from Congress, the U.S. public, and U.S. companies for its AI-related national security choices if it is to maximize the number of companies willing to work with U.S. national security agencies, garner support from allies and non-committed states, and avoid future backlash against overly aggressive national security activities. The struggle to check activities that occur behind the curtain of secrecy is part of the ongoing democratic project in the United States. There is no single Platonic ideal of how to balance secrecy and transparency: the public must remain an active participant in this shared project.