Transparency is one of the core values animating White House efforts to create rules and guidelines for the federal government’s use of artificial intelligence (AI). But exemptions for national security threaten to obscure some of the highest-risk uses of AI: for example, determining who is surveilled, who is questioned and searched at the airport, who is placed on watchlists, and even who is targeted using lethal force. While there may be aspects of these AI systems that should remain under wraps, the approach taken by the Biden administration to date leans far too heavily toward secrecy.
The Biden administration needs a much more nuanced approach — one that applies the lessons of the past to foster public trust in this new generation of technologies. Blanket secrecy and needless overclassification will only undermine that goal. Ten years ago, on the heels of Edward Snowden’s shocking surveillance revelations, intelligence officials acknowledged that the failure to disclose basic facts about the breadth of the National Security Agency’s (NSA) activities had badly damaged public trust and undercut principles of democratic consent. The intelligence agencies pledged to disclose significantly more information going forward. But when it comes to AI, national security officials appear to be losing sight of those lessons and promises. The White House should recommit to providing meaningful transparency with respect to national security-related systems, especially those that impact people in the United States.
The Forthcoming National Security Memorandum on AI
As we have discussed previously, the president’s October 2023 executive order on AI adopts a two-track approach. Federal agencies using AI for decisions relating to matters such as law enforcement, housing, hiring, and financial services are generally required to follow guidance issued by the Office of Management and Budget (OMB). While this guidance can be further strengthened, it includes important safeguards for AI systems — such as transparency, mandatory risk assessments, and robust independent testing. National security systems are slated to be covered by a separate National Security Memorandum (NSM) currently being prepared via an interagency process led by senior White House officials. In developing the NSM, the administration must hew to its oft-stated commitment to transparency, using the OMB guidance on AI as a baseline, even if some adjustments are needed for handling classified information.
Transparency is in some ways even more necessary with respect to national security systems, such as the surveillance and vetting programs run by the Federal Bureau of Investigation (FBI), the Department of Homeland Security (DHS), and the NSA, which can have a profound impact on people’s lives. The individuals affected often have little direct knowledge that they are subject to national security surveillance — and few means of obtaining that information — so opportunities for accountability and redress are severely limited. These agencies make secret judgments that affect constitutional rights: deciding who should be barred from air travel, whose communications or social media posts should be collected or scrutinized, and which travelers must undergo invasive screening at ports of entry. Reliance on inscrutable forms of algorithmic decision-making exacerbates longstanding problems with accuracy, due process, and discrimination, some of which have resulted in extensive litigation. Bias is a fundamental concern with respect to the use of AI in policing and criminal justice systems, with the brunt of algorithmic discrimination too often borne by communities of color. Transparency about AI tools is a starting point for mitigating these concerns, shining a light on whether measures to ensure that these systems are effective, accurate, fair, and rights-respecting have been properly implemented.
Fortunately, the past decade has seen the development of transparency practices aimed at building trust in the activities of national security agencies, which could serve as a roadmap for AI transparency. As it drafts the NSM, the Biden administration can and must increase transparency by taking the following steps:
1. Clearly define the parameters and criteria for “national security systems.”
The definition of “national security system” referenced in Executive Order 14110 is extremely broad, and its limits are unclear in practice, potentially covering systems involved in everything from military targeting to domestic surveillance to immigration enforcement. The NSM should provide authoritative guidance and specific criteria to agencies for evaluating whether AI systems fit within this category, as well as examples of what does or does not qualify as a national security system — especially in the context of FBI and DHS activities that span law enforcement, immigration, and border security. The definition should be construed narrowly so that this category does not become a wide loophole for agencies seeking to avoid OMB’s requirements. The NSM’s guidance and examples should be public.
2. Develop procedures for creating and maintaining detailed inventories of “national security systems.”
The administration should have a full catalog of national security systems, a process for ensuring that agencies properly designate such systems, and mechanisms for releasing information about these AI systems to the public.
To start, the NSM should direct each agency to submit to designated authorities — for example, the Defense Department (DOD) and the Office of the Director of National Intelligence (ODNI) — an inventory of the AI systems it believes qualify as “national security systems.” These inventories should be sufficiently descriptive and detailed to provide essential information about each system, including how it relies on AI or other forms of automation; the purpose(s) for which the system was designed, and any modifications made; and the purpose(s) for which the system is actually used by the agency and any other agencies, including whether the system is used for non-national security purposes such as law enforcement or immigration. The catalog should also include a description of how government personnel interact with the system, a summary of the training data used, categories of data inputs and outputs, privacy protections, and any risks that use of the system creates for civil rights and liberties.
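To illustrate what this level of detail could look like in practice, the sketch below shows one possible machine-readable format for a single inventory entry, with fields mirroring the recommendations above. It is purely hypothetical: the schema, field names, and example system are invented for illustration and do not reflect any actual agency inventory or government data standard.

```python
# Hypothetical sketch of a machine-readable entry in an agency's
# "national security system" inventory, mirroring the fields recommended
# above. The schema and the example system are invented for illustration.
from dataclasses import dataclass


@dataclass
class NationalSecuritySystemEntry:
    system_name: str
    owning_agency: str
    automation_description: str       # how the system relies on AI or automation
    designed_purposes: list[str]      # purpose(s) the system was designed for
    modifications: list[str]          # changes made since initial design
    actual_uses: list[str]            # purpose(s) for which it is actually used
    sharing_agencies: list[str]       # other agencies that also use the system
    non_national_security_uses: list[str]  # e.g., law enforcement, immigration
    human_interaction: str            # how government personnel interact with it
    training_data_summary: str
    data_input_categories: list[str]
    data_output_categories: list[str]
    privacy_protections: list[str]
    civil_rights_risks: list[str]
    designation_basis: str            # documented basis for the designation


# Invented example entry, for illustration only:
entry = NationalSecuritySystemEntry(
    system_name="Example Traveler Screening Tool",
    owning_agency="DHS",
    automation_description="Machine learning model scores traveler records",
    designed_purposes=["border screening"],
    modifications=["model retrained on an expanded dataset"],
    actual_uses=["border screening", "watchlist checks"],
    sharing_agencies=["FBI"],
    non_national_security_uses=["immigration enforcement"],
    human_interaction="Officers review all scores before taking any action",
    training_data_summary="Historical border-crossing records",
    data_input_categories=["passport data", "travel history"],
    data_output_categories=["risk score", "referral flag"],
    privacy_protections=["role-based access controls", "audit logging"],
    civil_rights_risks=["potential disparate error rates across nationalities"],
    designation_basis="Asserted intelligence function; designation under review",
)
print(f"{entry.system_name}: {len(entry.civil_rights_risks)} documented risk(s)")
```

A structured format along these lines, whatever its precise fields, would also make it easier for reviewing authorities to audit and compare designations across agencies, as discussed next.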
Agencies should document the basis for concluding that each of these AI systems qualifies as a “national security system.” Agency designations should be rigorously reviewed by OMB (for unclassified systems), ODNI (for classified intelligence systems), and DOD (for classified military systems). If a system does not meet the criteria for a national security system, then as Executive Order 14110 makes clear, it is subject to the OMB guidance.
3. Undertake mandatory declassification reviews and disclosure of key documentation on AI use.
The administration should adopt strategies that promote transparency with respect to both unclassified and classified systems. Overclassification is a well-recognized obstacle to both public transparency and critical information-sharing within government. The NSM should mitigate this risk, drawing on the transparency principles and practices developed both inside and outside national security agencies over the past decade.
In practice, this means mandating disclosure of key documentation — such as agencies’ AI inventories, impact assessments, efficacy studies, and controlling policies and legal memoranda concerning the use of AI — for unclassified systems. And it means mandating declassification review of the same materials for classified systems. Although classification may prevent the public disclosure of details related to the operation of specific national security systems used for defense or foreign intelligence purposes, agencies should be able to disclose general rules and legal analyses, and they should be required to declassify key facts about the existence, nature, functioning, and impacts of AI systems to the greatest extent practicable, particularly where they affect Americans and individuals in the United States. Where redactions would risk leaving key documents unintelligible to the public, the NSM should direct the relevant agency to prepare an unclassified summary for publication.
Mandatory declassification review and disclosure of analogous information have proven possible and valuable in the context of other classified intelligence activities. For example, the Attorney General and Director of National Intelligence (DNI) are required to conduct a declassification review of certain significant opinions issued by the Foreign Intelligence Surveillance Court and to declassify those opinions for public release to the “greatest extent practicable.” The declassified opinions typically provide a description of the specific surveillance activity under review and how it operates; a description of any incidents of non-compliance with applicable laws, agency rules, and court orders; and legal analysis addressing whether the surveillance is lawful. Although operational details are sometimes redacted, the government has been able to declassify such details in many instances, and the opinions have been an invaluable source of public understanding of the government’s activities under the Foreign Intelligence Surveillance Act (FISA).
Some of these disclosures may go beyond what is currently required on the face of the OMB memorandum. But because security-related systems can have extreme consequences for the rights and liberty of people in the United States, there is an especially great need for robust public oversight and attention to civil rights impacts. Indeed, agencies like DHS and the FBI already provide similar disclosures about their use of some AI systems. For example, in the context of biometric systems relying on AI — like DHS’s CBP One and the FBI’s facial recognition programs — both agencies have published Privacy Impact Assessments (PIAs) that provide basic information about how those systems operate and the risks they pose.
4. Provide meaningful public information.
The NSM should incentivize agencies like DHS and the FBI to improve their public reporting. For example, these agencies release information about electronic systems and programs that handle sensitive data and personal information through PIAs and System of Records Notices (SORNs). While these reports provide the public and policymakers with important insight into intrusive tools — like those used for risk scoring of travelers, iris matching of criminal suspects, and facial recognition of asylum seekers — agencies’ current reporting has serious shortcomings. Required documentation is often issued years after a system is in operation; notices are skipped for systems with new capabilities on the theory that they are covered by earlier notices; and the fragmented nature of documentation, which generally describes only individual systems, makes it difficult to understand interconnected operations. In other instances, the agencies’ PIAs have glaring holes. For example, the FBI uses AI as part of its Threat Intake Processing System to process and score tips and leads received from a variety of sources, including calls, electronic leads, and social media posts — and yet the FBI’s PIA for this system makes no mention of AI at all.
As the National Security Commission on AI recommended, the executive branch can take steps to improve the utility of PIAs and SORNs so that they effectuate transparency rather than obscure government operations. The NSM should direct agency heads to undertake a swift review of these systems and institute changes to speed up the issuance of documents, to ensure that new systems are not deployed under cover of earlier documentation, and to provide information on how each system acquires, collects, uses, and stores personal information.
In addition, the NSM should require each agency to issue an annual transparency report on its national security AI systems. These reports should consolidate information from various sources, ensuring accessibility and comprehensive disclosure of agency AI usage. The report should provide insight into the efficacy and impacts of AI systems, including information about decisions made based on the system, error rates and thresholds, data inputs and outputs, who is authorized to access each system, and the policies in place to protect privacy, civil rights, and civil liberties. (Data mining reports required for counterterrorism programs provide a model for such overviews, although they often do not contain enough detail.)
For systems that impact the rights or safety of people in the United States, transparency reports should include annual statistical reporting on the nature and scale of those impacts. This reporting could be modeled on the annual statistical transparency reports issued by the DNI that provide data on orders issued under FISA and the number of persons impacted by certain FBI, NSA, and CIA surveillance activities. The administration should work proactively to ensure that agencies are prepared to provide a robust public accounting of national security systems using AI.
* * *
Transparency is just one of many critical safeguards needed to govern AI systems used for national security purposes. An approach that allows AI systems to proliferate in secret under the banner of “national security” will not produce genuine accountability and trust, let alone democratic consent. The Biden administration should learn from past mistakes, anticipate and avoid known pitfalls such as overclassification, and embrace practices that provide meaningful public oversight.