As national security agencies race to adopt AI, it is critical that Congress and the Biden administration shore up oversight of these technologies, which they acknowledge pose serious risks to privacy, civil rights, and civil liberties. The Office of Management and Budget (OMB) recently issued guidance that mandates transparency for federal uses of AI, identifies numerous uses that are presumptively rights- and safety-impacting, and requires minimum risk management practices—including a requirement that agencies provide individuals with notice and an opportunity to challenge decisions made using AI. But national security systems—which include everything from domestic intelligence programs to autonomous weapons systems—are exempted from that guidance.
This exemption raises immense concerns about oversight, not least because protections like notice help affected individuals unlock information that can be used to challenge biased or inaccurate decision-making in the courts. Contesting national security decisions that affect individuals—even Americans and people living in the United States—is an enormous challenge. Notice and remedies are the exception rather than the rule and, even where they exist, they are often ineffectual. Litigants face severe barriers in accessing the courts, given the classified nature of many national security systems and the government’s use of secrecy to thwart court review. Black box AI systems—where the public lacks critical information about the underlying algorithms, such as the data used to train them or the rules they rely on—will only exacerbate this oversight gap, which deserves immediate attention.
To begin to address this problem, the Biden administration and Congress can look to the Privacy and Civil Liberties Oversight Board (PCLOB) as a model. Created in 2007 to mitigate the privacy and civil liberties risks posed by post-9/11 counterterrorism programs, the PCLOB has brought independent expertise and evaluation to some of the government’s most sensitive activities. While we have not always agreed with the board’s conclusions—and its activities are no substitute for individualized notice to people impacted and the provision of remedies—PCLOB has provided a critical measure of transparency and accountability in a system with extremely limited oversight.
However, neither PCLOB’s current mandate nor its capacity equips it to exercise oversight over all AI systems used for national security purposes. Its counterterrorism mandate would cover many of these systems but could leave out the use of AI in efforts such as the recent targeting of racial justice protesters under the guise of protecting federal monuments, or in counterintelligence programs targeting Chinese Americans. Moreover, the board’s public reports have so far not focused on the civil rights dimensions of the programs it examines, such as their impact on racial groups or minority communities, a critical concern with respect to both AI systems and national security programs.
PCLOB is also not resourced to review the myriad AI systems that national security agencies are poised to use. Only the chair of the five-member board serves in a full-time capacity, and the board has often had vacancies that prevented its operation. For years, the board has had a minuscule staff relative to its mandate and the sweeping counterterrorism programs it oversees, operating as one of the smallest federal agencies. Even at the end of 2023, it had just 25 staff, only one of whom was identified as a technologist. The limited capacity of PCLOB is reflected in its public output: since its creation, it has issued just eight public reports on counterterrorism programs and four follow-up assessments.
To provide urgently needed independent oversight of national security AI, Congress could either expand the jurisdiction and resources of PCLOB so that it could serve this role, or it could create a new body. Whichever course it chooses, it should ensure that the AI oversight authority’s mandate and resources are commensurate with the burgeoning systems it will need to watch over. Below we offer suggestions for how to structure an oversight authority for AI national security systems.
Independence
The importance of independent supervision cannot be overstated, especially in the national security context. As we have seen repeatedly with surveillance, watchlisting, immigration and travel vetting, and “insider threat” detection, internal oversight mechanisms alone are unlikely to provide adequate protection for individual rights. Civil liberties offices are too often subject to agency capture or sidelined by agency leadership, culture, or structure. AI guardrails may not be reliably enforced—or may be interpreted away by determined agency lawyers—without the pressure and accountability provided by external overseers.
PCLOB has brought this kind of independence to its work reviewing programs like the NSA’s bulk collection of Americans’ call records and warrantless surveillance of Americans under Section 702 of FISA. Intelligence agencies such as the FBI, CIA, and NSA may chafe when the board’s oversight yields different conclusions about the efficacy, lawfulness, or wisdom of their programs after accounting for privacy and civil liberties interests. But that is precisely PCLOB’s job, and it is a function that will be sorely needed as complex and intrusive AI systems proliferate.
Mandate
Just as PCLOB’s purview broadly covers “actions the executive branch takes to protect the Nation from terrorism,” the AI oversight authority should have a broad mandate that covers all AI used in national security systems. This matches the scope of the systems that are explicitly excluded from the OMB’s rules for rights-impacting AI and that are therefore most in need of oversight. It also parallels PCLOB’s authority: the board covers all counterterrorism programs except covert operations and does not distinguish between domestic and international programs or between military and civilian ones, although in practice it has focused primarily on intelligence efforts that have serious impacts on people in the United States.
PCLOB is charged with balancing the need for counterterrorism programs with the need to protect privacy and civil liberties. This seems to have been a workable formulation. With respect to the Section 215 call records program and U.S. person queries under Section 702, PCLOB has contributed substantially to public understanding and reform debates by providing transparency about how these programs operate, evaluations of their efficacy based on information held by agencies, and reform recommendations for mitigating privacy and civil liberties impacts. This is not to say that PCLOB has always met expectations (see, e.g., criticism of its weak report on Executive Order 12333), but it has by and large promoted transparency and reform of opaque government programs.
Whether the oversight agency is PCLOB or a new body, we recommend that its mandate encompass the following:
First, the AI oversight authority’s mandate should explicitly include protection of civil rights. This would be consistent with the President’s AI Executive Order 14110 and the OMB guidance, both of which recognize serious concerns about algorithmic discrimination and bias.
Second, the mandate should make clear that the AI oversight authority is charged with reviewing the full AI lifecycle, including the initial decision to deploy an AI system and any impact assessments and audits that have been performed. As emphasized in both the OMB guidance and the National Institute of Standards and Technology’s AI Risk Management Framework, adequately identifying and managing AI’s risks and impacts requires ongoing assessments throughout the AI system’s lifecycle—from planning and development to deployment and use, including changes to systems or the conditions of their use after deployment.
Third, the mandate should explicitly state that if the AI oversight authority determines that the need for an AI national security system does not meaningfully outweigh the risks to privacy, civil rights, and civil liberties, it may recommend that agencies stop using that system. The OMB memorandum envisages putting an end to the use of AI-based systems or tools in these circumstances, and PCLOB has previously made such recommendations for counterterrorism programs.
In addition to reviewing national security systems, the AI oversight authority should be charged with ensuring that privacy, civil rights, and civil liberties concerns are appropriately considered in the development and implementation of laws, regulations, and policies relating to AI national security systems. (PCLOB has this type of advice authority for counterterrorism activities.) As Congress considers how to regulate AI and agencies move quickly to incorporate it into their functions, advice provided by the AI oversight authority prior to system launch can help ensure that safeguards are built into the system from the outset.
Leadership and Resources
The AI oversight authority must have the leadership, expertise, and resources to carry out its functions effectively and expeditiously.
The qualifications for PCLOB members provide baseline criteria for appointments to the AI oversight authority. Board members are to be appointed “solely on the basis of their professional qualifications, achievements, public stature, expertise in civil liberties and privacy, and relevant experience.” For the AI oversight authority, qualifications should also include expertise in computer science and machine learning, as well as in investigating the impact of AI systems. Members are meant to be chosen “without regard to political affiliation,” although no more than three members of the board can be “members of the same political party.”
To keep pace with evolving technologies, we recommend that the AI oversight authority be explicitly authorized to create a committee of outside technical experts to provide advice to members and staff.
The current six-year term for PCLOB members, with the possibility of reappointment, seems about right for the AI oversight authority as well; a significant length of service allows board members to gain experience and work on complex issues. But all AI oversight authority members should be engaged on a full-time basis. This would improve on the PCLOB structure, where, except for the chair, board members serve on a part-time basis, paid for each day they are engaged in PCLOB duties and limited in the number of days per year in which they may perform board work. This may have contributed to the considerable turnover at PCLOB (only one member has served a full six-year term, most have served about three years, and the board has frequently been without a quorum). This structure has also likely contributed to the slow pace of its work noted above.
The authority should have enough full-time employees with the requisite skills, knowledge, training, and expertise to perform their oversight responsibilities, including relevant technical expertise (such as in auditing AI systems) and expertise in privacy, civil rights, and civil liberties. PCLOB is seriously under-resourced, with a 2023 budget of just $12.3 million and 25 staff members assigned to oversee the entire U.S. counterterrorism enterprise, which involved estimated spending of approximately $175 billion in 2018. PCLOB’s budget pales in comparison to the budgets of oversight bodies like DHS’s Office of Inspector General ($214.9 million) and DOJ’s Office of Inspector General ($149 million). While those offices oversee a broader range of issues—albeit within one agency—the AI oversight authority’s scope of oversight across the executive branch and the administration’s intention to further integrate technologically complex AI into intelligence and military operations warrant a sizeable budget. At present, the intelligence agencies that would be a significant focus for the AI oversight authority have a budget of more than $70 billion, and defense spending on AI nearly tripled from 2022 to 2023 and is certain to swell further.
Congress should also address the salary gap that has prevented many federal agencies from recruiting personnel with technical expertise. The Executive Order on AI directed the Office of Personnel Management to help agencies acquire AI subject matter experts by using pay flexibilities, benefits, and special salary rates to offer competitive salaries. Using this flexibility, DHS recently announced that it would hire a 50-person “AI Corps.” This type of salary flexibility should also be available to the AI oversight authority.
Access to Information
Access to information will be critical to the success of the AI oversight authority. PCLOB provides a workable model: where the board finds it necessary to carry out its responsibilities, it has access to the documents and personnel of federal agencies, including classified information, with the possibility of elevating problems to the head of the agency. The definition of documents to which PCLOB has access is expansive, but Congress should make clear—as it considered doing in 2021—that the AI oversight authority must also have access to information relating to underlying data, training and testing processes, and models. PCLOB can also obtain a subpoena (via the Attorney General) to obtain documents and information from persons outside the executive branch. This will be particularly important for the AI oversight authority, since significant relevant information may be held by the third parties who design and develop AI systems. Notably, PCLOB has not reported problems with access to information in its reports, although oversight bodies typically engage in some negotiation in this regard.
External oversight bodies also need information to decide where to focus attention when confronted with sprawling agencies and programs. Privacy and civil liberties officers at agencies can help; they are, for example, required to provide semiannual reports to PCLOB. However, the extent to which these offices have information about AI national security systems will likely vary. We therefore recommend that Congress also require each agency head to designate a senior official to provide the AI oversight authority with notice of any existing or proposed national security AI system. If the White House creates a system for agencies to inventory and report their national security systems to a specified authority, as we have recommended, these documents too should be made available to the AI oversight authority.
Promote Transparency and Accountability
The AI oversight authority can also play an important role in bringing transparency to national security uses of AI and enhancing public trust in them. PCLOB has often played this role. For example, it has brought to public attention critical information about the Section 215 and Section 702 surveillance programs. The board is mandated to make its reports to Congress available to the public “to the greatest extent that is consistent with the protection of classified information and applicable law.” Despite that statutory mandate, PCLOB has been criticized, including by a board member, for failing to seek declassification of its report on the NSA’s XKeyscore program. To promote transparency, we recommend that the AI oversight authority be required to seek declassification of its reports, including by requesting public interest declassification of information and raising with agencies instances where the authority considers information to be overly or improperly classified.
* * *
The stakes that AI-enabled national security systems pose for the American people are simply too high to leave oversight to unaccountable and opaque internal mechanisms. To mitigate the risks these systems pose, Congress must create an AI oversight authority with the mandate and resources to help build safeguards into these systems from the outset, to keep a close eye on their operation, and to ensure that the public and policymakers are informed about their breadth, use, and impact.