Editor’s Note: This article is the next installment of our Symposium on AI and Human Rights, co-hosted with the Promise Institute for Human Rights at UCLA School of Law.
Across the U.S. public sector, the AI revolution has already begun. At the Patent and Trademark Office, AI-based programs have become a critical research tool for assessing applications. The Securities and Exchange Commission and the Internal Revenue Service use AI to search for suspicious or illegal behavior. AI is routinely employed as part of post-market surveillance efforts by the Food and Drug Administration. And if you submitted a comment to the Federal Communications Commission lately, there is a good chance it was processed using AI.
These use cases are the tip of the iceberg. Administrative agencies at all levels of government are racing to develop and deploy new AI-based tools for everything from interfacing with the public to regulatory enforcement. While lawmakers are slowly waking up to the dangers that AI can pose, there is still little formal oversight, or even centralized tracking, of how these technologies are proliferating within the government itself. This gap is particularly concerning since, for all the talk of AI’s existential risk to humanity, its potential to chip away at our fundamental rights presents a far more realistic threat than some Terminator-style apocalypse. While there are plenty of legitimate concerns about the role of Big Tech in the AI ecosystem, governments are the primary duty-bearers for guaranteeing fundamental rights and have a responsibility to be accountable to the people they serve. The concerns are particularly serious where public agencies misuse these tools, for example, in ways that discriminate against certain individuals or restrict access to healthcare. As more and more government functions are outsourced to machines, the implications for the future of democracy are troubling.
None of this is intended to discount the potential benefits that AI can bring to the complicated challenges of governing, especially in the context of ongoing demands for administrative agencies to do more with less. Innovation in the provision of public services is something we should welcome. In some cases, government use of AI may be necessary to keep pace with the growing technical complexity of agencies’ oversight functions. But at a time when public trust in government, and in our broader public institutions, has declined to critical levels, the potential for these systems to further erode the relationship between the people and the administrative state should be an extremely serious consideration. Developing appropriate safeguards to guide the piloting, development, and deployment of AI across the public sector, especially when these systems have the potential to impact fundamental rights, is vital.
U.S. Responses
To their credit, successive U.S. administrations have taken this challenge seriously. In January 2023, the National Institute of Standards and Technology (NIST) published its Artificial Intelligence Risk Management Framework (AI RMF), which provides a model assessment process for agencies to map potential risks, develop tracking mechanisms, and respond appropriately. In March 2024, the Office of Management and Budget (OMB) published an AI policy memo including a number of requirements and recommendations for executive branch agencies. Importantly, this includes a requirement to track and publicly report all AI use cases. The lack of any central monitoring or coordination around how these technologies are being deployed has been a serious deficiency: it is difficult to come up with a coherent public policy response to the use of AI in the public sector if nobody has a comprehensive understanding of what exactly is going on. The guidance also exempts national security systems, a large carve-out that could seriously impact individual rights. The AI policy memo also requires agencies to designate Chief AI Officers, and requires agencies covered by the Chief Financial Officers (CFO) Act to convene an AI governance committee, in order to guide and coordinate AI implementation, including managing risks.
Strengthening AI Governance
While these are largely positive developments, their efficacy depends, in large part, on good faith engagement by administrative officials in ensuring that AI deployment is safe, accountable, and reflective of the democratic principles that are meant to guide the exercise of state power. In the absence of meaningful engagement, it is easy for risk assessment processes to devolve into a box-checking exercise, where any concerns about the broader impacts of AI on agency operations are outweighed by pressure to slash budgets, downsize workforces, and present a façade that the agency is on the leading edge of innovation. While we all share a collective interest in maintaining public trust in government, individual managers within agencies may face competing short-term incentives that outweigh these structural values. In the absence of meaningful oversight, decision-making around whether a system is performing appropriately may also be unduly influenced by concerns about sunk costs and the negative implications of abandoning a tool that has been developed at considerable time and expense.
Robust governance over the use of AI in the public sector requires centralized, specialized oversight of decision-making at administrative agencies, including where these systems are succeeding or failing, and whether the use of AI is appropriate in the first place. However, this presents a challenge since AI risk assessment is heavily contextual. Part of the reason the AI RMF leaves so much flexibility to individual agencies is that an accurate risk assessment requires an intimate understanding of the agency’s workflow, procedures, and external stakeholders, as well as the specific way that a tool fits into these complex dynamics. One cannot properly assess risks without a comprehensive understanding of the tool’s operational context, which is difficult for those outside of the agency to fully grasp.
These tensions are not irreconcilable, however, and may be overcome by formalizing a system which combines robust first-instance risk assessments within the relevant agency with an appropriate mechanism for centralized oversight of these decisions, and of broader agency policies related to the use of AI. One model could be to delegate oversight to administrative law judges, who already play a parallel role in adjudicating certain aspects of agency conduct. However, judicial enforcement tends to be expensive and time-consuming, and this model is generally not conducive to cultivating the sort of collaborative relationships which are necessary to develop robust administrative infrastructure. Oversight could also be concentrated within an existing agency, such as the OMB or the Government Accountability Office, the latter of which already plays an important role in technology assessments, and which also enjoys the advantage of being located within the legislative branch, making it better able to act as an accountability mechanism over executive agencies. However, it is unclear whether either of these agencies possesses the requisite authority to effect meaningful change across the administrative state.
As an alternative model, it is worth considering the establishment of a specialized administrative oversight body. Such a body would not be entirely unprecedented under U.S. law: the Privacy and Civil Liberties Oversight Board (PCLOB) serves a roughly analogous function in assessing risks from counterterrorism programs. However, it too has a mixed record of effecting positive change across its areas of responsibility, due in part to its limited budget and powers, as well as persistent challenges in maintaining a quorum.

Around the world, a number of the most successful models of administrative oversight may be found within freedom of information (or right to information) systems, which are often overseen by an independent information commission or commissioner. While no existing body carries out this function for the federal Freedom of Information Act, such bodies are not unusual within state transparency systems. Where these systems are well-designed, they anticipate significant bureaucratic resistance, since public transparency programs function as a critical public accountability and oversight mechanism. Processing public information requests also requires a sophisticated understanding of agency operations, not only to know where responsive information may be found for a given request, but also to accurately apply any exceptions to disclosure. Information commissions or commissioners play a role in developing appropriate standards for local information officers to apply and in overseeing agency compliance with those standards. They are also often empowered to exercise a more general oversight function over agency operations and to interface with the public in promoting the right to information, including by hearing appeals against administrative non-compliance and ordering remedial measures. Information commissions can even be multistakeholder agencies, incorporating perspectives from law enforcement and national security agencies, civil society, and a range of other subject matter experts. Their importance to sustaining core democratic values is demonstrated by the fact that, in backsliding democracies, information commissions are often among the first targets for attack. Partly as a result, there is now a robust set of international best practices for ensuring these agencies’ independence and resilience.
Applying this model to AI governance, one can conceive of a specialized, independent, multistakeholder body tasked with ensuring that agency efforts to incorporate AI into their operations comply with centrally set standards, and with reviewing agency-generated risk assessments to ensure that they are carried out in an effective and meaningful way. These reviews could either be carried out periodically, with each agency reporting back to the “AI Commission” on an annual or semi-annual basis, or be prompted in response to public complaints or appeals against the use of AI for a particular function. Importantly, the Commission would need to incorporate appropriate safeguards to guarantee its independence and its ability to push back against bad decision-making at other agencies, as opposed to acting as a mere inter-agency hub.
While a specialized agency dealing only with government uses of AI may seem like overkill at present, it is appropriate and necessary given the likely transformative impact of these technologies across the administrative state. It is vital to establish some voice within government that can push back, in an adversarial way if need be, against the rising tide of automation across a growing number of government functions. However, in addition to the cost and political will required to stand up such an agency, significant conceptual work would be needed to determine how this body might fit into the existing U.S. federal apparatus, particularly given growing regulatory fragmentation on AI and broader judicial attacks on the administrative state. Nonetheless, as AI continues to be integrated into an increasing number of core government functions, it is important to think creatively about how to harness the advantages of these new technologies without losing the core qualities that make our government responsive, accountable, and fundamentally human.