In light of the escalating public debate surrounding AI’s role in society, the U.S. Department of Justice (DOJ) has intensified its focus on how artificial intelligence is reshaping the justice system. In February, U.S. Deputy Attorney General Lisa Monaco announced that the DOJ would convene a “Justice AI Initiative” tasked with reporting to President Joe Biden on uses of AI within the justice system. Flowing from Biden’s 2023 Executive Order on AI safety, security, and trustworthiness, the Justice AI Initiative has convened six times so far, meeting with key stakeholders to discuss important issues, including how automated decision-making affects rights and opportunities; fairness, equity, and democracy; information sharing with private industry; and malicious uses of AI by criminal actors. The diversity of issues canvassed to date offers a snapshot of the extent of ongoing research in the space: right now, AI is everywhere. For the public and legal sectors, AI appears poised to transform business as usual, meaning regulators have started to pay careful attention.

In her announcement, Monaco emphasized the DOJ’s mission to “uphold the rule of law, to keep our communities safe, and to protect civil rights.” These are laudable objectives, ones that showcase the law’s potential as a social organizing force that can be used to preserve human rights. However, as a legal construct, the Rule of Law can be tricky to pin down. The epitome of a contested concept, the Rule of Law is malleable enough to be invoked in pursuit of varying ideologies. It’s somehow possible to hear the Rule of Law both extolled as the cornerstone of democracy and proffered as justification for controversial exercises of power. When it comes to power exercised by the state, the Rule of Law can offer appropriate checks and balances, or it can present a shield behind which to hide more nefarious objectives. Adding AI technology into the mix only amplifies this effect, supercharging both technical capabilities and the possibility of authoritarian oversight.

AI technology’s incursions into law and legal services are occurring in key areas, including law enforcement, routine decision-making in administrative agencies, legal research methods, and the everyday practice of law. In some scenarios, using AI technology might alleviate burdensome or mundane tasks, reducing overall workload with enhanced processing power, improving accuracy, efficacy, and repeatability, and maybe even unleashing new human potential through AI-human partnerships. Lawyers tout the advantages of AI systems for simple classification tasks such as e-discovery, and of large language models (LLMs) for rote communications and other menial tasks. Meanwhile, many suggest using automated decision-making processes to streamline high-volume, low-impact decisions in the administrative state, and for good reason: law has long been vulnerable to the criticism that human decision-making is noisy, inconsistent, and slow.

But the increasing integration of AI into the justice system also carries significant risks. In particular, what I’ve termed “Rule of Law problems” can emerge, where the checks and balances that constrain the legal power that can be deployed against individuals are destabilized. These problems can be exacerbated by the blunt force of mathematical models, often discussed through the lens of fairness, accountability, and transparency (i.e., algorithms as weapons of math destruction). AI tends to reinforce dominant hierarchies: data-driven approaches are used to justify trends anchored in discriminatory legacies, impose disproportionate impacts on individuals within marginalized groups, and are often first deployed in low-rights environments.

Around the world, countries are beginning to review how best to respond to the challenges presented by AI. The European Union’s Artificial Intelligence Act entered into force on Aug. 1, 2024, with pieces of the framework taking effect over the next two years. Within six months, AI systems that pose unacceptable risks will be banned (e.g., those that engage in cognitive behavioral manipulation, social scoring, biometric categorization, and systems like facial recognition that identify people in real time). Given the well-known “Brussels Effect,” whereby countries worldwide mimic the EU’s regulatory choices, this framework may have significant follow-on effects, as the GDPR did for privacy. Yet, so far, other countries have been slower to adopt new laws. Canada’s proposed Digital Charter Implementation Act has been in progress since 2022, languishing on the order paper while stakeholder groups continue to debate its merits. Countries like Britain and Singapore have downplayed the need for overarching AI regulation, preferring a sector-based, “pro-innovation” approach spearheaded by existing regulatory bodies. China, on the other hand, has proposed more permissive draft legislation promoting industrial AI development.

Global movements so far demonstrate the tensions in AI governance: Is AI different enough to demand its own regulatory framework? If so, should the framework be restrictive, protecting human rights, or expansive, fostering innovation? Could it potentially do both at once? The Justice AI Initiative ought to consider these framing issues in light of its stated commitment to the Rule of Law. To this end, I raise three concerns to guide policymaking, cataloging “Rule of Law problems” revealed by law’s new reliance on AI.

Technological Neutrality May Not Be Sufficient to Regulate AI

Emphasizing the general applicability of laws, Monaco invokes Judge Easterbrook’s well-known admonition about the “Law of the Horse.” Asked to speak to law students at the University of Chicago about Law and Cyberspace in 1996, Easterbrook famously cautioned against an undue focus on novelty, arguing that law students were best positioned for the legal world awaiting them when they studied subjects of general applicability. Through this lens, Easterbrook argued that nothing useful comes from gathering all the cases from different areas of law dealing in horses. The same sorts of critiques have been made against various subjects of special study (the prototypical “Law and _______” classes). Law and technology topics tend to fall victim to this criticism, given their propensity to deal in brand new, almost science-fictional subject matter. Harnessing this lesson, Monaco reminds legal observers that existing laws remain intact: AI-powered discrimination is still discrimination, AI-powered identity theft is still identity theft, and so on.

When it comes to emerging technologies, the best laws tend to be those that are at least somewhat technologically neutral. Statutes that apply regardless of the technology used can better withstand the driving pace of progress, regulating innovations under an umbrella of general applicability rather than requiring substantial rewriting every time new products arrive on the market. Yet, sometimes, the unique characteristics of new technologies can put pressure on existing legal frameworks. Early internet scholarship grappled with similar questions, observing that the transformative aspects of the World Wide Web reconfigured debates about jurisdiction, regulation, and paths of behavior. Code is law, Lawrence Lessig taught us: the way internet architectures are constructed prescribes particular pathways of behavior that become non-negotiable.

Lessig’s “code is law” adage has been a mainstay of internet scholarship for more than two decades. The advent of AI, however, has put new strain on this metaphor. While Lessig anticipated an online world of coded constraints, AI’s chief innovation is its adaptability. Rather than a world of rigid structure, where digital architecture mandates behavior in a manner analogous to (or even stricter than) law, AI mutates and transforms, sometimes arriving at solutions powered by rationale opaque to even the most experienced programmer. This matters, I argue, in a legal universe built on deliberative processes and key criteria establishing how decisions should be made. Monaco rightly notes AI’s potential to accelerate harms; in my view, the risk goes beyond this, complicating our very conception of the Rule of Law. As automation continues to displace human discretion in legal control mechanisms, it reveals an authoritarian character: the Rule of Law risks conversion into a Rule by Law. Automated mechanisms can mete out decisions with tremendous speed and minimal transparency, forgoing the longstanding oversight of judicial bodies. At the same time, AI presents a significant risk that data-driven approaches will encode bias or discrimination into law’s administration, masked by opacity or corporate control.

Private Companies’ Involvement in Public Functions Risks Abuses of Power

When it comes to injecting automated solutions into both the administration and practice of law, much of the actual technological work tends to be outsourced to private industry. Rule of Law problems can arise when private corporations begin controlling the functions of public entities, injecting for-profit motivations and corporate obscurity into aspects of state administration traditionally conceived as public-facing, open, and transparent. While some government departments do build in-house solutions, others simply use whatever happens to be available on the market. In the law enforcement context, we’ve already seen this happen with COMPAS and Clearview AI, among other tools. Individuals sentenced with the aid of the COMPAS algorithm remain at the mercy of its proprietary 137-item questionnaire, with no insight into how their recidivism scores were calculated. Individuals identified through a police force’s use of Clearview AI are subject to the constitutional concerns raised by a database built without consent (and run the further risk of misidentification, given facial recognition’s poor track record in identifying people with darker skin). These concerns are often framed in terms of transparency and opacity, but I argue they are also concerns about the Rule of Law.

Usually, we think about the Rule of Law only in relation to state power: is the power being exercised under appropriate authority, non-arbitrarily, and prospectively rather than retrospectively? However, public bodies’ use of private technologies muddies the waters. Anywhere we see the potential for abuses of power, we should be concerned about the associated Rule of Law implications. While government-created AI solutions powered by internal datasets of legitimately collected data may still raise concerns about bias, discrimination, or rote determination of important decisions, these concerns are supercharged by the participation of actors from private industry. Profit motives can thereby creep into government, powered by this millennium’s new extractive resource: personal data. While there might be some appropriate uses of private sector technologies in public settings, constraints are needed to ensure that carefully designed mechanisms of legislative oversight and intentional flexibility are balanced against industry’s promises of new automated procedures. Otherwise, the transparency and explainability demanded in administrative decision-making contexts risk replacement by opaque, proprietary procedures, often justified with appeals to consistency and impartiality. And, before long, overreliance on technological methods might altogether displace human expertise.

Automation Bias Alters How Decisions Are Made

Finally, and perhaps most importantly, the introduction of AI into human decision-making invites a well-established psychological phenomenon: automation bias. In situations of human-automation interaction, humans commonly begin to defer to the automated system over their own judgment. Tasks might be done improperly, or important details glossed over, because the human assumes the machine is simply better at the task at hand. Meanwhile, people asked to manage many automated tasks at once can experience automation complacency, becoming inclined to rely on the automated system instead of their own instincts. This is exacerbated when many bells and whistles demand a human operator’s attention at the same time (think of the advanced driver assistance systems in many new vehicles). Adding to the chaos, when humans see an AI-generated response first, they can become irrationally anchored to that response, trusting it inappropriately even when given subsequent information that ought to destabilize that trust.

Automation bias therefore risks a dangerous slippage in who is ultimately making decisions. As AI-powered tools are introduced into legal spaces, a common refrain is the ongoing involvement of the human-in-the-loop. Surely, retaining human oversight will mitigate AI’s dangerous effects, right? And at first, the human-in-the-loop might question the AI tool as it presents its outputs, providing the proverbial smell test. But, after a while, as our trust in the technology grows, so too does our reliance. I’ve already observed this phenomenon as people attempt to use ChatGPT as a research tool. ChatGPT’s ability to mimic human language is convincing; we can ask it questions and receive a meaningful facsimile of human prose that might even be mostly accurate (if painfully shallow). Yet, even when people are told ChatGPT hallucinates information, its cunning anthropomorphization lulls users into a false sense of security, sleepwalking in what Langdon Winner calls technological somnambulism. I’ve seen ChatGPT provide lists of real legal cases interspersed with just a few hallucinations, all presented in the correct style and format. Let’s say you check the first six citations and confirm the cases are real; will you definitely keep going to find the two fabricated ones hiding toward the end of the list? Misunderstanding the technology’s capabilities is how we end up with ChatGPT-hallucinated cases cited in court, but it is also how we slowly allow the technology to encroach upon the purview of human judgment, placating us with its just-close-enough outputs.

* * *

Where does this leave us? Amid AI’s promises to secure a better future, there may well be something unique about AI that requires specific, focused regulation when it comes to its application in the justice system. AI’s adaptability, powered by advanced pattern-matching capabilities that draw novel insights out of massive datasets, creates an acceleration effect. Its speed and perceived reliability risk transforming potential answers into concrete outcomes with worrying ease. Deploying AI-powered tools developed by private industry within the legal sphere, especially the administrative state, requires striking a careful balance, which in turn requires thinking through the degree of influence we want to allow corporate entities to have within our public institutions. Overreliance on AI-powered tools risks undue corporate influence, coupled with automation bias. Those risks are simply too high when it comes to the cornerstone of functioning democracies: a transparent and robust legal system that aims to guarantee just outcomes to its citizens through the Rule of Law.

IMAGE: A digital judge’s gavel covered in binary code (via Getty Images).