Last month, the Department of Homeland Security (DHS) published an inventory of its systems that incorporate AI, covering systems that are active, in development, and sunsetted. The inventory provides information about every one of the department's 158 active AI use cases, a significant achievement compared to the now-archived 2023 inventory, which contained just 67. In many places, the inventory complements and expands on other DHS privacy documentation while clarifying inconsistencies from the previous iteration. The release also includes a more detailed inventory that reports several technical metrics.

But while the inventory represents a significant expansion of the department’s public disclosures on its use of AI, it also falls short in important ways. In this article, we assess areas for improvement in the DHS inventory that the Trump administration should consider as it reevaluates AI policies.

Background on the AI Inventory

The AI inventory was published pursuant to four significant White House and agency directives, as well as related legislation, spanning both the Trump and Biden administrations. A 2020 executive order from President Donald Trump, which directed federal agencies to “design, develop, acquire, and use AI in a manner that fosters public trust and confidence while protecting privacy, civil rights, civil liberties, and American values,” required most federal agencies to publish an annual inventory of their uses of AI. The AI in Government Act, enacted the same year, directed the White House’s Office of Management and Budget (OMB) to set out best practices to mitigate the discriminatory impacts of AI. Two years later, in the Advancing American AI Act, Congress directed the Secretary of Homeland Security to issue policies and procedures that would, among other things, give “full consideration to the privacy, civil rights, and civil liberties impacts of artificial intelligence-enabled systems.”

In October 2023, President Joe Biden issued an Executive Order (EO) on AI requiring agencies to take additional steps to protect Americans’ privacy, civil liberties, and civil rights, including by improving “the collection, reporting, and publication of agency AI use cases.” In March 2024, OMB issued a memorandum pursuant to the AI in Government Act and the Advancing American AI Act, as well as President Biden’s EO. The memo required that, starting in December 2024, agencies report which of their AI systems affect people’s rights or safety, institute minimum risk management practices for those systems, explain the risks (including risks of inequitable outcomes) and how they were being managed, and cease using any AI system the agencies could not bring into compliance with minimum practices. Finally, in September 2024, DHS released a plan explaining its process for complying with the OMB memo.

Upon returning to office on Jan. 20, President Trump rescinded the October 2023 EO on AI. It is not yet clear what, if anything, will take its place. High-level personnel changes at DHS are already underway, along with a shift in its focus and priorities. But the department's obligation to publish an inventory continues under President Trump's 2020 executive order and the statutes described above. Moreover, the OMB memo remains in place, at least for now.

Room for Improvement

Until and unless the Trump administration withdraws the OMB memo and its requirements for rights- and safety-impacting systems, there are several problems that should be addressed. We detail two major areas here: (1) systems categorized as not rights- or safety-impacting that, when viewed more holistically, clearly affect individuals’ rights, and (2) insufficient explanations of risk mitigations and extensions.

More broadly, the inventory highlights ongoing, systemic DHS practices that demand a significant shift in approach. DHS must address these issues if it is to fulfill the collective goals of previous policy and legislative directives: fostering public trust and confidence; protecting privacy, civil rights, civil liberties, and American values; ensuring that AI is transparent, safe, and secure; and advancing equity and combating discrimination. Regardless of the administration, these are values that are critical to protecting the American people and carrying out DHS's mission safely and effectively.

Narrow Designation of Systems that Impact Rights or Safety

The OMB memo subjects AI that is “safety and/or rights impacting” to certain risk management practices. OMB’s guidance focuses on individual impact, downplaying the systemic context in which these tools are used. And in turn, DHS narrowly interprets the standard, limiting the number of covered use cases.

Under the OMB guidance, AI is rights-impacting when its “output serves as a principal basis for a decision or action concerning a specific individual or entity” and has a “significant effect” on rights, liberty, privacy, equal opportunity, or the ability to apply for government resources, such as housing. AI is safety-impacting if it “produces an action or serves as a principal basis for a decision that has the potential to significantly impact the safety of” human life, among other things.

Both of these standards rely on the term “a principal basis.” Neither OMB nor DHS has offered any further public explanation of the standard, which is challenging to quantify and invites officials to exercise substantial discretion. By its terms, however, “a principal basis” need be only an important factor in a decision – not the predominant reason.

Nevertheless, DHS has applied this standard narrowly in practice. In its latest inventory, DHS identified 39 use cases that impact safety or rights. In many other cases, it appears to have concluded that if a human reviews the output of an AI tool, the tool is not rights-impacting – even where it is used as part of a process that is likely to impact rights.

For instance, an autonomous Customs and Border Protection (CBP) system that identifies the presence of and distinguishes between persons, animals, and vehicles on the border was deemed not to be rights-impacting because a human CBP agent reviews its output before making a decision. But the entire point of the AI tool is to initiate a series of events to locate, identify, interdict, and detain people along the border, directly and significantly impacting their rights.

Similarly, Immigration and Customs Enforcement (ICE) is developing a real-time translation tool to assist personnel speaking with non-English speakers. DHS has determined that this tool does not impact rights, seemingly in part because it "will not be used alone for materials vital to an individual's rights or benefits." But it is likely that agents will use the tool to explain rights to migrants, communicate agency expectations, and even convey critical deadlines that, if missed, may waive a person's rights. ICE has also deployed AI tools to translate and extract information from email and mobile phones. DHS has designated these as non-rights-impacting because a human reviews the output, but this information may be fed into an investigation or prosecution, which will have an impact on rights. In addition, the well-documented tendency of humans to defer to automated outputs undercuts the value of human review as a mitigation.

DHS concluded that U.S. Citizenship and Immigration Services' (USCIS) large language model trainer for refugee officers is not rights-impacting, even though it teaches personnel how to accomplish a critical job with an impact on human rights, and the model may incorporate the historical bias of its training data. One senior DHS official expressed optimism about the pilot but referred to DHS's ChatGPT training data as "garbage," magnifying concerns.

In its initial use case inventory, DHS presumed that all AI uses related to immigration were rights-impacting. Once it applied its strict interpretation of the “principal basis” standard, it rescinded that determination in many cases. But this initial view – that the immigration context makes any AI use rights impacting – is correct: Risks posed by AI tools cannot be separated from the DHS operations for which they are designed and used. The principal basis standard is faulty in focusing on the risks of the AI’s output alone, not those inherent in the DHS operation into which the AI is integrated. DHS’s implementation of the standard also demonstrates the perils of leaving agencies to implement a vague and highly subjective threshold for whether basic safeguards should apply – safeguards that should apply to any AI system as a baseline matter.

Yet in a department where data trumps everything, many decisions are made based on a range of information, making it difficult to identify any single principal basis. Officers will likely make decisions based in part on AI output in conjunction with the information and analysis in border security tools like CBP's Automated Targeting System (ATS). That system screens people based on a range of data and analytic techniques, each of which can inject bias into the system such that it impacts rights. Because an officer uses it in conjunction with other data, an AI output may not be a "principal" basis for a decision, or it may be hard to quantify how large a role it plays, but it is nevertheless one that impacts rights and should be subject to risk mitigation. Everything, or nearly so, about these operations is rights-impacting.

Inadequate Transparency 

Second, while the inventory leads its peers in transparency, it should embrace additional disclosure in two areas: explanation of risk mitigations and explanation of extensions.

Under the OMB memo, the department must detail the risks of each of the 39 rights- and safety-impacting systems it has identified and explain how it manages those risks. More generally, it must provide “public notice and plain-language documentation” for each entry in the inventory. The inventory technically complies with the OMB memo’s vague requirement. But there is little consistency across components when it comes to revealing the actual steps taken to mitigate risks to critical rights.

Previous White House guidance allowing for non-disclosure where "sensitive law enforcement [and] national security" information is involved swallows the rule when it comes to CBP. Every single one of CBP's rights- and safety-impacting use cases for which no extension was requested – accounting for a third of all of DHS's deployed rights- or safety-impacting use cases – has the same boilerplate language about "key identified risks and mitigations," which reveals no real information:

This AI system has been tested in operational or real-world environments and risks have been identified and mitigated. Information about risks and mitigations for this use case may be law enforcement sensitive; DHS continues to review details for potential future disclosure in accordance with applicable law and policy.

Take CBP One, a mobile app developed by DHS to streamline the process of applying for asylum at the U.S. border. The app has been harshly criticized by immigration and human rights groups, lawmakers in the House and Senate, and DHS's own inspector general for a multitude of failings, including the fact that it functions in only three languages, requires a smartphone, frequently glitches, is not accessible to users with a range of disabilities, and fails to recognize darker-skinned migrants. Complaints about its impact on migrants are also under investigation by DHS's Office for Civil Rights and Civil Liberties (CRCL). (Note: On Jan. 20, 2025, CBP One was shut down. It is not clear whether the app has been permanently ended or whether it will be restarted in an amended version, perhaps with even fewer protections for asylum seekers.)

In light of the extensive reporting on the problems with CBP One, it strains credulity that all information about its risks would be law enforcement sensitive. (In fact, the entry only indicates it “may” be sensitive, suggesting the department may not have undertaken a full inquiry.) Indeed, DHS published an updated privacy impact assessment several months earlier that includes extensive details on the AI-driven element of CBP One, which further suggests that the information need not be entirely restricted. In any event, it is hard to see how revealing information about making a publicly available mobile phone app function more effectively, and with less discriminatory impact, would pose a risk to law enforcement equities.

True transparency – and the ability to hold DHS accountable, including by assessing whether systems that cannot be brought into compliance should be retired, as directed by Trump's 2020 Executive Order on AI – demands more. The same is true of CBP's Passenger Targeting and Vetting system, which uses AI to analyze passenger information, including travel patterns and historical records, to identify passengers for additional scrutiny and integrates the outputs into ATS. The Brennan Center has previously written about the issues with ATS as well as the significant tensions between ATS and the White House's Blueprint for an AI Bill of Rights, which is meant to promote civil rights and democratic values.

The other main immigration-related components – USCIS and ICE – have a somewhat better record of disclosure, but they too are a mixed bag.

One USCIS use case describes ARGOS, a system that crunches public datasets to help analysts determine the risk of fraud by companies using E-Verify, an electronic system that allows employers to confirm that potential employees are legally permitted to work in the United States. The inventory lists a variety of risks from ARGOS, including the skewing of results due to prioritization of content by search engines (“data collection bias”), inconsistent performance across different industries (“lack of domain-specific accuracy”), poorer performance on testing data than on training data (“limited generalization to unseen data”), and failure to recognize sarcasm or irony (“misinterpretation of sentiment”).

These risks are wide-ranging and consequential, raising questions not only about this tool but also about other tools that aim to analyze large volumes of data to predict risk and potential fraud. But the inventory's description of how these risks were mitigated is essentially indecipherable to an outsider: "All risks identified in testing and evaluation phases. Bias monitoring, fine-tuning data balancing, and statistical precision deviation monitoring have been put in place." This raises obvious questions: monitoring for what bias, and based on what thresholds and parameters? Even if these are terms of art for those in the know, it is a woefully inadequate public explanation that makes it impossible to know whether the risks have truly been addressed.
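A disclosure capable of answering those questions would, at a minimum, name the metric being monitored, the subgroups being compared, and the threshold that triggers corrective action. The sketch below is purely illustrative: the industry groupings, the choice of precision as the monitored metric, and the five-percentage-point deviation threshold are our assumptions, not anything DHS has published, but they show the level of concreteness that would let outsiders judge whether "statistical precision deviation monitoring" means anything.

```python
# Illustrative sketch only. The group labels, metric, and threshold below are
# assumptions for the sake of example; DHS has not disclosed what it actually
# monitors or the parameters it uses.
from collections import defaultdict

def precision_by_group(records, threshold=0.05):
    """records: iterable of (group, predicted_fraud, actual_fraud) tuples.
    Computes fraud-flag precision per group and flags any group whose
    precision deviates from the overall rate by more than `threshold`."""
    tp, fp = defaultdict(int), defaultdict(int)  # true/false positives per group
    tp_all = fp_all = 0
    for group, predicted, actual in records:
        if predicted:  # only flagged cases count toward precision
            if actual:
                tp[group] += 1
                tp_all += 1
            else:
                fp[group] += 1
                fp_all += 1
    overall = tp_all / (tp_all + fp_all) if (tp_all + fp_all) else 0.0
    report = {}
    for group in set(tp) | set(fp):
        denom = tp[group] + fp[group]
        precision = tp[group] / denom if denom else 0.0
        report[group] = {
            "precision": round(precision, 3),
            "deviation": round(precision - overall, 3),
            "flagged": abs(precision - overall) > threshold,
        }
    return overall, report

# Hypothetical fraud-flag outcomes, broken out by industry.
sample = [
    ("construction", True, True), ("construction", True, False),
    ("staffing", True, True), ("staffing", True, True),
    ("retail", True, False), ("retail", True, True),
]
overall, report = precision_by_group(sample)
print(f"overall precision: {overall:.3f}")
for group, stats in sorted(report.items()):
    print(group, stats)
```

Running the example prints each industry's precision and whether it deviates from the overall rate by more than the chosen threshold; a meaningful public explanation would at least identify parameters of this kind.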

Similarly, ICE provides some information, though not always in sufficient detail. ICE reveals, for instance, that its immigration enforcement arm, Enforcement and Removal Operations, calculates a “Hurricane Score” that is used to determine the likelihood that non-citizens who are under ICE’s management but not in detention will abscond; the system produces a score “based on absconding patterns” learned from earlier cases. The higher the score, the more intensive the case management – and potentially the more intensive the surveillance as well.

ICE’s inventory appropriately designates the Hurricane Score as rights-impacting, and notes that risks of the system include incorrectly giving high scores to people who are not likely to abscond or using unrepresentative training data, leading to poor performance of the tool. But the inventory provides little information about the mitigations, saying only: “These risks are identified and if necessary remediated through error analysis during model testing and model performance monitoring.” An explanatory memo published by DHS’s Chief Artificial Intelligence Officer and Chief Information Officer under President Biden, Eric Hysen, does provide some more detail, but it focuses specifically on bias. Hysen notes that during his review of the system, his team determined that “the algorithm was not being sufficiently tested for bias in the output”; after conducting additional testing, they concluded that the Hurricane Score did not show demographic bias, and ICE agreed to continue conducting bias testing. This is an important role for the Chief AI Officer to play, but it does not address the other risks identified in the inventory.
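The memo also does not explain what the bias testing of the Hurricane Score's output involved. As a purely illustrative sketch of the kind of detail that could be disclosed (the demographic group labels, the "high risk" cutoff, and the four-fifths disparity threshold below are our assumptions, not anything ICE or DHS has confirmed), one common form of such a check compares how often each group receives a high-risk designation:

```python
# Illustrative sketch only. The groups, the score cutoff, and the four-fifths
# disparity threshold are assumptions for the sake of example; DHS has not
# disclosed its actual testing methodology or parameters.
from collections import defaultdict

def high_risk_disparity(scored_cases, cutoff=0.7, min_ratio=0.8):
    """scored_cases: iterable of (demographic_group, score) pairs, scores in [0, 1].
    Returns each group's rate of high-risk designations, the disparity ratio
    (lowest rate / highest rate), and whether that ratio falls below `min_ratio`,
    an adaptation of the conventional four-fifths rule."""
    totals, highs = defaultdict(int), defaultdict(int)
    for group, score in scored_cases:
        totals[group] += 1
        if score >= cutoff:
            highs[group] += 1
    rates = {g: highs[g] / totals[g] for g in totals}
    highest = max(rates.values())
    ratio = min(rates.values()) / highest if highest else 1.0
    return rates, ratio, ratio < min_ratio

# Hypothetical scores for three invented groups.
cases = [("group_a", 0.9), ("group_a", 0.4), ("group_b", 0.8),
         ("group_b", 0.75), ("group_c", 0.3), ("group_c", 0.5)]
rates, ratio, flagged = high_risk_disparity(cases)
print(rates)                      # high-risk rate per group
print(round(ratio, 2), flagged)   # disparity ratio and whether it warrants review
```

Publishing even this level of detail (which groups were compared, on which outcome, and against what threshold) would allow outside researchers to evaluate the conclusion that the Hurricane Score did not show demographic bias.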

Extension Disclosure 

Finally, the OMB memo authorized Chief AI Officers to apply to OMB for non-renewable one-year extensions for uses of AI that cannot meet the minimum requirements. Hysen requested only five extensions, a small number, but the inventory simply states for each system that a compliance extension was granted through Nov. 30, 2025 (though the period may prove shorter in practice). For two systems, the more detailed inventory adds that the required documentation is missing. Providing even basic information about why each system could not come into compliance by the deadline, as well as a justification for why the system was critical enough that the department needed an extension rather than ceasing to use it, would enhance transparency and help quell concern about the extensions.

DHS’s Activities Still Pose Risks to Civil Rights and Civil Liberties 

The inventory also highlights that DHS continues to use tools and practices that are unproven and that pose risks to civil rights and civil liberties, including social media monitoring and risk assessments. We have written extensively about the scope and risks of social media monitoring and data collection by the government, particularly DHS.

According to the inventory, CBP has used two commercial tools that ingest and analyze social media: Babel, which searches and aggregates open-source social media data to help identify travelers for additional screening (or to exempt travelers from additional screening), and Fivecast ONYX, which uses “vast amounts of publicly and commercially available data” from social media platforms and online sources to assist in identifying people and analyzing the strength of connections between users. Fivecast also uses the data to produce risk assessments, an area that we have identified as being in dire need of testing, proof of efficacy, additional public transparency, and adequate oversight and guardrails. (Fivecast ONYX is evidently being discontinued in 2025 due to budgetary constraints.)

DHS requested and received compliance extensions for both systems, which means that no information is available in the meantime about the risks they pose or the mitigations the department has put in place. This lack of information may also be due in part to challenges inherent in relying on private firms to conduct security work; those relationships tend to be governed by contracts that require little transparency into, or oversight of, the analysis the firms sell the government. In any event, in light of these tools' known pitfalls, the dearth of information is disappointing; it is urgent that the department bring them into compliance and release detailed information about their risks and mitigations.

The potential deployment of AI by DHS’s Office of Intelligence and Analysis (I&A) raises significant concerns as well. I&A, which has an explicitly domestic focus, has developed and may be using AI tools, including to analyze Americans’ social media posts. The office has a troubled history of targeting Americans’ political expression, inaccurately assessing the meaning of online content, and disseminating bogus or hyperbolic intelligence reports. But because I&A is part of the U.S. Intelligence Community, its use of AI systems is subject to a separate framework established in a national security memorandum that does not require public disclosure. At DHS, that means that key potential use cases are conspicuously missing from its inventory. The department must provide more information to the public about those uses and others hiding behind the screen of the national security memorandum.

Finally, the department could foster greater AI transparency overall by expanding the scope of its privacy review process. The E-Government Act of 2002 obligates agencies, including DHS, to issue privacy impact assessments (PIAs) when they plan to collect, store, or disseminate individuals' personally identifiable information. The department publishes PIAs about a range of programs, but these documents do not systematically disclose or explain the use of AI. The department should promptly update these privacy disclosures to include a description of any AI tools integrated into the systems, routinely update them as the tools evolve, and disclose its underlying privacy threshold analyses, as we have previously urged.
