Earlier this month, a high-level, congressionally mandated commission released its long-awaited recommendations for how the United States should approach artificial intelligence (AI) for national security. The recommendations were part of a nearly 800-page report from the National Security Commission on AI (NSCAI) that advocated for the use of AI but also highlighted key risks posed by AI-enabled and autonomous weapons, particularly the dangers of unintended escalation of conflict. The commission identified these risks as stemming from several factors, including system failures, unknown interactions between these systems in armed conflict, challenges in human-machine interaction, and an increasing speed of warfare that reduces the time and space for de-escalation.
These same factors also contribute to the inherent unpredictability of autonomous weapons, whether AI-enabled or not. From a humanitarian and legal perspective, the NSCAI could have explored in more depth the risks such unpredictability poses to civilians in conflict zones and to international law. Autonomous weapons are generally understood, including by the United States and the ICRC, as those that select and strike targets without human intervention; in other words, they fire themselves. This means the user of an autonomous weapon does not choose a specific target and so does not know exactly where (or when) a strike will occur, or specifically who (or what) will be killed, injured, or destroyed.
AI-enabled autonomous weapons – particularly those that would “learn” what to target – complicate matters even further. Developers may not be able to predict, understand, or explain what happens inside the machine learning “black box.” So how would users of such a weapon verify how it will function in practice, or assess when it might not function as intended? This challenge is not unique to the United States or the types of technologies it is pursuing; it is fundamental to the international debate on AI-enabled and autonomous weapons.
From a humanitarian perspective, the potential risks that autonomous weapons pose to civilians and civilian infrastructure in an armed conflict setting become quite stark when one considers that autonomous targeting functions could be a feature of (m)any of the expanding array of highly mobile armed drones – in the air, on land, or at sea.
There are also serious legal issues to consider. Humans – not machines – must apply the rules of international humanitarian law (IHL, also known as the law of war or the law of armed conflict) and make context-specific judgements in attacks to minimize risks for civilians and civilian objects. The unpredictability of autonomous weapons complicates this decision-making process at best and undermines it at worst, including by potentially speeding up the process beyond human control.
The NSCAI recommends that the United States exclude the use of autonomous nuclear weapons. Almost everyone agrees on this, but the question remains: What other constraints on autonomous weapons are needed to address humanitarian, legal, and ethical concerns?
Answering that question is becoming urgent, as autonomous weapons are being rapidly developed and militaries are seeking to deploy them in armed conflicts.
The commission itself points out the need for greater constraints and heightened human control in environments where more civilians are present – such as urban areas. The ICRC, including in a recent report with the Stockholm International Peace Research Institute, has made some additional suggestions (submitted during the NSCAI’s consultations). Essentially, strict limits are needed on the types of autonomous weapons and the situations in which they are used, as well as requirements for humans to supervise them, intervene, and be able to switch them off.
Some of these limits are born of fundamental ethical concerns for humanity (and not only law). Public opinion surveys suggest that most people would agree an algorithm should not “decide” whether someone lives or dies. After all, inanimate objects – software included – have no moral (or legal) agency, and over-reliance on algorithms can interfere with human agency.
This may mean not that there is a “moral imperative” to pursue autonomous weapons to replace human decision-makers, but rather that there may be a moral imperative to exclude autonomous weapons that directly target humans. This approach has gathered some support among experts who often hold opposing views on the risks, and is even reflected in concerns expressed by a former U.S. defense secretary about the proliferation of autonomous weapons.
The commission also entered the debate on whether an international agreement, such as a new legally binding treaty, is needed to address autonomous weapons. It is clearly opposed, citing the lack of an internationally agreed definition, the difficulty of verifying compliance, and concerns that U.S. adversaries would not comply. At the same time, the NSCAI expressed concern that adversaries might interpret existing legal obligations (and ethical considerations) differently and recommended that the United States “develop international standards of practice for the development, testing, and use of AI-enabled and autonomous weapon systems.”
The ICRC has, since 2015, called for internationally agreed limits on autonomous weapon systems – whether new rules, policy standards, or best practices – and developments at the international level are somewhat encouraging in this respect. There is increasing understanding among States about the types of international limits on autonomous weapons needed to address legal and ethical concerns. Such limits would have the dual advantage of building confidence in mutually agreed legal and ethical interpretations, while also managing the wider risks of conflict escalation. These are not problems that are likely to be solved by national approaches alone.
None of the ICRC’s suggested limits would prevent countries from using AI where it can support human decision-making in warfare, enabling better compliance with IHL and minimizing risks for civilians. But the unconstrained development and use of autonomous weapons pull in the wrong direction for legal compliance and civilian protection.