This post is the latest installment of our “Monday Reflections” feature, in which a different Just Security editor examines the big stories from the previous week or looks ahead to key developments on the horizon.
Late last month, Stephen Hawking (former Lucasian Professor at Cambridge), Elon Musk (CEO of Tesla and SpaceX), Steve Wozniak (Apple co-founder) and more than 1,000 artificial intelligence and robotics researchers co-signed a letter urging a ban on autonomous weapons. The letter has understandably drawn global interest.
This was but the latest volley in a debate over autonomous weapon systems (AWS) that first drew international attention with the publication of Human Rights Watch’s Losing Humanity report. Presently, the Campaign to Stop Killer Robots and its distinguished spokespersons, Jody Williams, Noel Sharkey, and Steve Goose, are leading the main effort to ban the systems. The venerated ICRC has also taken on the topic of AWS, having convened an important workshop for government representatives and legal experts, while the Convention on Certain Conventional Weapons (CCW) Meeting of Experts has likewise been addressing the subject. Together with cyber, AWS is the topic du jour in the field of means and methods of warfare.
The Open Letter is worth quoting in relevant part. It argues that,
[t]he key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.
The signatories conclude that “[s]tarting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”
To understand the debate, it is necessary to grasp the categories into which weapons fall vis-à-vis autonomy. Autonomous weapon systems are those that look for and engage targets without human intervention (e.g., the Israeli “Harpy”). These “human out of the loop” systems must be distinguished from “human in the loop” systems, such as semi-autonomous missiles that for decades have guided themselves after the pilot selects the target (“fire and forget” missiles), and “human on the loop” systems, in which an individual with the ability to override the system monitors the engagement, as is the practice with the Israeli Iron Dome. AWS designed for point defense, like the US Navy’s CIWS that defends warships against inbound missiles, are not at issue in the debate.
I admit to finding the logic of the Open Letter viscerally appealing. However, a robust and objective debate on the topic requires greater granularity. Allow me to offer a few thoughts.
1) Arguments permeating the debate that AWS necessarily violate international humanitarian law (IHL) are flawed (see here and here). Whether AWS are unlawful by nature because they are indiscriminate or cause superfluous injury or unnecessary suffering to combatants depends on the particular sensors aboard the systems, the weapons employed, and the environment in which they are intended to be used, not the fact that they operate autonomously. Some conceivable AWS would obviously be indiscriminate per se, like a system programmed to identify and attack any humans in urban areas, while others, such as one designed to identify and attack warships when no civilian vessels are nearby, would not be. Indeed, much of the technology needed for AWS to reliably identify certain key categories of military objectives (like combat aircraft) already exists, as does technology that can help ensure they comply with IHL dictates such as proportionality, for instance, by identifying humans that might be present in the target area such that anti-radar AWS do not engage.
A fair debate must also acknowledge that AWS can enhance a force’s ability to comply with IHL on the battlefield. Modern warfare teaches us that humans are often less able to deal with the fog, friction, and fear of combat than advanced sensors, particularly when multiple sensors are brought to bear to develop technological synergy. And human override capability does not always improve matters, as aircraft accidents in which fighter pilots “trusted their gut” instead of their instruments have tragically demonstrated. In fact, AWS will often out-perform humans in accurately identifying a target, identifying potential collateral damage, and suspending or canceling an attack that has become legally questionable. And sometimes they will not. Each autonomous weapon system and situation must therefore be assessed individually to determine whether the system is more or less capable of complying with IHL in the circumstances than one that keeps a human either in, or on, the loop.
Accountability for the actions of AWS is another topic that has sometimes generated muddled discourse. Clearly, any commander who decides to launch AWS into a particular environment is, as with any other weapon system, accountable under international criminal law for that decision. Nor will developers escape accountability if they design systems, autonomous or not, meant to conduct operations that are not IHL compliant. And States can be held accountable under the law of State responsibility should their armed forces use AWS in an unlawful manner.
What is often missed is that it is already unlawful to employ AWS when the use of other weapon systems will result in less harm to civilians and civilian objects, so long as the use of the other systems is feasible in the circumstances and yields similar military advantage. Indeed, in my view, the use of an autonomous weapon system that is available to a commander is legally required if it can avoid or minimize harm to persons and objects protected by IHL. Those who would ban the systems must understand that commanders will consequently be deprived of a means of warfare that might avoid civilian casualties, one that they would otherwise have to employ as a matter of law.
The focus on the autonomy aspect of the systems is a red herring. This is not to suggest that AWS present no IHL challenges. Two loom especially large. First, pursuant to the rule of proportionality, an attack is prohibited if the expected collateral damage is “excessive” relative to the military advantage anticipated to result from the operation. The dilemma is that the military advantage of any particular strike is situational, and the situation on the battlefield can change rapidly and dramatically. Thus, unless AWS are preprogrammed to engage only at no or low levels of collateral damage, the more time that elapses after a system’s deployment and the greater the distance it ranges, the less reliable it will be in meeting the requirements of proportionality, since the situational military advantage of a particular engagement may have changed since launch.
Second, like the signatories of the Open Letter, I am concerned about very advanced learning systems, for experts tell me it is theoretically possible that such systems will learn the “wrong lesson.” I asked one expert whether it is possible that an AWS might eventually “learn” that it is more efficient to kill all humans rather than attack only those that meet preset parameters for lawful targets. His answer was that this is certainly not possible at present or in the immediately foreseeable future, but that he could not absolutely rule out the possibility as AI technologies improve. The reply gives me great pause.
Other challenges to IHL rules, such as the requirement to accept surrender, also exist with respect to AWS. But at present, much of the debate over whether AWS can meet the demands of IHL is counter-normative, counter-factual, or incomplete. If the systems are to be banned in the near term, that step should not be taken based primarily on IHL issues.
2) Although IHL is not the problem, the Open Letter clearly evidences great angst over whether compliance is likely. In other words, although IHL may be adequate in governing AWS on paper, the fear is that in practice the systems will be used unlawfully because they are so attractive to those who would employ them for malevolent purposes. After all, AWS may be employed in areas that are otherwise difficult to reach, pose little risk to the operator, and can be comprehensive in their destructive effect. These and other AWS features can be leveraged for good…and bad. The question, therefore, is whether the likelihood that IHL will be ignored is so great, and the consequences thereof so horrendous, that the systems should be banned.
There is little doubt that AWS will, like most other weapons and weapon systems in history, be used to commit war crimes, crimes against humanity, and perhaps even genocide. The issue, however, is not whether they will be used for ill, but whether the net humanitarian and military benefits of using them are outweighed by the particular risks they pose (i.e., the classic IHL balancing of humanitarian concerns and military necessity). In this regard, one must consider those aspects of these systems, such as rendering the battlefield environment more transparent and removing negative human factors like fear or confusion, that enable AWS to better achieve humanitarian objectives than the non-autonomous systems that would be employed in lieu of them. Additionally, AWS might be used to stop war crimes and other criminal acts that occur during an armed conflict. For instance, in the aftermath of the First Gulf War in 1991, the Iraqis used helicopters to slaughter Kurds and Marsh Arabs. An autonomous airborne system could have been used to put an end to the killing and police the skies to ensure that it did not recur.
Finally, I do not accept the premise that enhanced survival of soldiers should be ignored. IHL has long recognized that minimizing harm to military personnel is a legitimate humanitarian end, for instance in its prohibitions on “no quarter” orders and on attacking those who are hors de combat. I also reject the argument that it is somehow unfair to use AWS against enemy forces that are not equipped with them. War is not about a “fair fight.” No State would countenance surrendering a weapon system merely to balance the battlefield; to expect States to do so is ludicrous.
In my estimation, then, a definitive conclusion on banning AWS is premature in light of the nascent state of the technology, the lack of robust consideration of AWS employment options (good and bad), and the oft-confused discourse over ethical and military issues. That said, I concede the point that waiting too long to address AWS risks letting the genie out of the bottle.
3) In light of the likely unwillingness of States that actually go to war to ban AWS in the immediate future, it is fair to query whether the call for an outright ban is the most productive course of action. Note that the signatories of the Open Letter recommend banning offensive AWS “beyond meaningful human control.” Presumably, then, they would not extend the ban to offensive systems that remain within meaningful human control. For instance, an AWS would arguably be within meaningful human control if “tethered” such that it could be easily recalled, pre-programmed in a manner that narrowly limits engagement parameters, or carefully designed to control the method by which, and the extent to which, it “learns.”
If meaningful human control is the goal, the better approach might be regulation rather than a prophylactic ban. The distinction can be illustrated by the differing approaches to anti-personnel land mines taken in the CCW (Protocol II and Amended Protocol II) and the Ottawa Convention, with the former setting limits on, for instance, the period mines remain active, as distinct from the latter’s outlawing of their use by States Parties. Analogously, States could agree to impose requirements on AWS that address many of the concerns their critics harbor. Similarly, the CCW (Protocol III) limits the use of incendiaries against a military objective located within a concentration of civilians. Again, it would likely be more palatable to many States to, for instance, prohibit the use of AWS in urban and other areas where they pose a particular danger to civilians than to ban them altogether. And, of course, limits could be placed on their sale or other transfer, as is the case with other weapons under the Arms Trade Treaty.
In sum, I do not necessarily oppose a ban at some later date. States have long imposed bans on various weapons that are militarily useful, such as chemical weapons, when they conclude the humanitarian costs to both combatants and civilians are too high. Indeed, they have imposed bans on weapons merely because the weapons shock the collective conscience, as with blinding lasers. Such rationales have merit.
But I remain unconvinced that the debate has yet evidenced sufficient granularity to conclude objectively that a full ban is justified, particularly in light of the potential of AWS to avoid mistaken attacks and reduce collateral damage, and I applaud States such as the United States and United Kingdom that are proceeding slowly and carefully with respect to their development and use. I also question whether the States that matter would sign up to any such instrument. In my view, regulation by policy and treaty offers a greater prospect of success in limiting the potentially harmful effects of the systems.
The views expressed are those of the author in his personal capacity and do not necessarily represent those of the US government.