[Editor’s Note: This is Part II of a two-part guest post from Paul Scharre, a fellow and Project Director for the 20YY Warfare Initiative at the Center for a New American Security, on autonomous weapons systems. Be sure to check out Part I, which was published earlier today.]
In a recent post, I covered how autonomy is currently used in weapons and what is different about potential future autonomous weapons that would select and engage targets on their own, without direct human involvement. In this post, I will cover some of the implications for the current debate on autonomous weapons, in particular the concept proposed by some activists of a minimum standard for “meaningful human control.”
If fully autonomous weapons are so valuable, then where are they?
As I mentioned in a previous post, autonomous weapons do not generally exist today, although the Israeli Harpy, a loitering air vehicle that searches for and destroys enemy radars, is a notable exception. It is worth examining why there are not more simple autonomous weapons like the Harpy in use today. It is curious that unlike, say, active protection systems for ground vehicles, of which there are multiple variants built by many countries, there is only a single example of a wide-area loitering search-and-attack weapon (the Harpy) in use today. This is particularly worth examining because one common assumption about autonomous weapons is that while more sophisticated autonomy approaching human-level cognition for targeting decisions raises challenging legal and moral issues, states are likely to employ simple autonomous weapons in relatively uncluttered environments. That may be the case, but if so it raises the question of why states have not already done so. The underlying technology behind the Harpy is not particularly sophisticated. In fact, the ability to build a wide-area loitering anti-radiation weapon dates back several decades.
The United States actually fielded such a system in the 1980s: the Tomahawk Anti-Ship Missile (TASM). (Despite the name, it bears no functional relation to the more commonly known Tomahawk Land Attack Missile (TLAM).) TASM went out of service in the 1990s, and the reason why is instructive. Anecdotally, U.S. Navy officers have reported that the weapon was removed from service because the doctrine for employing it was never well understood. The weapon was intended for over-the-horizon engagements against Soviet ships. However, U.S. Navy ships did not have a good way to cue and target the missile. (If they had, it would have been a homing missile, not a wide-area search-and-destroy missile.) Without information on where the Soviet ships were likely to be, Navy officers would not know where to fire the missile!
Now, many of these problems could conceivably be overcome by a more modern weapon with a wider search area or a longer loiter time, or one that searched a likely area of enemy activity, such as a border or choke point. Nevertheless, the point about military utility is a valid one. In the 1990s, the United States developed a loitering wide-area munition called the Low Cost Autonomous Attack System (LOCAAS). It was never deployed, and the newer version of LOCAAS currently in development includes a human-in-the-loop feature. This follows a similar trend with the Israeli Harpy. The newest version, the Harpy 2 or “Harop,” has a person in the loop for target selection and engagement.
All things being equal, militaries are likely to favor weapons that have greater connectivity with human controllers, for sensible operational reasons. Keeping humans in the loop decreases the chances that weapons will strike the wrong target, resulting in fratricide or civilian casualties, or that they will simply miss their targets entirely, wasting scarce and expensive munitions. When communications links cannot be assured, such as in the case of enemy jamming, militaries face a tradeoff in deciding whether to delegate the task of selecting and engaging particular targets to a machine that is out of communications with human controllers. Giving a machine wide latitude to hunt and destroy targets requires a high degree of trust in the machine’s target discrimination algorithms, not only in terms of their freedom from error but also in terms of their ability to capture all of the relevant information. Military professionals know better than anyone how much use-of-force decisions depend on context and judgment, and the consequences of getting those decisions wrong.
Is there a trend toward greater autonomy?
Whether one sees autonomy increasing over time depends on what is meant by autonomy, or which of the three aspects of autonomy one is referring to. From the perspective of overall sophistication of weapons, the answer is most certainly yes. Many functions that are currently controlled by humans will be automated in future weapon systems, particularly in unmanned vehicles.
However, in terms of a shift from human-in-the-loop control over selecting specific targets for engagement to a human-out-of-the-loop model, it is not clear that such a trend exists. There are strong operational reasons for militaries to want to include a communications link with future munitions so that they can be re-targeted in flight, and many next-generation weapons include such a link.
The key issue arises with respect to situations where communications links are degraded or disrupted due to enemy jamming or attacks. In such cases, militaries will have a choice about what these weapons, including unmanned vehicles, should do. If they are directed to attack only pre-programmed fixed targets that have been previously selected by a human, then they are functioning in a semi-autonomous fashion. Even though there are no real-time communications links, the specific target to be engaged would have been already chosen by a person. This is similar to how a cruise missile works today.
But if unmanned vehicles are allowed to hunt and engage targets of opportunity, or if they are authorized to fire in self-defense, perhaps preemptively or perhaps in response to hostile actions, then they would be autonomous weapons. The machine would be selecting and engaging targets on its own, without human authorization of the specific targets to be engaged. It seems inevitable that nations will face such choices as they develop next-generation unmanned vehicles.
Complexity and where to draw the line on what is an “autonomous weapon”
From the perspective of the complexity of the weapon, none of the weapons in use today, discussed in my previous post, comes close to being “autonomous” in the sense of being self-learning or self-directed. At most, they could be considered automatic or automated based on their level of complexity. It is not clear that this distinction is the most important one, however, or even one that can be drawn consistently across various systems.
There are two sensible places where one could draw a line to define autonomous weapons. The first is to define the term as the U.S. Department of Defense does and include simple weapons, such as the Harpy, that are intended to select and engage targets that have not been specifically chosen by a person. The second option is to include a requirement for a higher degree of reasoning before a system is considered “autonomous.” The ICRC nods toward such an approach in its May 2014 report, as does the International Committee for Robot Arms Control in its statement at the CCW meeting in Geneva in May 2014.
The advantage of the first definition, used by the U.S. Department of Defense, is that it is clearer. Whether a weapon is considered autonomous depends on the role of the person versus the machine in the task of selecting and engaging specific targets; the complexity of the machine’s algorithms is irrelevant. The advantage of the second approach is that it seems to capture the intuitive notion that the level of complexity matters. And it does matter in meaningful ways. As a machine becomes more complex, it may become harder for human operators to fully understand it, including instances where the machine may interact with the environment in unexpected ways. As a result, it can become harder to understand the risk associated with using the machine and where it might fail.
The downside to this approach is twofold. First, it isn’t clear where to draw the line. “Artificial intelligence” has a way of being defined away. Once humans succeed in building a machine to solve a problem that seems to require “intelligence,” like playing chess or driving, we often decide after the fact that the machines we’ve built to perform these tasks aren’t actually “intelligent.” Deep Blue isn’t “thinking”; it is merely calculating when it plays chess. Watson, the Jeopardy-playing computer, doesn’t “know” facts in the sense that human contestants do, or at least it doesn’t feel that way to us. Thus, there is a danger that if we wait to declare systems “autonomous” until they reach human-level reasoning, we may never actually reach that point. At what point does a machine become so complex that it is “deciding”? That line isn’t clear.
The danger in focusing on the level of complexity is that we may slide down a slippery slope, fielding systems that take actions on their own without realizing that we have ceded control over decision-making to machines. Focusing on human reasoning and decision-making seems appealing, but it could result in a functionally meaningless definition.
The other issue is that autonomous weapon systems don’t actually have to be that complex to be problematic. The U.S. Patriot air defense system, which most would probably consider “automated” in terms of its level of complexity, was involved in two fratricide incidents in 2003 in which it shot down friendly aircraft, killing their crews. The reasons for these incidents are complex, but a lack of complete human understanding of how the weapon functioned was a major factor. If nations begin deploying large numbers of weapons that select and engage targets on their own but do not have human-level reasoning, many of the concerns regarding autonomous weapons will be present nevertheless.
Understanding where to draw the line on what is an autonomous weapon is important not only from the perspective of regulating autonomous weapons, if states choose to do so, but also for the internal decision-making of militaries concerned about maintaining human control over increasingly sophisticated weapons with greater autonomy. Often missing from discussions about autonomous weapons by activists, who tend to focus on the potential humanitarian implications, are the very real military imperatives to maintain control over the use of force on the battlefield. Fratricide, civilian casualties, and inadvertent escalation are all very real risks from autonomous systems gone awry. Responsible militaries, perhaps more so than any other institution, should be concerned about maintaining control over the use of force by qualified military professionals. Clarity in terminology can help militaries understand when increasingly advanced systems warrant greater scrutiny before deployment.
Implications for “meaningful human control”
It is my hope that this explanation of existing weapons will be useful to policymakers, activists, and academics grappling with these issues. Often, arguments about “meaningful human control” occur in a vacuum, divorced from an understanding of how existing weapons actually work. For example, the “minimum necessary standards for meaningful control” put forward by the International Committee for Robot Arms Control are:
First, a human commander (or operator) must have full contextual and situational awareness of the target area and be able to perceive and react to any change or unanticipated situations that may have arisen since planning the attack.
Second, there must be active cognitive participation in the attack and sufficient time for deliberation on the nature of the target, its significance in terms of the necessity and appropriateness of attack, and likely incidental and possible accidental effects of the attack.
Third, there must be a means for the rapid suspension or abortion of the attack.
While this may represent an idealized version of human control, it is hard to take this definition seriously as a minimum necessary standard for human control when vast swathes of existing weapons – to which no one presently objects – fail to meet this definition. In fact, if this definition were adopted and interpreted strictly, virtually every weapon since the invention of the catapult would be banned. A more sensible approach would be to look at existing weapons and articulate what is different about an autonomous weapon, however one defines the term, and why that distinction matters.
I believe there are significant differences, and that the definition of an autonomous weapon is quite simple. If the human is selecting the specific target or particular group of targets to be engaged, then the weapon is semi-autonomous. If the machine is selecting the specific targets and a human is observing in real time and can intervene if necessary, then the human is exercising “on the loop” control over a human-supervised autonomous weapon. And if the machine is selecting the specific targets and the human is unaware or unable to intervene, then the human is out of the loop for the selection of specific targets and the weapon is fully autonomous. These definitions are simple and clear, and they generally succeed in distinguishing between existing and future weapons. Fully autonomous weapons, under this definition, generally do not exist today, with two rare exceptions: the Harpy and the PMK-2 encapsulated torpedo mine.
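For readers who prefer to see the distinction laid out schematically, the taxonomy above reduces to a simple decision rule. The sketch below is purely illustrative; the category names follow the definitions in this post, while the function and parameter names are my own shorthand, not any official standard:

```python
from enum import Enum

class AutonomyClass(Enum):
    SEMI_AUTONOMOUS = "human in the loop"        # human selects the specific targets
    SUPERVISED_AUTONOMOUS = "human on the loop"  # machine selects; human can intervene
    FULLY_AUTONOMOUS = "human out of the loop"   # machine selects; no human oversight

def classify_weapon(human_selects_targets: bool, human_can_intervene: bool) -> AutonomyClass:
    """Illustrative classification based on who selects the specific targets
    to be engaged and whether a human supervises the engagement in real time."""
    if human_selects_targets:
        return AutonomyClass.SEMI_AUTONOMOUS
    if human_can_intervene:
        return AutonomyClass.SUPERVISED_AUTONOMOUS
    return AutonomyClass.FULLY_AUTONOMOUS

# Examples: a cruise missile aimed at a pre-selected fixed target versus a loitering
# weapon like the Harpy that finds and attacks emitting radars on its own.
print(classify_weapon(human_selects_targets=True, human_can_intervene=False))   # SEMI_AUTONOMOUS
print(classify_weapon(human_selects_targets=False, human_can_intervene=False))  # FULLY_AUTONOMOUS
```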
Most importantly, this definition captures the essence of the issue with autonomous weapons, which is the legal, ethical, policy, moral, social, and military operational concerns that arise when the task of selecting particular targets for engagement is transferred from humans to machines. Focusing on distant issues like artificial intelligence and human-level reasoning is a distraction, as are attempts to articulate an idealized version of human control that is incompatible with existing weapons. Autonomous weapons raise very serious challenges, but the first step in grappling with these challenges is figuring out what, exactly, everyone is talking about.