[Editor’s Note: This is Part I of a two-part guest post from Paul Scharre, a fellow and Project Director for the 20YY Warfare Initiative at the Center for a New American Security, on autonomous weapons systems.  Stay tuned for Part II later today.]

In May of this year, the United Nations Convention on Certain Conventional Weapons (CCW) held the first multilateral discussions on autonomous weapons or, as activists like to colorfully refer to them, “killer robots.” Discussion was robust, serious, and thoughtful, but through it all ran a strong sense of confusion about what exactly participants were, in fact, talking about.

There are no internationally agreed-upon definitions for what an autonomous weapon is, and unfortunately the term “autonomy” itself often leads to confusion. Even setting aside the idea of weapons for a moment, the term “autonomous robot” alone conjures up wildly different images, ranging from a household Roomba to the sci-fi Terminator. It’s hard to have a meaningful discussion when participants may be using the same terminology to refer to such different things. Further complicating matters, some elements of autonomy are used in many weapons today, from homing torpedoes that have existed since World War II to missile defense systems that protect military installations and civilian populations, like Israel’s Iron Dome. Much of the discussion on autonomous weapons, however, both at the CCW and in other forums, occurs without a sufficient understanding of how – and why – militaries already use autonomy in existing weapons.

In the interests of helping to clarify the discussion, I want to offer some thoughts on how we use the word “autonomy” and on how autonomy is used in weapons today. In particular, two overarching themes run through much of the commentary on the issue of autonomy in weapons. The first is the notion that what we are concerned with is not weapons today, but rather potential future weapons. The second is the idea, championed by some activists, that the goal should be “meaningful human control” over decisions about the use of force. Unfortunately, some of the concepts put forward as “minimum necessary standards for meaningful control” assume a level of human control far greater than exists with present-day weapons, such as homing munitions, that are widely used by every major military today. Setting the bar for minimum acceptable human control so high that vast swathes of existing weapons, to which no one presently objects, fail to meet it almost certainly misses the essence of what is new about autonomous weapons. Increased autonomy in future weapons raises challenging issues, and a critical first step is understanding what one could envision in future weapons that would result in a qualitatively different level of human control compared to today.

In the interests of readability, I’ll cover these issues in two posts: this first one examines autonomy in existing weapons, and a second will explore some implications for the debate on autonomous weapons, in particular the notion of “meaningful human control.” I hope that explaining how autonomy is used in weapons today – and how it is not used – can serve as a useful launching point for policymakers, academics, and activists alike as they grapple with the issue of autonomy and human control in weapons.

What is autonomy?

One of the things that makes understanding autonomy so challenging is that we use the same word to refer to three different ideas: the human-machine command-and-control relationship; the complexity of the system; and the type of task being automated. These can be thought of as three dimensions, or aspects, of autonomy.

The human-machine command-and-control relationship

The first dimension, or aspect, of autonomy is the human-machine command-and-control relationship. Machines that perform a function for some period of time, then stop and wait for human input before continuing, are often referred to as “semi-autonomous” or “human in the loop.” Machines that can perform a function entirely on their own but have a human in a monitoring role, with the ability to intervene if the machine fails or malfunctions, are often referred to as “human-supervised autonomous” or “human on the loop.” Machines that can perform a function entirely on their own, with no ability for humans to intervene, are often referred to as “fully autonomous” or “human out of the loop.” These distinctions are meaningful because as one progresses from human “in the loop” to “on the loop” to “out of the loop,” the machine has greater freedom of action and humans have less direct control over the machine.
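
For readers who find a sketch clearer than prose, the three relationships can be expressed as a simple control loop. This is purely illustrative – no real system is structured this way, and `human_approves` and `human_aborts` are stand-ins for whatever interface a human operator would actually have:

```python
from enum import Enum

class ControlMode(Enum):
    IN_THE_LOOP = "semi-autonomous"        # machine waits for human input
    ON_THE_LOOP = "human-supervised"       # machine acts; human may intervene
    OUT_OF_THE_LOOP = "fully autonomous"   # machine acts; no intervention

def run_task(steps, mode, human_approves, human_aborts):
    """Run a sequence of task steps under a given command-and-control mode."""
    for step in steps:
        if mode is ControlMode.IN_THE_LOOP:
            # Stop and wait for a human decision before every step.
            if not human_approves(step):
                return "halted by human"
        elif mode is ControlMode.ON_THE_LOOP:
            # Proceed on its own, but a supervising human can break in.
            if human_aborts():
                return "aborted by supervisor"
        # OUT_OF_THE_LOOP: no check at all -- the machine simply acts.
        step()
    return "done"
```

The only thing the sketch is meant to show is where the human’s opportunity to act sits: before each step, during execution, or nowhere at all.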

Complexity of the machine

The word “autonomy” is also used in a different sense to refer to the complexity of the machine. Regardless of the human-machine command-and-control relationship, words such as “automatic,” “automated,” and “autonomous” are often used to refer to a spectrum of machine complexity. The term “automatic” is often used for systems that have very simple, mechanical responses to environmental input. Examples of such systems include trip wires, mines, toasters, and old mechanical thermostats. The term “automated” is often used for more complex, rule-based systems. Self-driving cars and modern programmable thermostats are examples of such systems. Sometimes the word “autonomous” is reserved for machines that exhibit some kind of self-direction, self-learning, or emergent behavior that is not directly predictable from an inspection of their code. Examples include self-learning robots or the Nest “learning thermostat.”
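
The thermostat examples translate naturally into a sketch of this spectrum. Again, this is purely illustrative; the thresholds and rules below are invented, not drawn from any actual product:

```python
# "Automatic": a simple, fixed threshold response to input.
def automatic_thermostat(temp_f):
    return temp_f < 68  # heat on below 68°F, off otherwise

# "Automated": richer, rule-based behavior, e.g. a programmed schedule.
def automated_thermostat(temp_f, hour):
    setpoint = 68 if 6 <= hour < 22 else 60  # daytime vs. nighttime rule
    return temp_f < setpoint

# "Autonomous": behavior adjusted from experience rather than fixed rules,
# so the eventual setpoint cannot be read directly off the code.
class LearningThermostat:
    def __init__(self):
        self.setpoint = 68.0
    def observe(self, temp_user_chose):
        # Drift toward whatever temperature the occupant keeps choosing.
        self.setpoint += 0.1 * (temp_user_chose - self.setpoint)
    def heat_on(self, temp_f):
        return temp_f < self.setpoint
```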

Type of function being automated

It is meaningless to refer to a machine as “autonomous” or “semi-autonomous” without specifying the task or function being automated. Different tasks carry different levels of risk. A mine and a toaster have radically different levels of risk, even though both have humans “out of the loop” once activated and both use very simple mechanical switches; what differs is the task being automated. A machine might be “fully autonomous” for one task, such as navigating along a route, yet fully human-controlled for another, such as choosing its final destination.
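
One way to make this concrete is to describe a machine’s autonomy function by function rather than as a single property. The profile below is an invented example:

```python
# A hypothetical delivery robot, described function by function.
autonomy_profile = {
    "choose destination": "human in the loop",      # a person decides where
    "plan route":         "human on the loop",      # a person can override
    "follow route":       "human out of the loop",  # fully autonomous
    "avoid obstacles":    "human out of the loop",  # fully autonomous
}

# "Is this robot autonomous?" has no single answer;
# "Is it autonomous for choosing destinations?" does.
for function, mode in autonomy_profile.items():
    print(f"{function}: {mode}")
```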

Even in tasks relating to the use of force, there are many different engagement-related functions, not all of which are equally significant when it comes to thinking about the role of human control. Engagement-related tasks include, but are not limited to: acquiring, tracking, identifying, and cueing potential targets; aiming weapons; selecting specific targets for engagement; prioritizing targets to be engaged; timing when to fire; maneuvering and homing in on targets; and the detonation itself. Many reports on the subject of autonomy in weapons, including from the U.S. Department of Defense, Human Rights Watch, the UN Special Rapporteur on extrajudicial, summary or arbitrary executions, and the International Committee of the Red Cross (ICRC), have highlighted the function of selecting and engaging specific targets as worthy of special concern. (In some formulations this is expressed as “selecting and attacking” or “selecting targets and delivering force.”) I believe this is, in fact, precisely the right place to focus attention, and I believe that examining how autonomy is used in weapons today will help explain why. (In full disclosure, I played a role in shaping U.S. Department of Defense policy when I worked at the Pentagon from 2008-2013.)

Use of autonomy in weapons today

Human in the loop, or semi-autonomous, selection of specific targets to be engaged

Automation, or autonomy, is widely used in weapons today for a variety of functions. Radars and other sensors are used to help acquire, track, identify, and cue potential targets to human operators. For many sophisticated weapons that require precise engagement, the timing of precisely when to fire and/or when to detonate is automated. And homing munitions that use autonomy to maneuver the weapon toward the intended target have been widely used since World War II and are employed by nearly every military today. In fact, giving munitions the ability to autonomously home in on their intended targets is essentially a necessity for engaging moving targets, particularly those moving in three dimensions, such as undersea or in the air.

But for most weapons in use today, the decision over which particular target is to be engaged is made by a human. A cruise missile flies autonomously to its target, but the choice of the target is made by a person. The missile does not choose its own target. Even homing munitions that are “fire and forget” – and thus cannot be recalled and, in some cases, “lock onto” their targets after launch – are still designed and used to go after specific targets that have been chosen by people.

To give a specific example where a high degree of automation is used for a variety of functions, but the decision of which specific target or group of targets to engage is made by a person, consider air-to-air missile engagements. In beyond-visual-range engagements, potential targets are identified by a computer, which conveys this information to the pilot; the pilot cannot visually confirm the identity of the target, nor does the pilot even have an image of the target. Instead, the pilot relies on information fed from a computer, typically based on radar. Based on this data, as well as other information such as altitude, airspeed, identification friend or foe (IFF) signals, and an understanding of the overall situation, the pilot determines whether it is appropriate to engage this particular aircraft, or group of aircraft. If so, the pilot launches an air-to-air homing missile at the enemy aircraft. If there are multiple aircraft to be engaged in the same area, the pilot might launch multiple missiles at the group. If the missile functions in a lock-on-before-launch mode, specific target data is passed to the missile before launch. If the missile functions in a lock-on-after-launch mode, the missile does not have any specific targeting data on the particular enemy aircraft to be engaged. Instead, the missile flies to a point in space and then activates its seeker, looking for targets that meet its parameters. The pilot uses tactics, techniques, and procedures (TTPs) to ensure that, when the missile activates its seeker, the only targets within the seeker’s acquisition basket are those that he or she intends to engage.

Note that in this situation, the missile is “autonomous” in the sense that, once it is fired, the pilot does not have the ability to recall or control the missile. There is also a high degree of automation in numerous stages of the decision cycle. However, a person is in the loop for the decision over which particular target to engage. The weapon is not designed to search over a wide area and select its own targets. Instead, the weapon is designed and used to go after the specific target or group of targets that the pilot intends to be engaged.
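
For the technically inclined, the lock-on-before-launch versus lock-on-after-launch distinction can be sketched as two acquisition paths. This is a toy model with invented types and track IDs, not a description of any real missile’s logic:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Contact:
    track_id: str
    signature: str  # e.g., a radar-return classification

def lock_on_before_launch(target: Contact) -> str:
    """LOBL: the human-selected track is handed to the missile pre-launch;
    the seeker guides on that specific track from the start."""
    return target.track_id

def lock_on_after_launch(contacts_at_waypoint: List[Contact],
                         matches: Callable[[Contact], bool]) -> Optional[str]:
    """LOAL: the missile flies to a point in space, activates its seeker,
    and acquires the first contact meeting its parameters. The pilot's
    TTPs must ensure only intended targets are in that acquisition basket."""
    for contact in contacts_at_waypoint:
        if matches(contact):
            return contact.track_id
    return None  # nothing in the basket met the parameters

# Invented example: the pilot's TTPs put only the intended aircraft nearby.
hostile = Contact("T-042", "fighter")
assert lock_on_before_launch(hostile) == "T-042"
assert lock_on_after_launch([hostile], lambda c: c.signature == "fighter") == "T-042"
```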

Most weapons in use today fall within this paradigm – autonomy is used for many functions, but the selection of specific targets for engagement remains with a person.

One weapon worthy of specific mention is the U.S. sensor fuzed weapon, an air-to-ground area weapon launched from an aircraft against enemy tanks or vehicles. Some have suggested the sensor fuzed weapon might be characterized as an autonomous weapon. While it does have a high degree of automation, it does not meet the U.S. Department of Defense definition for an autonomous weapon. When a pilot sees a group of enemy tanks or vehicles and determines they are appropriate for engagement, he or she launches the weapon at the group. It deploys a series of small “skeets” with onboard infrared sensors over an area. Once a skeet detects a hot object underneath – a ground vehicle – it fires a shaped charge at the vehicle. Despite this automation, the sensor fuzed weapon is a semi-autonomous weapon under the U.S. Department of Defense definition because the pilot is choosing which specific group of vehicles to engage. As with launching a large number of air-to-air lock-on-after-launch homing munitions against a group of enemy aircraft, the pilot is not choosing which specific skeet will engage which specific vehicle. Rather, the pilot is choosing to deploy the weapon as a whole against a particular group of enemy vehicles. Because the pilot knows which specific group of targets is being attacked, the pilot can make a determination about the appropriateness of that particular group for engagement prior to weapon launch. It is worth pointing out that if a sensor fuzed weapon were launched blindly into an area without knowledge of specific enemy vehicles, then the weapon would be selecting its own targets. This highlights the importance of looking not only at the design of the weapon itself, but also at its intended use.
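
The division of labor here – a person selects the group, automation sorts out which sub-munition engages which vehicle – can also be put in sketch form. Everything below is notional:

```python
import random

def release_skeets(vehicle_group, num_skeets=4):
    """Notional sketch: the pilot chose 'vehicle_group'; each skeet then
    fires at whichever hot object its infrared sensor happens to detect."""
    hits = []
    for _ in range(num_skeets):
        detected = random.choice(vehicle_group)  # skeet-to-vehicle pairing
        hits.append(detected)                    # is automated, not chosen
    return hits

# The human decision appears as the argument: which group of vehicles to
# attack was decided before launch. The function never searches for a
# group on its own.
print(release_skeets(["tank A", "tank B", "truck C"]))
```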

The sensor fuzed weapon is an unusual case in that it deploys a series of smaller homing sub-munitions against a particular group of targets, but homing munitions in general have been in wide use around the world for decades. In all of these cases, automation is used to assist the machine in destroying the specific target or group of targets that a person chose for engagement. There are a few weapons in use today, however, where machines actually select the targets themselves.

Human on the loop, or supervised autonomous, selection of specific targets to be engaged

A number of nations have automatic or automated defensive systems that, once activated, will engage incoming threats – typically aircraft, rockets, artillery, mortars, or missiles – without further intervention by a human operator. Notable among these are the U.S. Aegis ship-based defensive system and the Patriot land-based missile defense system. Both have modes where humans can remain in the loop for engagements, but they also have wartime operating modes where, once activated, the system will engage incoming threats that meet its specified parameters without any further action from a human controller. In this mode, humans remain in supervisory, or “on the loop,” control. This means that if the system begins malfunctioning, humans can intervene. It does not necessarily mean that humans will be able to intervene before there are any inappropriate engagements. In fact, human operators may be unaware the system is malfunctioning until an inappropriate engagement occurs. But human operators can intervene to shut the system down, ideally before things get too far out of control.
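
In rough sketch form, this supervised-autonomous wartime mode looks something like the loop below. The callbacks `meets_parameters` and `operator_has_disabled` are placeholders for real doctrine and interfaces; the point is only where the human does and does not appear:

```python
def supervised_engagement_loop(incoming_threats, meets_parameters,
                               operator_has_disabled):
    """Illustrative supervised-autonomous mode: once activated, the system
    engages qualifying threats without per-engagement approval; the human's
    control is the ability to shut the whole system down."""
    engaged = []
    for threat in incoming_threats:
        if operator_has_disabled():
            # The supervisor intervenes -- possibly only after noticing
            # that an inappropriate engagement has already occurred.
            break
        if meets_parameters(threat):
            engaged.append(threat)  # no human approval of this engagement
    return engaged
```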

Another example of human “on the loop” autonomy used to defend against time-critical saturation attacks is the active protection system for ground vehicles. Active protection systems autonomously engage incoming missiles and rockets fired at a ground vehicle by shooting interceptors at the incoming rounds to destroy them before they hit. Because the engagement window is extremely short, keeping a human in the loop on these engagements would not be possible. Many countries have active protection systems for ground vehicles either deployed or in development, including but not limited to Germany, France, Sweden, Russia, Israel, South Africa, Singapore, and the United States.

While a significant number of nations have weapons that operate with supervised autonomy for the task of selecting and engaging specific targets, the context in which they are used is quite limited. Their use today is limited to onboard defense of manned vehicles or static defense of manned installations against time-critical or saturation attacks. (U.S. Department of Defense policy also scopes this quite narrowly, along similar lines.) It is also worth noting that the need for human “on the loop” autonomy in these weapons derives directly from the threat and the extremely short reaction times necessary to complete the engagement, which in some cases are far shorter than humans can achieve. Defending against these time-critical saturation attacks while retaining a person “in the loop” on target selection would simply not be feasible in most cases.

Human out of the loop, or fully autonomous, selection of specific targets to be engaged

There are a limited number of existing weapons that have a human fully out of the loop for the selection of specific targets to be engaged, such that the weapon itself is selecting the targets. The U.S. Department of Defense policy specifically authorizes this degree of autonomy for “non-lethal, non-kinetic force, such as some forms of electronic attack, against materiel targets.” An example is the U.S. miniature air-launched decoy jammer, or MALD-J, an expendable aircraft-launched air vehicle that cruises through enemy territory emitting decoy signals and jamming in the electromagnetic spectrum to deceive and blind enemy radars, but does not use lethal force.

But there are also at least two cases of weapons that autonomously select and engage targets with lethal force.

The first, and most often cited, example is the Israeli Harpy, which has been sold to a handful of countries: Turkey, South Korea, India, and China. (The Chinese are also widely reported to have reverse-engineered their own variant.)

The Harpy is a wide-area loitering anti-radiation munition. (It is frequently referred to as an unmanned aerial vehicle (UAV) or unmanned combat air vehicle (UCAV), but the Harpy is not recoverable. It is a one-way trip only.) Once launched, the Harpy flies over a wide area searching for enemy radars. If it detects any radars that meet its criteria, the Harpy then dive-bombs into the radar, destroying it.

In this sense, the Harpy functions quite differently from a homing munition. A homing munition is designed to engage specific targets that have been selected by a person. With the Harpy, the person launching the weapon directs it to a general area where enemy radars are believed to be, but does not direct the weapon against any specific radars or even groups of radars. From a technical standpoint, the actual design of the Harpy and of an anti-radiation homing munition designed to go after specific targets chosen by a person may not be radically different. A loitering wide-area munition like the Harpy would have a longer time of flight and most likely a wider seeker field of view than an anti-radar homing munition, but the actual functionality of the seeker may be the same. The key difference is how the weapon is intended to be used and the decision in the mind of the human operator when he or she launches it. The combination of the wider seeker field of view and longer loitering time results in a weapon with a very different purpose, function, and use. Again, this highlights the importance of looking at the intended use of the weapon.
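
That technical point – the seeker logic may be essentially identical while the intended use differs – can be made concrete with an invented sketch in which the very same function serves both weapons, and only what is plausibly in the seeker’s view differs:

```python
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class Emitter:
    emitter_id: str
    matches_signature: bool  # meets the weapon's programmed radar parameters

def seek_and_engage(emitters_in_view: Iterable[Emitter]) -> Optional[str]:
    """The same seeker logic serves both weapons: engage the first
    emitter matching the programmed signature."""
    for e in emitters_in_view:
        if e.matches_signature:
            return e.emitter_id
    return None

# Homing anti-radiation missile: a person picked emitter "R-7"; the short
# flight and narrow field of view mean only R-7 is plausibly in view.
print(seek_and_engage([Emitter("R-7", True)]))

# Loitering munition: hours aloft over a wide area mean whatever qualifying
# radar the weapon happens to find gets engaged; no person chose "R-12".
print(seek_and_engage([Emitter("R-12", True), Emitter("R-3", True)]))
```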

The difference between the Harpy and the sensor fuzed weapon is also instructive. While the sensor fuzed weapon deploys multiple homing sub-munitions, the weapon as a whole is launched against a specific group of enemy vehicles. In fact, it would be difficult to use the sensor fuzed weapon any other way, since its time of flight is extremely short. For the sensor fuzed weapon, the human launching the weapon knows exactly which particular group of vehicles is to be attacked. The Harpy, by contrast, loiters and searches over a wide area, looking for targets. The person launching the Harpy does not know which particular radars will be engaged, only that radars meeting the Harpy’s programmed parameters will be.

Besides the Harpy, there is one other system worth mentioning that selects and engages targets on its own: the encapsulated torpedo mine. Mines are generally excluded from discussions about autonomous weapons, and for good reason. While there is clearly no person in the loop when a mine detonates, mines are extremely simple devices and therefore predictable, and a person emplaces the mine, so its location is known. Human control of mines is thus different than with other weapons, but the mine itself still has very limited freedom of action. Encapsulated torpedo mines are a bit different, however.

An encapsulated torpedo mine is a type of sea mine that, when activated by a passing ship, does not explode in place but instead opens a capsule and releases a torpedo that engages the target. In this case, the torpedo is not being used to home in on a target that has been selected by a person, nor is the mine simply blowing up where it lies. Instead, the mine has greater freedom of maneuver: it releases a torpedo that then tracks onto a nearby target. De facto, the mine is selecting and engaging targets on its own.
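
A rough sketch of the difference between a conventional mine and the encapsulated variant, with invented trigger and target-matching logic:

```python
def conventional_mine(triggered):
    # Detonates in place; the person who laid it knows exactly where.
    return "detonate in place" if triggered else "dormant"

def encapsulated_torpedo_mine(triggered, nearby_tracks, torpedo_matches):
    """Illustrative: when triggered, release a homing torpedo that acquires
    whichever nearby track meets its parameters -- the torpedo, not a
    person, picks the target it pursues."""
    if not triggered:
        return "dormant"
    for track in nearby_tracks:
        if torpedo_matches(track):
            return f"torpedo engages {track}"
    return "torpedo released, no target acquired"

print(encapsulated_torpedo_mine(True, ["ship-1", "ship-2"],
                                lambda t: t == "ship-2"))
```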

Encapsulated torpedo mines were part of the U.S. inventory for a number of years, but have since been retired. Russia and China have an encapsulated torpedo mine, the PMK-2, in service today.

It is worth examining why there are not more wide area, loitering anti-radiation weapons like the Israeli Harpy in use, since the underlying technology behind it is not particularly complicated. I’ll cover this issue, as well as implications of current technology for the notion of “meaningful human control,” in a follow-on post.