Regular readers of Just Security will know that the United States and Russia do not see eye to eye on many matters touching on war and peace, not least around cyber, information security, and the conflict in Syria. But you do not have to squint to glimpse how the two are, in several important respects, similarly positioned on one side of an enduring impasse on autonomous weapons. While there is no definition of autonomous weapons in international law, one shorthand is weapons that, once initiated, can nominate, select, and apply force to targets without further human intervention. The debate is not purely academic: a handful of systems falling within this relatively narrow definition are already in use, such as so-called loitering munitions; once launched, those systems can linger in the air for several hours while scanning for targets and then strike without in-the-moment clearance by a human operator.
The spectrum of States’ views is on display in the Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons. At its core, the deadlock concerns whether existing international law mostly suffices (as the United States, Russia, and a handful of others have asserted) or whether new legal rules are needed (as dozens of other States have contended). In brief, beyond largely generic reaffirmations of existing rules and agreement on the importance of the “human element” in the use of force, States disagree in certain critical respects on how to frame and address an array of legal dimensions concerning autonomous weapons.
The divide was once again apparent — but with a twist — during the first week of the GGE’s 2020 meetings, which took place online and at the U.N. in Geneva on September 21–25. For the first time since the GGE was established in 2016 under the Convention on Certain Conventional Weapons (CCW), Russia did not participate in the meetings, and it did not publicly disclose (so far as I can detect) a reason for its absence. In its practice, the GGE operates on a consensus model under which any member can torpedo a substantive development. Russia’s absence did not escape notice among the governments in the room. In one evocative expression, the Chilean delegate stated during the final session that “the bear that’s not in the room needs to be in the room” (at 1:05:13 in the recording).
At least in theory, two sets of concerns — neither of which can be confirmed at this point — might be motivating the call for Russia to return. One is that Russia’s absence might delay the work of the GGE. Another is that its absence might signal that Russia will withdraw from the GGE altogether or even seek to close down the process for all CCW parties. If Russia withdraws, the argument goes, it may be difficult for the remaining participants to move forward because other countries, not least those with advanced technological capabilities, may be reluctant to commit to rules that Russia has not accepted. And if the GGE is discontinued, it becomes more likely that some State or group of countries will initiate a process outside of the CCW. It seems doubtful, though, that all of the other States in the GGE would take part in such a process, which could pose additional challenges for a legal regime that by and large strives to be universal, uniform, and comprehensive.
For now, it seems too early to tell what to make of Russia’s absence last week. But for an already wobbly initiative, it certainly did not signal that the impasse on autonomous weapons is more likely to be resolved anytime soon.
The GGE’s Complex Mandate
Legal and other concerns around technologies related to autonomous weapons have been discussed at the intergovernmental level at least since 2013. The intervening years have witnessed a cascade of States’ views as well as advocacy efforts, scholarly studies, and policy engagements. Some essential agreements among States can be detected — but so, too, can significant divergences. The discussion venues have included the U.N. General Assembly’s First Committee in New York and the Human Rights Council and the CCW in Geneva.
In November 2019, the Meeting of the CCW High Contracting Parties decided to endorse 11 “guiding principles” affirmed by the GGE. The Meeting also mandated the GGE “to explore and agree on possible recommendations on options related to emerging technologies in the area of lethal autonomous weapons” in 2020 and 2021 — a formulation that, to put it mildly, is open to different interpretations of what is called for. Furthermore, in terms that are difficult to parse even for well-versed experts, the Meeting also tasked the GGE with considering those “guiding principles,” the GGE’s “work on the legal, technological and military aspects,” and the GGE’s previous conclusions in order to “use them as a basis for the clarification, consideration and development of aspects of the normative and operational framework on emerging technologies in the area of lethal autonomous weapons systems.”
Following delays caused by the COVID-19 pandemic, in recent months States and organizations submitted commentaries on the “guiding principles.” And a few weeks before the meeting in late September, the then-chair of the GGE circulated a paper that attempted to identify six “commonalities” based on those commentaries.
One of the endorsed “guiding principles” is that international humanitarian law (IHL) “continues to apply fully to all weapons systems, including the potential development and use of lethal autonomous weapons systems.” Along the same lines, one of the purported commonalities identified by the chair is that “[o]verall, international law, including [IHL], regulates emerging technologies in the area of lethal autonomous weapon systems.” A close reading of States’ views suggests that, beyond those generic formulations on IHL’s applicability (vital as they are), States hold seemingly irreconcilable positions on whether existing law is fit for purpose or new law is warranted.
The Current Impasse
The current impasse on autonomous weapons might be traced to at least two factors. The first concerns definitions. There are widely differing conceptions of autonomous weapons and their technical characteristics (at least for purposes of the GGE). There is also divergence on sequencing: whether States must first agree on minimal definitional elements before taking more concrete steps or, alternatively, whether countries can develop a political declaration (or even a legal instrument) without first establishing agreement on which specific technologies are of concern. The second factor is a significant difference of views on what international law already permits, mandates, and prohibits in practice, and corresponding positions on whether the law is satisfactory.
GGE debates on the law most frequently fall under three general categories: IHL rules on the conduct of hostilities, especially on distinction, proportionality, and precautions in attacks; reviews of weapons, means, and methods of warfare; and individual and State responsibility. Perhaps the most pivotal axis of the current debate, which touches on all three categories, is whether to develop and instantiate a concept of “meaningful human control” (or a similar formulation) over the use of force, including autonomy in configuring, nominating, prioritizing, and applying force to targets. Alongside States, several organizations have striven to provide more information and greater specificity around what the law does or should require with respect to the “human element” debate. These include detailed considerations put forward by the International Committee of the Red Cross and the Stockholm International Peace Research Institute as well as the International Panel on the Regulation of Autonomous Weapons. For its part, the U.N. Institute for Disarmament Research stands out as a trusted partner for several States in producing digestible insights on intersections between law, technologies, and operations. Recent examples include UNIDIR publications on targeting practices and robotic swarms, as well as on predictability and understandability in military applications of artificial intelligence.
Here is where there seems to be relatively little meaningful daylight between the positions set out in broad brushstrokes by the United States and Russia in the GGE — at least while Russia was still participating in the meetings. Alongside a handful of other States (such as Australia and Israel), the United States and Russia point to purported humanitarian benefits and potential for greater legal compliance from limited autonomy, or at least automation, in certain weapon systems. Ahead of the latest GGE meeting, the United States and Russia each circulated papers that envision greater legal compliance through such technologies. (U.S. views are set out more fully in papers submitted in 2018 and 2019, and Russia’s position is also set out in earlier submissions.) There are differences in presentation and specificity between the two positions. But a basic idea underlying both is that the technical configurations underlying autonomy in weapon systems can, if prudently adopted and appropriately constrained, lead in concrete cases to better-informed decision-making and better outcomes from a humanitarian perspective.
According to an informal translation, in its paper submitted before the latest meeting, Russia asserts that
[t]he existing complexes of a high degree of military autonomy in the Russian Federation significantly contribute to the compliance during hostilities with such key principles of IHL as proportionality and distinction. This is due to the fact that, in addition to their technological advantages (accuracy, speed, effectiveness), such weapons neutralize human-caused risks (operator’s mistakes due to his or her mental or physiological state, ethical, religious or moral attitudes), and thus reduce the probability of unintentional attacks against civilians and non-military targets.
For its part, the United States argues that emerging technologies in the area of lethal autonomous weapons systems could (among other things) reduce risk to civilians, including through autonomous self-destruct mechanisms (such as ammunition that destroys its projectile after a period of time, posing less risk of inadvertently striking civilians and civilian objects), increased awareness of civilians on the battlefield (including using artificial intelligence to autonomously identify objects of interest from imagery), and automated target identification, tracking, selection, and strike capabilities. In short, the U.S. delegation contends, following extensive testing and verification and in line with preset limits on particular systems, these technologies can be employed to make consequences more foreseeable, to more strictly limit effects, and to more soundly effectuate a commander’s intent.
States on the other side of the stalemate often frame their arguments by asserting that it is unethical and should be illegal to deploy combinations of sensors, data, algorithms, and machines that can kill humans without sufficient human oversight, intervention, control, or judgment. They contend that a consensus can be reached on minimal definitional elements to move to negotiations for a new treaty. And they question whether proponents’ arguments in favor of limited autonomy in weapons can be satisfactorily verified. Buttressed by calls for new law from a growing array of advocacy organizations (especially the Campaign to Stop Killer Robots), Austria, Brazil, and Chile have issued an appeal to negotiate a legally binding instrument that would codify “meaningful human control over critical functions in lethal autonomous weapon systems.”
In the middle sits a not-insignificant number of countries that favor more detailed discussions but are not pressing for new legal instruments — at least not yet. In practice, these States prioritize incremental progress toward closing the normative chasm while keeping the process within the CCW. For at least some of these middle-ground States, a concern is that perceptions of growing recalcitrance by the United States, Russia, and like-minded countries may lead the States calling for new law to initiate a separate process outside of the CCW. That could, so the argument runs, risk creating both a bifurcated normative regime on one of the highest-profile contemporary IHL debates and a possibly unworkable new norm.
What’s Next
The second week of the 2020 GGE meetings is scheduled for November 2–6. On November 11–13, the Meeting of CCW High Contracting Parties will decide how long (somewhere between 10 and 20 days) the GGE will meet in 2021. Perhaps Russia will participate in the second week of the 2020 GGE meetings — or perhaps it will not. Meanwhile, attempting to build on Brazil’s and Germany’s recent efforts outside of the CCW, Japan reportedly plans to host an international conference on autonomous weapons. Austria may organize an event on related themes in early or mid-2021 as well. Off in the distance is the 2021 Sixth Review Conference of the CCW, which will decide the future of the GGE.