As recently reported by Sarah Knuckey and Bonnie Docherty, the Informal Meeting of Experts on Lethal Autonomous Weapons Systems, convened under the Convention on Certain Conventional Weapons (CCW), was held at the United Nations in Geneva from May 13 to 16, 2014. This post goes into further detail in three areas: Article 36 reviews, the principle of humanity / human dignity, and “meaningful human control.”
Article 36 reviews
As Sarah noted, many States saw reviews under Article 36 of Additional Protocol I (or the similar, but possibly more limited, requirement under customary international law) as a fruitful area for further discussion. Article 36 reviews are highly relevant to the discussion of lethal autonomous weapons systems (LAWS) because such reviews are required not just of new weapons, but also of new “means or methods” of warfare. The “interesting” feature of LAWS, after all, is not the effect caused by the weapon (as with blinding laser weapons) or the weapon itself (as with anti-personnel land mines), but the process for target selection and engagement (i.e., the autonomy).
It would appear, though, that there is some disagreement between some States and experts, on the one hand, and some NGOs, on the other, over whether a LAWS that could fully comply with existing black-letter international humanitarian law (IHL) rules (hypothetically in the view of some, potentially in the view of others) would nonetheless be unlawful. Some argue that the very nature of autonomous target selection and engagement is contrary to international law. In particular, they contend it would violate the “principles of humanity” and “dictates of [the] public conscience” found in the Martens Clause, as well as the principle of human dignity found in human rights law.
LAWS, the Martens Clause and human dignity
There are currently a number of problems with this line of argument. First, the types of LAWS that might be developed should not be treated as a homogeneous group. Arguably, there are qualitative differences between a LAWS programmed to undertake battlefield interdiction by seeking out and attacking enemy tanks, a LAWS designed to identify and attack a named individual, and a LAWS developed to undertake so-called signature strikes.
Would the public conscience truly be offended by a LAWS that conducts battlefield interdiction when compared with a pilot flying over uncontested airspace and attacking enemy tanks from 15,000 feet? Rather than a targeting algorithm, does the principle of humanity require a human operator of an unmanned aerial vehicle, perhaps sitting thousands of kilometres away, so that a human decides to fire a Hellfire missile at a tank that may have zero, one, two or five unseen crew inside? That seems a fairly nuanced and thin distinction, recalling that the starting assumption for this part of the discussion is that the targeting algorithm can fully comply with all black-letter IHL.
Assume that the target review process has identified a person as subject to lawful attack, but that the person’s current location poses too high a risk of disproportionate collateral damage. What is the qualitative difference between surveilling that person with an unmanned aerial vehicle and waiting to conduct an attack until the target is clear of civilians where: (1) a person fires the missile (noting that, in all likelihood, the person who fires the missile would not have decided who was to be attacked, but is merely waiting for an opportunity to do so); or (2) a LAWS undertakes the same attack (indeed, possibly with even more precision; for example, instead of firing a weapon, the LAWS itself might be the weapon and could detonate upon direct contact with the target’s body, thereby allowing an extremely small charge to be used)? In both cases, the decision to kill the target has already been made; it is just a matter of suitable parameters being met (i.e., the target being sufficiently clear of civilians based on the effects radius of the chosen weapon) for the weapon to be fired.
We need much more discussion of the principles of humanity and dignity before concluding that these principles would permit the two unmanned aerial vehicle attacks described above but not the equivalent LAWS attacks.
LAWS and ‘meaningful human control’
Both Sarah and Bonnie note the discussion of a term new to IHL: “meaningful human control.” It would appear that this term is being used to achieve two things: first, to inject the asserted requirement of “humanity” into targeting decisions; and second, to provide for individual accountability (not just State responsibility) for targeting decisions in cases of breaches of IHL.
As to the humanity requirement, again, not all types of LAWS are the same. At the moment, it appears to be stated, rather than argued, that the process of autonomy renders all types of attacks by LAWS inhumane. But is this the case when compared with other methods of warfare, such as trench warfare or the aerial bombing of poorly trained conscripts by a high-technology opponent, both of which are perfectly lawful?
As for individual accountability, more discussion and careful legal analysis is required. A common question is: who would be accountable if a LAWS made a mistake? To which one might ask: who would currently be accountable if a guided weapon system did not track accurately? Or if a soldier at a checkpoint fired on a vehicle that the soldier thought was a threat but that, upon review, contained (non-threatening) civilians? Arguably, there is a meaningful legal distinction between: (1) a LAWS that has been subject to appropriate test and evaluation and then does not perform as expected; and (2) the use of a LAWS in an environment where the person who activates the system does not have the required level of confidence that the LAWS will satisfy the IHL requirement that in “the conduct of military operations, constant care shall be taken to spare the civilian population, civilians and civilian objects.” Simply put, individual accountability is a means to an end (protecting civilians and other non-combatants, and fewer IHL violations generally) and not an end in itself.
Perhaps there should be less emphasis on the currently undefined concept of “meaningful human control” and much greater emphasis on reducing risk to the civilian population and ensuring compliance with IHL. Take the hypothetical example of a LAWS that had no “meaningful human control” but, for the sake of the hypothetical, would be more discriminate than a weapon under human control. Would that hypothetical LAWS be “lawful” in the view of the opponents of LAWS? If the answer is “no,” then a method of warfare that was more discriminate (surely a laudable goal and a desired “end-state” of IHL) would be unlawful because of the process (autonomy) used. This conclusion would mean that an implied requirement of IHL (to have “meaningful human control” over a weapon system) would take precedence over an explicit rule of IHL to “take all feasible precautions in the choice of means and methods of attack with a view to avoiding, and in any event to minimizing, incidental loss of civilian life, injury to civilians and damage to civilian objects.”
Again, we need much more discussion about this concept of “meaningful human control” and what it adds to IHL. Much like accountability, should we strive for “meaningful human control” as an end in itself, or merely as one way, but not necessarily the exclusive way, to reduce the risk to the civilian population and to promote compliance with IHL? Perhaps a better approach would be simply to say that the use of LAWS must meet the requirements of IHL, the very body of law that was created to reduce the impact of armed conflict on combatants and non-combatants alike.
Conclusion
Anyone interested in this area is encouraged to read the materials available from the May 2014 CCW Meeting of Experts. At the same time, caution should be exercised against drawing firm conclusions at this stage. As in the early stages of the development of new technologies, new ideas and theories need to be tested and evaluated. Some will stand the test of time, while others will merely be stepping stones on the way to more useful ideas and theories.
This note was written in the personal capacity of the author and does not necessarily represent the views of the Australian Government or the Australian Department of Defence.