Technological advances in robotics and other fields are already assisting soldiers with dull, dirty, and dangerous jobs on the battlefield. Within the military such advances should be encouraged, particularly in areas such as detecting explosive devices, search and rescue, and certain engineering tasks. However, many in uniform, both serving and retired, are seriously concerned about the prospect of delegating responsibility for the critical functions of weapons systems to machines: that is, assigning to machines the decisions on whether, what, and when to engage with lethal force.

The emerging technology to do just that raises many technical, legal, operational, ethical and moral challenges.

A number of nations, the International Committee of the Red Cross, Human Rights Watch and others have raised questions and concerns about how the use of such fully autonomous weapons systems could comply with international humanitarian law and international human rights law, and about how legal accountability could be ensured when a machine violates the laws of war.

Certainly the challenges of asymmetric and urban warfare compound the already difficult decision-making process for humans. Exactly who is a combatant, and how will that combatant behave in any given situation? And how can a machine be programmed in advance to make the kinds of decisions that surely call for human judgment? A pre-programmed set of criteria might not enable an autonomous weapon to abide by international humanitarian standards even if the weapon never malfunctioned or committed an error. But what if it did?

Here’s a lesson from the past. “No plan of operations extends with any certainty beyond the first encounter with the enemy’s main strength,” observed the German military strategist Helmuth von Moltke in the mid-1800s. Over time that statement has been aptly simplified to “no plan survives contact with the enemy.” How then, in building scenarios probably years in advance and probably in a laboratory, do we ensure that the machine is correctly programmed to meet changing situations and that it meets the stringent, and often context-dependent, requirements of the laws of war? There are bound to be significant flaws in any such programming, both within militaries that have the most advanced computer technology and across the wider swath of militaries with less technological know-how. History has shown us that when more modern militaries remove weapons from their inventories, those weapons tend to trickle down to less capable forces, and those forces often do not maintain the same standards and safeguards as the original producer. Ultimately, who is going to be held accountable?

I’m neither an ethicist nor a technophobe, nor necessarily an outspoken human rights advocate. But after serving in the Canadian Army from 1969 to 2002 and then in Canada’s Department of Foreign Affairs until 2013, last month I joined the growing ranks of people concerned about this issue, participating in the Campaign to Stop Killer Robots delegation to the UN meeting on “lethal autonomous weapons systems.”

At one point during the week-long Convention on Conventional Weapons (CCW) meeting in Geneva, the head of the US delegation felt compelled to remark, in his personal capacity as a military veteran, that he objected to an “expert” speaker’s characterization of human soldiers as “openly racist and nationalistic, acting out of hate, anger, and other degrading motivations.”

I strongly support that statement by the US head of delegation. While it may be true that prolonged periods of combat can dehumanize soldiers and their commanders, there is no better current or foreseeable way to ensure that the laws of war are rigidly applied than to trust in human judgment. When it comes to making life and death decisions in war, there is no more “perfect” system. In that respect I believe it is a bit like democracy. As Winston Churchill said, “democracy is the worst form of government, except all those others that have been tried.”

Abdicating life and death decisions to a machine is morally, ethically and legally flawed. A machine does not have the skills, knowledge, intelligence, training, experience, humanity, and morality that men and women in uniform combine with situational awareness and knowledge of the laws of war when they make decisions during conflict.