This week nations meet at the United Nations to discuss lethal autonomous weapon systems (LAWS), including robotic weapons that might hunt for targets on their own. It has been 18 months since the last round of meetings. This year’s discussions, the fourth round since 2014, mark the first time the talks will be held as a Group of Governmental Experts (GGE), a more formal format than the earlier meetings. While the shift to a more formal format might seem like progress toward an international consensus on what to do about autonomous weapons, the reality is that the pace of diplomacy continues to fall far behind the speed of technological advancement. Those advances include major new capabilities, but also newly discovered limits, in autonomy and artificial intelligence.
When nations first began discussing autonomous weapons in 2014, the issue was fairly forward-looking. Lethal robotic weapons seemed like a distant, future problem (even though simple versions had been used in limited ways for decades). In the years since, however, the field of artificial intelligence and machine learning has grown by leaps and bounds. Powered by big data, greater computer processing power, and improvements in algorithms, AI-enabled systems are now beginning to tackle problems that had been intractable for decades. AI systems have beaten humans at poker and the Chinese strategy game Go, most recently reaching superhuman-level play at Go after a mere three days of self-play with no human training data. AI systems can translate languages and transcribe speech. Self-driving cars are taking to the roads. Nation-states have deployed armies of Twitter bots to push propaganda. AI is being applied to medicine, finance, media, and many other industries. Our lives are increasingly influenced by algorithms. What might have seemed like science fiction when nations began talks only a few years ago is fast becoming the everyday.
No country has yet said it plans to build fully autonomous weapons, but several major military powers have made clear that robotics and artificial intelligence are key elements of their militaries’ competitive strategies. The United States has made robotics and autonomy a centerpiece of its “Third Offset Strategy” to reinvigorate America’s technological edge. China released a new national strategy on artificial intelligence earlier this year. And most recently, Russian President Vladimir Putin said that whoever leads in AI “will become the ruler of the world.”
These political and technological developments change the context for UN talks on autonomous weapons significantly. To the extent that leading military powers see robotics and automation as central to their continued competitiveness, it may become harder to convince them to adopt any restrictions on use. More broadly, though, it can be difficult for nations to find secure policy footing in an ever-shifting technological landscape. The rapid pace of developments in AI means that the art of the possible is constantly changing. The impossible today could very well be achieved tomorrow, and it is difficult to predict how AI technology will unfold.
The reality is that AI technology today has limitations. Artificial intelligence is not magic. Just as the internal combustion engine enables intercontinental travel but does not amount to teleportation, artificial intelligence is powerful only within narrow parameters. Any new technology can seem magical before its limits are understood. Using a technology effectively – and setting policy for it – requires understanding the practical details of what the technology can and cannot do. There would be no point in setting requirements for airline or highway safety without understanding the particulars of how airplanes and cars work.
Today, we can create special-purpose, narrowly intelligent machines that perform many tasks as well as or better than humans. In some cases, the machines are not as good as the best humans (this is the case in cybersecurity today), but they are good enough to be useful. Because machines can be deployed at scale, they can often solve problems more cost-effectively than humans even if they are slightly slower, less capable, or more error-prone. For some tasks, though, we can create machines that are objectively better than even the best humans. This is the case today for many games, including checkers, chess, Go, poker, and Jeopardy. In other settings, such as driving, the reaction times and constant vigilance of machines may give them a major advantage over humans.
In all of these cases, however, the machines that we are building are only narrowly intelligent. They can understand the task they are designed to perform, but not the context surrounding that task. Some types of artificial intelligence involve hard-coding rules for behavior into the system; an airplane autopilot is one example. Other techniques involve machine learning, in which a system is given a goal and fed large amounts of training data, or given the opportunity to interact with an environment, in order to learn the optimal behavior. Machines have learned to play Go and simple computer games in this fashion. Over time, as algorithms, computer processing power, and data sets improve, it will become possible to apply these techniques to more complex problems in more sophisticated environments. Machine learning techniques have proven particularly powerful for training purpose-built machines to perform tasks, in many cases far better than humans, but only when certain conditions are met: the task needs to be clearly specified, and there must be sufficient data (real-world data or “synthetic” data from simulations) to feed the learning process.
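To make that distinction concrete, here is a minimal sketch in Python. Everything in it is illustrative: the task (deciding whether a point lies above the line y = x), the data, and the numbers are all synthetic, chosen only to place a hand-coded rule next to a model that learns the same rule from labeled examples.

```python
# Illustrative sketch only: a hand-coded rule vs. a rule learned from data,
# on a toy task (is a point above the line y = x?). All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

def rule_based(point):
    """Hard-coded behavior, in the spirit of an autopilot's fixed logic."""
    return 1 if point[1] > point[0] else 0

# Machine learning: the goal is conveyed only through labeled training examples.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(1000, 2))        # 1,000 random 2-D points
y = (X[:, 1] > X[:, 0]).astype(int)           # labels implicitly encode the goal
learned = LogisticRegression(max_iter=1000).fit(X, y)

print(rule_based([0.2, 0.9]))                 # 1: the rule was written by hand
print(learned.predict([[0.2, 0.9]])[0])       # 1: the rule was inferred from data
```

Both approaches give the same answer here, but for different reasons: one because a human wrote the rule down, the other because that rule was the best fit to the examples the system was shown.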
What these limitations mean is that many tasks will remain difficult to solve with current techniques. More importantly, under current methods the machines we build will continue to be only narrowly intelligent within their respective domains. Machine intelligence, at least to date, is brittle. When the environment or context for a task changes such that a machine is operating outside the bounds of its programming, it can go from super-smart to super-dumb in an instant. Machines that can beat the very best humans in their domains can also make inexplicable errors that appear to demonstrate a lack of “common sense.” One of the major limitations of machine intelligence today is that machines cannot understand the context for their actions. And while they can perform specific tasks, they lack what we might think of as judgment: the ability to weigh various contextual factors and competing values.
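One way to see this brittleness is with a deliberately contrived, synthetic example (nothing below corresponds to any real system or dataset). The model looks nearly flawless in the context it was trained in, then fails badly the moment a background regularity it quietly relied on changes.

```python
# Illustrative sketch only, with synthetic data: a model that excels in its
# training context but fails when that context shifts. During training, feature 0
# (a weak "real" signal) and feature 1 (a clean but spurious cue) always agree,
# so the model leans on the cue. At deployment the cue flips, and the model is
# confidently wrong most of the time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

y_train = rng.integers(0, 2, size=2000)
signal = y_train + rng.normal(0, 0.5, size=2000)           # weak true signal
spurious = y_train + rng.normal(0, 0.1, size=2000)         # strong cue, valid only in training
X_train = np.column_stack([signal, spurious])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy in the training context:", model.score(X_train, y_train))   # near-perfect

y_test = rng.integers(0, 2, size=2000)
signal_t = y_test + rng.normal(0, 0.5, size=2000)
spurious_t = (1 - y_test) + rng.normal(0, 0.1, size=2000)  # the cue now points the wrong way
X_test = np.column_stack([signal_t, spurious_t])
print("accuracy when the context shifts:", model.score(X_test, y_test))     # collapses far below chance
```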
Some of these limitations may be overcome by smarter, more capable systems that can account for more variables. There are technical limits to current machine learning approaches, however, that make it difficult for machines to transfer learning from one task to another, or to acquire expertise in multiple domains without suffering from “catastrophic forgetting.” These limitations may be overcome in the relatively near term – researchers are making progress on these problems – or they could persist indefinitely.
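The sketch below is one minimal, synthetic illustration of this kind of forgetting (the tasks, data, and network size are all invented for the example): a small network masters one task, is then trained only on a second task, and loses most of its skill at the first because the same weights get overwritten.

```python
# Illustrative sketch only: "catastrophic forgetting" on two synthetic tasks.
# Task A depends on feature 0, task B on feature 1; training on B overwrites
# the weights that made the network good at A.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_task(n, key_feature):
    X = rng.normal(size=(n, 2))
    y = (X[:, key_feature] > 0).astype(int)
    return X, y

XA, yA = make_task(2000, key_feature=0)   # task A keys on feature 0
XB, yB = make_task(2000, key_feature=1)   # task B keys on feature 1

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
net.fit(XA, yA)
print("task A accuracy after learning A:", net.score(XA, yA))   # near-perfect

for _ in range(200):                      # keep training, but only ever on task B
    net.partial_fit(XB, yB)
print("task A accuracy after learning B:", net.score(XA, yA))   # falls toward chance
```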
Even as machines advance, however, these new and powerful techniques in artificial intelligence introduce novel vulnerabilities. These can take the form of safety risks from machines that fail in unexpected ways, or security risks from flaws that adversaries exploit. Machine learning systems can learn the wrong behavior, either because the data was bad or because the goals were not properly specified. This could happen by accident, or because adversaries deliberately poison training data, manipulate the machine into learning the wrong behavior, or exploit flaws in the system’s goal structure. The result could be a system that simply fails to perform its task adequately, or one that acts in some harmful way. Deep neural networks, a powerful form of AI used in a variety of applications today, have shown a particular vulnerability to a form of spoofing attack in which the machine is fed false data to manipulate its decision-making. This can be done without access to the training data or any knowledge of the algorithm’s internal logic. Moreover, these spoofing attacks can be hidden so that they are invisible to humans, and there is currently no known effective defense against them.
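The toy example below sketches the underlying mechanism with a plain linear classifier rather than a deep network; it is illustrative only, and the real attacks are more sophisticated and, in their black-box form, require no access to the model’s weights at all. The point it makes is simply that a tiny, deliberately chosen nudge to every input feature, aligned with the model’s sensitivities, is enough to flip its decision.

```python
# Illustrative sketch only: the logic behind gradient-sign-style spoofing, shown
# on a linear classifier with synthetic data. A perturbation of eps * sign(w),
# pointed in the most damaging direction, flips the prediction even though each
# individual feature barely changes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 1000                                           # high-dimensional input (think pixels)
X = rng.normal(size=(5000, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(int)
model = LogisticRegression(max_iter=2000).fit(X, y)

x = rng.normal(size=(1, d))                        # a new input to be classified
w = model.coef_[0]
score = model.decision_function(x)[0]

# Nudge every feature just far enough, in the most damaging direction, to cross the boundary.
eps = 1.01 * abs(score) / np.abs(w).sum()
x_adv = x - np.sign(score) * eps * np.sign(w)

print("original prediction:", model.predict(x)[0])
print("spoofed prediction: ", model.predict(x_adv)[0])   # flipped
print("per-feature change: ", eps)                       # small relative to the feature scale of 1
```

The change to each individual feature is a small fraction of the inputs’ natural scale, which is what makes this kind of manipulation so hard for a human reviewer to notice.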
These limitations don’t mean that nations or other actors will necessarily avoid using AI technology. Software is eating the world despite its glitches and cyber vulnerabilities. Artificial intelligence is, in turn, eating software as machines become more intelligent. But these improvements will introduce their own vulnerabilities that policymakers will need to understand, particularly in adversarial settings like warfare.
The problem that policymakers face in grappling with the challenge of autonomous weapons is not that machines have certain limitations and vulnerabilities today. It is that the pace of progress makes setting effective policy so difficult. If all progress in AI stopped today, it might take some time to understand what is and is not possible, but reasonable guidelines could be set for how to use the technology. The problem is uncertainty about what the future might bring. Part II of this article will cover different approaches policymakers might take to deal with this uncertainty.