If killer robots are coming, many prominent artificial intelligence developers want no part in building them. That’s the heart of a pledge, signed by over 160 AI-related companies and organizations, released to the public July 17 in Stockholm. The pledge is short, clocking in at under 300 words, and it rests on a simple, if somewhat unusual, promise: If violence is to be done, so be it, but life-ending decisions should be squarely the domain of humans, not machines.
“Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems,” reads the Lethal Autonomous Weapons Pledge in part.
The pledge continues, “Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage. Stigmatizing and preventing such an arms race should be a high priority for national and global security.”
In highlighting the threat posed by lethal autonomous systems, the authors group biological, chemical and nuclear weapons together as managed, solvable threats, a curious framing for a pledge premised on the superiority of human control over algorithmic decisions. Chemical weapons are hardly a relic of the past; their use by the Assad government in Syria has drawn international condemnation, singled out as a cruel and unconventional weapon in a war rife with cruelty from conventional weapons.
Nuclear arsenals, too, are only stable to the extent that policy makers and those with nuclear launch authority (in the United States, control rests entirely with a single human in an executive capacity; elsewhere it is vested in a select council) can trust the information on which a launch decision would rest. The signals that feed into the broader structure of a nuclear command-and-control system come from machines, filtered by humans.
When Soviet lieutenant colonel Stanislav Petrov refused to pass along a warning of an American nuclear launch in 1983, it was because he did not trust the sensors that fed him that information and found no confirmation elsewhere. Petrov is perhaps a poster child for human control; he saw through a false positive and waited for confirmation that never came, likely averting a thermonuclear exchange. But to treat the nuclear question as relatively solved and free from arms races is to assume a preponderance of Petrovs throughout the nuclear establishments of several nations.
The pledge goes further than simply highlighting the danger of leaving lethal decisions to AI instead of humans. The signatories themselves pledge that “we will neither participate in nor support the development, manufacture, trade or use of lethal autonomous weapons.”
The “how” and the “what” of lethal autonomous weapons are left undefined. To some extent, autonomy is already present throughout existing weapons, in everything from guided missiles to land mines and more. This is no small issue — the definition of lethal autonomy in international law remains a hotly debated subject, and militaries often formally disavow lethal autonomy while committing to greater degrees of human-overseen autonomous systems. Would the pledge signers agree to design autonomous sensor systems, which are then incorporated into a weapon system by a third party after completion? Is there a provision for auto-targeting built into defensive systems, like those made to track and intercept rockets or drones?
It is maybe too much to expect that the Lethal Autonomous Weapons Pledge define lethal autonomy before the term is grounded in international law. And for people concerned about private companies, university research teams and governments actively working on weapons that can think and decide who to kill, the pledge is one effort to stop the harm before it’s committed. Yet the how and the what of the pledge are vital questions, ones that will likely need to be answered publicly as well as internally, if the signatories are truly to see a world where nations refuse to develop, field and use thinking machines for violence.
Without clarity about what lethal autonomy means, the pledge risks becoming another Washington Naval Treaty, a well-intentioned scheme to prevent future arms races that nations tossed aside as soon as it became inconvenient and that is remembered today as little more than trivia.
Kelsey Atherton blogs about military technology for C4ISRNET, Fifth Domain, Defense News, and Military Times. He previously wrote for Popular Science, and also created, solicited, and edited content for a group blog on political science fiction and international security.