Banning Killer Robots? A Way Forward

By: Paul Kumst, Columnist

The upcoming Fifth Review Conference of the Convention on Certain Conventional Weapons (CCW) may set the stage for a preventive ban on Lethal Autonomous Weapons Systems (LAWS), also referred to as “Killer Robots.” However, a variety of factors make it doubtful that a capable and committed Group of Governmental Experts (GGE) will be established to begin negotiations for a comprehensive ban. The lack of a sufficiently defined concept of autonomy, strong military incentives to apply this powerful new technology, and probable US reluctance to support a new CCW Protocol intensify the already considerable difficulties of preventive arms control. If the CCW Review Conference fails—once again—it seems plausible that the main supporter of a complete ban, the Campaign to Stop Killer Robots, will follow the example of the Mine Ban Treaty and the Convention on Cluster Munitions, both of which emerged after the CCW failed to take action.[i]

Autonomous weapons systems can select targets and apply lethal force without any human input or intervention. The relevance of adequate control was reinforced last year when 3,000 robotics and artificial intelligence experts described this technology as a potential revolution in warfare and warned of a “military AI arms race”.[ii]

Proponents argue that these systems offer great advantages, such as limiting the exposure of one's own military personnel or facilitating faster battlefield operations by minimizing or eliminating control-communication latency.[iii] An autonomous system could also be programmed with an “ethical governor,” theoretically improving its ability to discriminate between civilians and combatants and thus reducing negative human impacts due to stress and overreaction.[iv]

Advocates of a ban, however, warn of new strategic instabilities: removing human vulnerability could lower the threshold for the use of force, and this new class of offensive means could undermine conventional deterrence capabilities, potentially triggering an arms race.[v] Furthermore, unforeseeable emergent interactions between two autonomous systems may increase the risk of unintended conflict escalation. To understand how this might play out, consider recent “flash crashes” in the financial markets, where unforeseen events triggered unexpected responses in high-speed trading algorithms, leading to massive losses.[vi] Lastly, setting aside questions of legal accountability, LAWS raise fundamental ethical questions about human dignity: should a machine be able to decide on its own whether to kill a human?

The greatest obstacle to reaching a comprehensive, pre-emptive ban on the development, production, and use of LAWS is the lack of an agreed-upon concept of autonomy. Although this is unsurprising given how quickly the debate has evolved, it increases the risk of falling into rhetorical loopholes. Currently, autonomy is largely defined by the degree of human supervision, whether humans are “in,” “on,” or “out of the loop.”[vii] However, even within this particular perspective, two of the most important positions, US Department of Defense (DOD) Directive 3000.09 and the “Losing Humanity” report by Human Rights Watch, disagree on core concepts such as “full autonomy.”[viii] Other positions further increase the complexity of the discourse, for example by evaluating a system's autonomy based on its ability to operate for extended periods in an open, uncontrolled, and complex environment.[ix] It is therefore safe to say that autonomy “is used by different people in different ways” and that any potential negotiations are currently “particularly challenging.”[x]

There are only two historical examples of successful preventive arms control, the more recent being the effective prohibition of blinding laser weapons under CCW Protocol IV, adopted in 1995. This time, the challenges are considerably greater. Given LAWS' disruptive potential, not only are the stakes much higher, but the technological sophistication involved dramatically increases the difficulty of forging a broad international alliance compared to 1995.[xi] Today, the private sector is heavily invested in robotics and autonomy, which accentuates the technology's dual-use character and the risk of unintended negative consequences from a potential prohibition. In addition, militaries are much more likely to claim military necessity for LAWS than they were for blinding lasers.[xii]

Lastly, though not an obligatory prerequisite, the position of the United States will be a strong determinant of the CCW's mandate. Recent efforts not only to reform its own UAV export guidelines in early 2015 but also to convince signatories of the Arms Trade Treaty to abide by the very same rules indicate a continued and growing sensitivity within the Obama Administration to such concerns.[xiii] While critics point out that these regulations do not grasp the technological nuances of the issue—demonstrating that not even the global leader in autonomy and robotics can create an adequate legal framework—they might still strengthen the push for internationally applicable rules.[xiv] However, the decreased likelihood that the incoming US administration will offer more restrictive and less ambiguous terms in a renewed Directive 3000.09, which is set to expire in 2017, adds to the distress of activists working within the CCW.[xv]

Given these obstacles, it is particularly important for the Campaign to Stop Killer Robots to maintain momentum and build a broader base of support for a ban on LAWS. In 2015, activists were already disappointed when the CCW postponed the decision on a more formal GGE process to December 2016. This time, nothing short of a formalized and credible commitment from the CCW will keep ban advocates convinced that their cause won't “die a slow death.”[xvi]

[i] The Campaign to Stop Killer Robots, “Frequently Asked Questions (FAQs),” April 5, 2016.

[ii] Future of Life Institute, “Autonomous Weapons: An Open Letter From AI & Robotics Researchers,” July 28, 2015.

[iii] Frank Sauer, “Stopping ‘Killer Robots’: Why Now Is the Time to Ban Autonomous Weapons Systems,” Arms Control Association, September 30, 2016.

[iv] Ronald Arkin, “Lethal Autonomous Systems and the Plight of the Noncombatant,” AISB Quarterly, July 2013.

[v] Niklas Schörnig, „Krieg der Zukunft: Störfaktor Mensch“ [“War of the Future: The Human as Disruptive Factor”], inforadio rbb, November 25, 2016.

[vi] Jamie Condliffe, “Algorithms Probably Caused a Flash Crash of the British Pound,” MIT Technology Review, October 7, 2016.

[vii] Christof Heyns, “Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions,” Human Rights Council, April 9, 2013.

[viii] Human Rights Watch, “Review of the 2012 US Policy on Autonomy in Weapons Systems,” April 15, 2013.

[ix] International Committee of the Red Cross, “International Humanitarian Law and the challenges of contemporary conflicts,” October 31, 2011.

[x] Paul Scharre, “Between a Roomba and a Terminator: What is Autonomy?,” War on the Rocks, February 18, 2015.

[xi] Rebecca Crootof, “Why the Prohibition on Permanently Blinding Lasers is Poor Precedent for a Ban on Autonomous Weapon Systems,” Lawfare, November 24, 2015.

[xii] Frank Sauer, “Stopping ‘Killer Robots’: Why Now Is the Time to Ban Autonomous Weapons Systems.”

[xiii] Aaron Mehta, “US Seeking Global Armed Drone Export Rule,” Defense News, August 25, 2016.

[xiv] Andrew Hunter and Andrew Metrick, “Obama's New Drone Export Rules Won't Sell More Drones,” Defense One, March 11, 2015.

[xv] Heather Roff and P.W. Singer, “The Next President Will Decide The Fate Of Killer Robots – And The Future Of War,” Wired, September 6, 2016.

[xvi] Frank Sauer, “Stopping ‘Killer Robots’: Why Now Is the Time to Ban Autonomous Weapons Systems.”
