Moving the National Security AI Conversation Beyond Killer Robots

Drone in the sky. Photo credit: Pixabay.com 


By: Diane Pinto, Columnist                                   

When discussing the applications of artificial intelligence (AI) in the military and defense spaces, it is easy to imagine terminators and cylons running amok, killing indiscriminately. Elon Musk famously claimed that AI is humanity’s “biggest existential threat” and compared it to “summoning the demon.”[i] In reality, this is not the case. AI and machine learning technologies are mainly useful for simplifying tasks that require copious amounts of data to be processed quickly and accurately – tasks that are difficult or inefficient for humans and traditional systems, or simply made easier with AI. National security entities like the military, law enforcement, and the intelligence community can use AI technologies to enable better decision-making, oftentimes in combat scenarios where the speed of an accurate response can mean life or death. Recently, the Pentagon announced a new Joint Artificial Intelligence Center to use AI to do just this and to find ways that AI can make Department of Defense services more efficient.[ii] In addition, the U.S. government and military take the ethical use of AI into consideration so as to reduce inadvertent consequences and to ensure the accountability of human actors.[iii] Limiting the conversation about AI implementation in national security to “killer robots” is a mistake.

AI technologies are useful for a wide range of national security and military purposes, simplifying tasks by interpreting large volumes of important data. In overwhelming battlefield situations that require quick decision-making, AI can speed the processing of situational data. For example, a fighter pilot who loses control of their aircraft might experience disorientation or have only seconds to fix the situation before crashing or ejecting. An AI-enabled avionics system could process aircraft sensor data to help the pilot respond to a specific malfunction, or potentially assist the pilot in recovering quickly through AI-enabled autopilot systems.[iv]

AI tools can also help expand the scope of information taken into consideration before planning a mission. Military planners can use AI tools to sift through surveillance data gathered by drones on the geography, population, and armaments of a potential target. Processing that data with AI tools can enable military planners to determine target locations and perhaps find new targets based on patterns in the data not otherwise recognizable to humans or traditional systems.[v] Once a precision-bombing mission is planned, this precise information can reduce the number of civilian casualties. This was the intent behind the Department of Defense’s Project Maven, which enlisted commercial AI tools – originally from Google, before the company withdrew from its contract – to find military and intelligence targets using surveillance footage.[vi]

These examples are but a few of the various benefits that AI has to offer the military, intelligence services, and others in the national security space. They show that the primary use for AI is not in fully autonomous killing machines, but rather in processing immense amounts of data to make tasks like finding targets or flying a plane in an emergency much easier than if they were done manually or with traditional software tools. This enables military personnel, civilian partners, and many others to efficiently make better decisions than they otherwise would. AI tools generate valuable data and assessments for enhanced planning and execution of operations.

To be clear, there are still serious ethical concerns about AI’s application in the military and defense spaces. Using the ‘killer robot’ concern as an example, humans are appropriately involved in the decision-making process for acts of killing and destruction. Fittingly, many policymakers and stakeholders have taken steps to ensure that these acts are not one hundred percent automated using AI tools. Thus far, U.S. doctrine for AI application has held that when a piece of technology will be used to kill or cause destruction, a human is the ultimate executor of that decision.[vii] Innovators from around the world have pledged not to create tools that deal death and destruction at the behest of a machine.[viii] Both President Obama’s 2016 National AI Research and Development Plan[ix] and President Trump’s Executive Order on Maintaining American Leadership in AI[x] call for the ethical implementation of AI systems. The European Parliament passed a resolution calling for a ban on ‘killer robots.’[xi] Even China’s national AI strategy called for a separate government entity to study the ethics of AI applications.[xii] It remains to be seen how effective this ethics strategy will be for China, whose ideologies and values differ from those of Western powers, but it shows that China, too, is to some degree concerned about potential misuses of AI in both civilian and military cases.

Such international concern over the ethical implications of AI should reduce fears over its implementation. In any case, the U.S. government should continue holding hearings and meeting with industry partners, civil society groups, and allies to ensure that any potential policy or regulatory approach accounts for the vast scope of stakeholders in the AI space and maximizes AI’s potential in an ethical fashion. For the time being, it seems safe to say that in the United States, at least, humans will never be completely removed from strategic planning, the operationalization of that strategy, or the execution of military, intelligence, homeland security, and similar national security operations. AI tools will simply enhance how effectively those in the U.S. national security apparatus carry out their duties. While there are certainly legitimate ethical concerns about the widespread implementation of AI systems in the national security space, the government continues to seek ethical ways to implement AI technology. For the military, this has thus far meant that a human will always be the ultimate entity holding the trigger. Understanding how AI will be used to improve these tasks will allow the conversation among policymakers and civilians to evolve beyond fears of killer robots and toward more productive and ethical ways of implementing AI tools in the national security arena.

Bibliography

[i] Matt McFarland, “Elon Musk: ‘With artificial intelligence we are summoning the demon,’” The Washington Post. <https://www.washingtonpost.com/news/innovations/wp/2014/10/24/elon-musk-with-artificial-intelligence-we-are-summoning-the-demon/?utm_term=.287246ad9113>, October 24, 2014.

[ii]  Mark Pomerleau, “How DoD is getting serious about artificial intelligence,” C4ISRNet. <https://www.c4isrnet.com/c2-comms/2018/12/19/how-dod-is-getting-serious-about-artificial-intelligence/>, December 19, 2018.

[iii]  Patrick Tucker, “Pentagon Seeks a List of Ethical Principles for Using AI in War,” Defense One. <https://www.defenseone.com/technology/2019/01/pentagon-seeks-list-ethical-principles-using-ai-war/153940/>, January 4, 2019.

[iv]  Keith Button, “A.I. in the cockpit,” Aerospace America. <https://aerospaceamerica.aiaa.org/features/a-i-in-the-cockpit/>, January 2019.

[v]  Michael Horowitz, Paul Scharre, Gregory C. Allen, Kara Frederick, Anthony Cho, and Edoardo Saravalle. “Artificial Intelligence and International Security.” Center for a New American Security. <https://www.cnas.org/publications/reports/artificial-intelligence-and-international-security>, July 10, 2018.

[vi] Zachary Fryer-Biggs, “Inside the Pentagon’s Plan to Win Over Silicon Valley’s AI Experts,” Wired. <https://www.wired.com/story/inside-the-pentagons-plan-to-win-over-silicon-valleys-ai-experts/>, December 21, 2018.

[vii]  Luke Hartig, “Solving One of the Hardest Problems of Military AI: Trust,” Defense One. <https://www.defenseone.com/ideas/2019/04/solving-one-hardest-problems-military-ai-trust/155959/>, April 1, 2019.

[viii] “Lethal Autonomous Weapons Pledge,” Future of Life Institute. <https://futureoflife.org/lethal-autonomous-weapons-pledge/?cn-reloaded=1>.

[ix]  “The National Artificial Intelligence Research and Development Strategic Plan,” National Science and Technology Council, Networking and Information Technology Research and Development Subcommittee. <https://www.nitrd.gov/PUBS/national_ai_rd_strategic_plan.pdf>, October 2016.

[x]  Donald J. Trump, “Executive Order on Maintaining American Leadership in Artificial Intelligence,” The White House. <https://www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/>, February 11, 2019.

[xi] Daphne Psaledakis, “EU lawmakers call for global ban on ‘killer robots,’” Reuters. <https://www.reuters.com/article/us-eu-arms/eu-lawmakers-call-for-global-ban-on-killer-robots-idUSKCN1LS2AS>, September 12, 2018.

[xii] Graham Webster, Rogier Creemers, Paul Triolo, and Elsa Kania, “Full Translation: China’s ‘New Generation Artificial Intelligence Development Plan’ (2017),” New America. <https://www.newamerica.org/cybersecurity-initiative/digichina/blog/full-translation-chinas-new-generation-artificial-intelligence-development-plan-2017/>, August 1, 2017.
