Stylized representation of Joint All-Domain Command and Control. Photo credit: Raytheon Intelligence and Space.
In 2018, the Department of Defense (DoD) established the Joint Artificial Intelligence Center (JAIC) to consolidate the DoD’s artificial intelligence R&D projects under one organization. The JAIC is currently responsible for over 30 projects, ranging from battlefield AI to far more mundane programs.[i] Yet as the JAIC becomes increasingly focused on integrating AI technologies into battlefield roles, a host of ethical questions must be addressed, and the JAIC should make transparency a top priority. A robust public debate over the acceptable uses of military AI may be uncomfortable, but it will ultimately lead to healthier development programs and to the creation of norms and standards around how the technology is used.
Initially, JAIC projects focused on support tasks such as logistics, healthcare, and business operations. According to Nand Mulchandani, the JAIC’s chief technology officer, the “early products … were really focused on kind of starter AI projects when it came to things like predictive maintenance and humanitarian assistance and disaster relief.”[ii] However, as the JAIC has evolved, its main focus has shifted to supporting joint warfighting and network-centric warfare.[iii]
Joint warfighting is a doctrine that calls for the integration and synchronization of capabilities across the different military branches into a single, unified action.[iv] To facilitate this level of coordination, the Pentagon has pioneered a concept called network-centric warfare, in which information technology and a growing number of sensors are used to cut through the fog of war and gain decisive advantages in coordination and information awareness. Though the term dates back to the 1990s, the network-centric trend continues today in the Joint All-Domain Command and Control (JADC2) program, which aims to link every sensor and information system across the military branches to allow unprecedented insight into and control over the battlefield.[v] However, the resulting flood of raw data must be rapidly processed and understood. AI systems that can analyze this information, identify patterns, and provide recommendations will be key to making sense of these inputs in a timely manner.
Beyond information processing and joint warfighting support, more active autonomy has also been named as a future goal for the JAIC.[vi] Yet the use of AI in more active autonomous programs, such as lethal autonomous weapon systems (LAWS), is likely to receive notably more pushback. For instance, the Campaign to Stop Killer Robots maintains a list of 30 countries that have so far declared their support for a ban on LAWS.[vii] In open letters organized by the Future of Life Institute, over 4,000 AI experts and 200 technology companies have called for action against the proliferation of military AI.[viii] While such concerns have prompted work on international AI regulations under the UN Convention on Certain Conventional Weapons, meaningful standards and norms are unlikely to be enacted or enforced unless major technological powers such as the U.S., UK, China, and Russia take the lead.
Given the array of potential applications of and concerns about military AI, navigating ethical questions will be one of the biggest challenges for the JAIC as it continues to develop as an organization. To its credit, the JAIC has already committed to “leading in military ethics and AI safety” as part of its core mission.[ix] To that end, the JAIC should focus on public transparency and lead a robust public debate on the acceptable uses of military AI technologies. This starts with developing and publicizing a framework for classifying projects by their level of decision-making freedom and their proximity to critical (e.g., life-or-death) situations; a simple sketch of what such a framework might look like is given below. To the extent possible, the framework should then be used to categorize all ongoing projects and to determine what safeguards and oversight each requires. Enough information should be made available for these categorizations to be debated in a public forum. While such transparency will likely invite scrutiny that the JAIC may wish to avoid, it also offers real benefits. The first is that a public conversation is critical to the creation of socially acceptable norms and standards surrounding the use of military AI. AI will bring transformational change, and the public has the right to understand, debate, and discuss how it will be used to fight wars. The JAIC has correctly identified ethics and governance as a core mission, but the public also deserves the right to provide input into the process of shaping these AI norms. If we are to have any hope of creating international standards for the acceptable use of military AI, we must start by creating such standards at home.
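To make the proposed framework concrete, the sketch below shows one hypothetical way such a classification could be structured: two axes (decision-making freedom and criticality of the situations a system touches) mapped to oversight tiers. The JAIC has not published a taxonomy like this, so every category, name, and threshold here is an illustrative assumption, not an actual DoD scheme.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical axes for classifying military AI projects. These categories
# are purely illustrative; they are not an actual JAIC or DoD taxonomy.
class Autonomy(Enum):
    ADVISORY = 1    # system recommends, humans decide
    SUPERVISED = 2  # system acts, humans can intervene
    AUTONOMOUS = 3  # system acts without real-time human input

class Criticality(Enum):
    ROUTINE = 1      # e.g., predictive maintenance
    OPERATIONAL = 2  # e.g., logistics, imagery analysis
    LETHAL = 3       # life-or-death outcomes

@dataclass
class Project:
    name: str
    autonomy: Autonomy
    criticality: Criticality

def oversight_tier(project: Project) -> str:
    """Map a project's position on the two axes to an oversight tier."""
    score = project.autonomy.value * project.criticality.value
    if score >= 6:
        return "enhanced review: ethics board sign-off, public disclosure"
    if score >= 3:
        return "standard review: documented safeguards, periodic audits"
    return "baseline review: routine internal reporting"

# An advisory imagery-analysis tool lands in the baseline tier, while an
# autonomous system with lethal outcomes is flagged for enhanced review.
print(oversight_tier(Project("imagery-analysis", Autonomy.ADVISORY, Criticality.OPERATIONAL)))
print(oversight_tier(Project("loitering-munition", Autonomy.AUTONOMOUS, Criticality.LETHAL)))
```

The value of even a simple scheme like this is that it gives the public something concrete to debate: which tiers exist, where each project falls, and whether the safeguards attached to each tier are adequate.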
Additionally, transparency will make it easier to manage relationships with industry partners, another of the JAIC’s core functions. There has been some pushback from the tech workforce regarding military AI research. Project Maven, a joint DoD-Google project that applied AI to imagery analysis, suffered a setback in 2018 when around 4,000 Google employees publicly demanded that the company end the partnership, citing concerns over Google technology being used to develop weapons.[x] Google eventually did, declaring that it would not work on AI projects involving “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”[xi] Since then, the DoD has been working to rebuild relationships across the wider industry, including with Google. A strong defense industrial base is key to the development and acquisition of any new technology, AI or otherwise. When it comes to AI, however, much of the experience and talent is concentrated in Silicon Valley tech companies, which have very different cultures, audiences, and expectations than traditional defense contractors and may be hesitant to work with the DoD. Transparency and open dialogue can go a long way toward building durable relationships with tech companies and their employees, ultimately strengthening the U.S. AI sector.
The development of AI systems will transform nearly every sector and industry in the coming decades, including the military. With the JAIC set to play a fundamental role in this development, it should take this opportunity to commit to transparency and open dialogue. The resulting conversation will help develop domestic and international norms surrounding the military uses of AI and also help build trust with private industry players who will be central to the development of a robust and ethical AI sector.
Bibliography
[i] Yasmin Tadjdeh, “JUST IN: Joint Artificial Intelligence Center Embarks on Next Chapter,” National Defense Magazine, November 6, 2020, https://www.nationaldefensemagazine.org/articles/2020/11/6/joint-artificial-intelligence-center-embarks-on-next-chapter.
[ii] Yasmin Tadjdeh, “Joint Artificial Intelligence Center Keeps Branching Out,” National Defense Magazine, November 3, 2020, https://www.nationaldefensemagazine.org/articles/2020/11/3/joint-artificial-intelligence-center-keeps-branching-out.
[iii] “Artificial Intelligence and National Security,” Congressional Research Service, 2020, https://fas.org/sgp/crs/natsec/R45178.pdf.
[iv] “Doctrine for the Armed Forces of the United States,” Department of Defense, July 12, 2017, https://www.jcs.mil/Portals/36/Documents/Doctrine/pubs/jp1_ch1.pdf.
[v] “Joint All-Domain Command and Control (JADC2),” Congressional Research Service, 2020, https://fas.org/sgp/crs/natsec/IF11493.pdf.
[vi] Yasmin Tadjdeh, “Joint Artificial Intelligence Center Keeps Branching Out,” National Defense Magazine, November 3, 2020, https://www.nationaldefensemagazine.org/articles/2020/11/3/joint-artificial-intelligence-center-keeps-branching-out.
[vii] “Country Views on Killer Robots,” Campaign to Stop Killer Robots, October 25, 2019, https://www.stopkillerrobots.org/wp-content/uploads/2019/10/KRC_CountryViews_25Oct2019rev.pdf.
[viii] “Open Letter on Autonomous Weapons,” Future of Life Institute, March 15, 2019, https://futureoflife.org/open-letter-autonomous-weapons; “Lethal Autonomous Weapons Pledge,” Future of Life Institute, accessed November 9, 2020, https://futureoflife.org/lethal-autonomous-weapons-pledge/.
[ix] “Summary of the 2018 Department of Defense Artificial Intelligence Strategy,” Department of Defense, 2018, https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF.
[x] Daisuke Wakabayashi and Scott Shane, “Google Will Not Renew Pentagon Contract That Upset Employees,” The New York Times, June 1, 2018, https://www.nytimes.com/2018/06/01/technology/google-pentagon-project-maven.html.
[xi] Will Knight, “Military Artificial Intelligence Can Be Easily and Dangerously Fooled,” MIT Technology Review, October 21, 2019, https://www.technologyreview.com/2019/10/21/132277/military-artificial-intelligence-can-be-easily-and-dangerously-fooled/.