A US soldier takes biometric information. Photo Credit: Defense Systems Information Analysis Center, Cpl. Alejandro Pena
In recent months, US politicians and activists have called for a nationwide ban on facial recognition systems for local, state, and federal law enforcement agencies.[i] These calls follow years of activism from researchers documenting the tendency of facial recognition algorithms to misidentify Black, Indigenous, and people of color (BIPOC).[ii] Organizations such as the Algorithmic Justice League and Fight for the Future, for instance, have argued that unregulated facial recognition software contributes to social injustice by amplifying discrimination.[iii] In the wake of the George Floyd protests and widespread unrest over police brutality, lawmakers are beginning to take action. In June, Congressman Donald Beyer (D-VA) introduced H.R. 7235, a bill that would prohibit the use of facial recognition software on any image obtained from law enforcement body camera footage.[iv] Earlier, in February, Senator Jeff Merkley (D-OR) had introduced legislation to place a temporary moratorium on federal agencies’ use of facial recognition to identify individuals in the United States, at least until Congress could enact legislation with guidelines.[v] Against this backdrop, IBM, Amazon, and Microsoft announced they would halt sales of facial recognition technology to police departments in the United States.[vi]
Less attention has been paid, however, to the military’s use of these systems and its ongoing efforts to expand facial recognition technology. To ensure the ethical use of facial recognition on the battlefield as well as at home, the military must be more transparent about when and how it uses the technology. Congress, in turn, should deploy its power of the purse and its oversight authority to closely monitor the military’s use and development of facial recognition. Indeed, the same algorithmic racial biases documented in American law enforcement likely permeate the face recognition systems of US Central Command (CENTCOM) and other military organizations. As Joy Buolamwini describes in her viral TED Talk, algorithmic bias spreads quickly when developers train their algorithms on the same biased face datasets.[vii] Combined with the military’s firepower, these biases may have deadly consequences. In this arena, ethics regulations and supervision must come before and during implementation, not after misuse comes to light.[viii]
The Problems with Facial Recognition Go Beyond Inaccuracy
Numerous studies over the past half decade have documented that facial recognition systems produce markedly higher error rates when identifying BIPOC, women, and children.[ix] In the United States, faulty facial recognition technology has already led to misidentification. Earlier this year, Detroit police arrested Robert Williams after facial recognition software falsely matched his driver’s license photo with surveillance footage of a local robbery.[x]
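To make concrete what such audits measure, the short Python sketch below compares false match rates across demographic groups, the core disparity the NIST evaluation cited above documented. It is a minimal illustration, not the NIST methodology itself: the records, the group labels, and the counts are all invented for the example, and a real audit runs millions of genuine and impostor image-pair comparisons.

    # Minimal, hypothetical demographic error-rate audit. All records
    # below are invented; a real audit draws on large labeled datasets
    # of genuine and impostor face-pair comparisons.
    from collections import defaultdict

    # Each record: (demographic group, same person?, matcher said match?)
    comparisons = [
        ("group_a", False, True),  ("group_a", False, False),
        ("group_a", True,  True),  ("group_a", True,  True),
        ("group_b", False, False), ("group_b", False, False),
        ("group_b", True,  True),  ("group_b", True,  False),
    ]

    false_matches = defaultdict(int)   # impostor pairs wrongly accepted
    impostor_pairs = defaultdict(int)  # all impostor pairs, per group

    for group, same_person, said_match in comparisons:
        if not same_person:            # impostor pair
            impostor_pairs[group] += 1
            if said_match:             # wrongly matched: a false match
                false_matches[group] += 1

    # A false match rate that differs sharply across groups is the kind
    # of demographic disparity the studies cited above documented.
    for group in sorted(impostor_pairs):
        rate = false_matches[group] / impostor_pairs[group]
        print(f"{group}: false match rate = {rate:.0%}")

On this toy data the matcher wrongly accepts half of one group’s impostor pairs and none of the other’s; the NIST study found disparities on the order of ten to a hundred times between demographic groups for some algorithms.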
Proponents of facial recognition technology may argue that the algorithms need only be improved before they are implemented. However, as the US case has shown, the problems with facial recognition extend into the realms of surveillance, privacy, and structural racism. As Tawana Petty, a fellow at Stanford’s Digital Civil Society Lab, put it, “for Black people, surveillance ain’t safety.”[xi] Petty describes the long history of surveillance as a means to police and control Black people.[xii] Lantern laws, for example, required enslaved Black people to carry lit lanterns when walking at night unaccompanied by a white person, so that they could be easily identified and surveilled.[xiii] Facial recognition technologies, especially when wielded by the military and law enforcement, thus extend a long history of racist surveillance. “Perfecting” or refining the algorithms cannot address the racism and bias inherent in when and how such systems are put to use.
While many researchers have demonstrated the algorithmic flaws in facial recognition, others have attempted to advance the technology in troubling ways. In May, researchers at Harrisburg University of Science and Technology claimed to have created unbiased facial recognition software that could predict criminality based solely on a facial image.[xiv] In the late nineteenth century, British eugenicist Francis Galton attempted much the same thing but was unable to distinguish a precise criminal “type.”[xv] Beyond criminality, the Israeli company Faception uses facial images to create individual personality scorecards built on eight personality types, or “classifiers.”[xvi] These classifiers include the bingo player, academic researcher, pedophile, and terrorist.[xvii] Facial recognition systems such as those from Faception and Harrisburg University ultimately rely on and reinforce stereotypes and racism. For Pentagon officials, however, technology that claims to predict criminality or terrorist potential from a facial image alone may prove tempting.
How the Military Uses Facial Recognition
A lack of publicly available information, combined with limited press scrutiny and public awareness, prevents an adequate evaluation of the ethical questions surrounding the military’s use of face recognition.[xviii] According to publicly accessible information, CENTCOM currently employs the Army’s Video Identification, Collection, and Exploitation (VICE) software to identify individuals who might threaten Department of Defense (DoD) missions.[xix] VICE has reportedly generated 25,000 biometric records and matched its data against the Pentagon’s Biometrically Enabled Watchlists.[xx] VICE has also enabled CENTCOM to cull publicly available imagery from media sources and build a database of individuals.[xxi]
In addition to VICE, the Army’s Defense Forensics and Biometrics Agency (DFBA) oversees the vast Automated Biometric Identification System, which houses some 7.4 million biometric profiles of people who have crossed paths with the US military abroad.[xxii] Identity profiles include the facial images, DNA, fingerprints, and other biometric data of a wide range of individuals, from allied soldiers working alongside US troops to terrorists attempting to harm them.[xxiii] Even these few details show that the military is vastly expanding the reach of its face recognition program without divulging the information necessary to evaluate its potential impact on human rights and other ethical concerns.
At a Breakneck Pace, Technological Innovation Trumps Ethics
Keen to stay on the cutting edge, the DoD has a history of developing and fielding new technology on the battlefield without examining the ethical implications of its use. In the 1960s, Dow Chemical joined with the Pentagon under a $5 million contract to produce napalm.[xxiv] The chemical quickly gave rise to the most searing images of the Vietnam War, notably the photograph of a young Vietnamese girl, naked and covered in napalm burns, running down a road, which shook the conscience of the American public. Popular outcry soon reached the international stage, and in 1980 the United Nations banned napalm’s use against civilians.[xxv] A similar story may play out with facial recognition technology, ending in the same questions of how we got here and the same national embarrassment.
Ethical controversy surrounding the proliferation of drones tells a similar cautionary tale of hasty military adoption. Unmanned aircraft quickly became the go-to, cost-efficient alternative to conventional troop forces, largely outpacing regulation and oversight and leaving both Republican and Democratic administrations vulnerable to criticism on ethical grounds. Under President Obama, the administration categorized all military-age males killed by drone strikes as combatants unless explicit intelligence posthumously proved otherwise.[xxvi] President Trump likewise drew criticism in 2019 when he revoked a provision requiring annual public reports on the number of civilians killed in drone strikes.[xxvii] For some tech sector employees, the DoD’s use of drones underscored the inadequacy of policies to protect civilians: in 2018, Google announced it would not renew its contract for Project Maven, a DoD initiative that applied artificial intelligence (AI) to the analysis of drone footage.[xxviii]
Military officials often frame discussions of new technologies as part of a battle for strategic dominance, a dangerous tendency that binds high-speed technological advancement to national security. In a July news release posted on the DoD’s website, for example, Acting Director of the Joint Artificial Intelligence Center Nand Mulchandani argued that the US military still leads in AI technology despite facing “formidable technological competitors and challenging strategic environments.”[xxix] He emphasized that the United States does not engage in the kind of citizen surveillance and unregulated facial recognition for which China has become known.[xxx] Yet Mulchandani’s statement contradicts the sentiments of DFBA Director Glenn Krizay, who, in a closed-door presentation at the 2019 Identity Management Symposium, described an interagency global surveillance system that could match biometrics from a suspicious person in Detroit against DoD data collected on “some mountaintop in Asia.”[xxxi]
The Pentagon also appears to be charging ahead with plans to bring facial recognition to the battlefield. In 2016, US Special Operations Command (SOCOM) began the Advanced Tactical Facial Recognition at a Distance project, which aims to develop a portable device that can identify individuals up to one kilometer away.[xxxii] This year, the DoD awarded a $4.5 million contract to Polaris Sensor Technologies and Cyan Systems to build a facial recognition device that can see in the dark using polarization-enhanced thermal cameras.[xxxiii] When, in what scenarios, and against whom the military will employ these technologies remains unclear. If the past provides any insight, they will likely be deployed without proper oversight or input from ethics experts.
The Way Forward
US police use of facial recognition clearly demonstrates the racial bias of these technologies and foreshadows the ethical hazards of a global surveillance system maintained and monitored entirely by the US government. The military, with its history of sidelining ethics in favor of technological supremacy on the battlefield, cannot be trusted to protect the liberties and identities of civilians overseas. Given the racist history behind face recognition, its biased applications today should come as no surprise.
The same companies and lawmakers that put the brakes on these technologies domestically must place limits on military use of facial recognition as well. Employees at companies such as Leidos and Ideal Innovations Incorporated, which help maintain the DoD’s biometric database, should press their executives to increase transparency and push for regulation.[xxxiv] Similarly, employees at tech companies such as IBM and Microsoft should advocate extending the moratorium on facial recognition sales to cover the US military. Such initiatives have borne fruit before, as when Google employees pressured the company to pledge that it would not develop AI weapons.[xxxv] Lastly, by placing a temporary moratorium on the military’s use and development of facial recognition, Congress can give itself and the American public time to decide what the ethical standards for the technology should be. A pause would also give the international community a chance to develop regulations before facial recognition becomes ubiquitous on the battlefield and abroad. Ultimately, the timely development of ethical standards will determine the future of facial recognition technology.
Bibliography
[i] Tawana Petty, “Defending Black Lives Means Banning Facial Recognition,” Wired, July 10, 2020, accessed September 18, 2020, https://www.wired.com/story/defending-black-lives-means-banning-facial-recognition/.
[ii] “NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software,” NIST, December 19, 2019, accessed September 24, 2020, https://www.nist.gov/news-events/news/2019/12/nist-study-evaluates-effects-race-age-sex-face-recognition-software.
[iii] “About,” Algorithmic Justice League, accessed September 29, 2020, https://www.ajl.org/; “Ban Facial Recognition,” Fight for the Future, accessed September 29, 2020, https://www.banfacialrecognition.com/.
[iv] “H.R. 7235 – Stop Biometric Surveillance by Law Enforcement Act,” Congress.gov, accessed September 24, 2020, https://www.congress.gov/bill/116th-congress/house-bill/7235/text?r=50&s=1.
[v] “S. 3284 – Ethical Use of Facial Recognition Act,” Congress.gov, accessed September 25, 2020, https://www.congress.gov/bill/116th-congress/senate-bill/3284/text?q=%7B%22search%22%3A%5B%22facial+recognition%22%5D%7D&r=3&s=1.
[vi] Larry Magid, “IBM, Microsoft and Amazon Not Letting Police Use Their Facial Recognition Technology,” Forbes, June 12, 2020, accessed September 25, 2020, https://www.forbes.com/sites/larrymagid/2020/06/12/ibm-microsoft-and-amazon-not-letting-police-use-their-facial-recognition-technology/#147bc0981887.
[vii] Joy Buolamwini, “How I’m fighting bias in algorithms,” November 2016, TEDxBeaconStreet, Boston, Massachusetts, 8:44. https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms?language=en.
[viii] Joy Buolamwini, “We Must Fight Surveillance to Protect Black Lives,” One Zero, June 3, 2020, accessed September 28, 2020, https://onezero.medium.com/we-must-fight-face-surveillance-to-protect-black-lives-5ffcd0b4c28a.
[ix] “NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software,” NIST, December 19, 2019, accessed September 24, 2020, https://www.nist.gov/news-events/news/2019/12/nist-study-evaluates-effects-race-age-sex-face-recognition-software.
[x] Bobby Allyn, “‘The Computer Got It Wrong’: How Facial Recognition Led to False Arrest of Black Man,” NPR, June 24, 2020, accessed September 24, 2020, https://www.npr.org/2020/06/24/882683463/the-computer-got-it-wrong-how-facial-recognition-led-to-a-false-arrest-in-michig.
[xi] Tawana Petty, “Defending Black Lives Means Banning Facial Recognition,” Wired, July 10, 2020, accessed September 25, 2020, https://www.wired.com/story/defending-black-lives-means-banning-facial-recognition/.
[xii] Ibid.
[xiii] Claudia Garcia-Rojas, “The Surveillance of Blackness: From the Trans-Atlantic Slave Trade to Contemporary Surveillance Technologies,” Truthout, March 3, 2016, accessed September 25, 2020, https://truthout.org/articles/the-surveillance-of-blackness-from-the-slave-trade-to-the-police/.
[xiv] Aaron Holmes, “Researchers said their ‘unbiased’ facial recognition could identify future criminals – then deleted the announcement after backlash,” Business Insider, May 7, 2020, accessed September 25, 2020, https://www.businessinsider.com/harrisburg-university-lab-facial-recognition-identify-future-criminals-2020-5.
[xv] Sahil Chinoy, “The Racist History Behind Facial Recognition,” The New York Times, July 10, 2019, accessed September 20, 2020, https://www.nytimes.com/2019/07/10/opinion/facial-recognition-race.html.
[xvi] Ibid.
[xvii] “Our Technology,” Faception, accessed September 25, 2020, https://www.faception.com/about-us.
[xviii] Dave Gershgorn, “EXCLUSIVE: This is How the U.S. Military’s Massive Facial Recognition System Works,” One Zero, November 6, 2019, accessed September 28, 2020, https://onezero.medium.com/exclusive-this-is-how-the-u-s-militarys-massive-facial-recognition-system-works-bb764291b96d.
[xix] Lauren C. Williams, “How the Army is advancing facial recognition technology,” The Business of Federal Technology, August 30, 2019, accessed September 25, 2020, https://fcw.com/articles/2019/08/30/facial-recognition-vice-vibes-army.aspx.
[xx] Ibid.
[xxi] Ibid.
[xxii] Dave Gershgorn, “EXCLUSIVE: This is How the U.S. Military’s Massive Facial Recognition System Works,” One Zero, November 6, 2019, accessed September 28, 2020, https://onezero.medium.com/exclusive-this-is-how-the-u-s-militarys-massive-facial-recognition-system-works-bb764291b96d.
[xxiii] Ibid.
[xxiv] Kevin Roose, “Why Napalm Is a Cautionary Tale for Tech Giants Pursuing Military Contracts,” The New York Times, March 4, 2019, accessed September 24, 2020, https://www.nytimes.com/2019/03/04/technology/technology-military-contracts.html.
[xxv] Robert Alden, “U.N. Committee Urges a Ban on Napalm,” The New York Times, October 18, 1972, accessed September 28, 2020, https://www.nytimes.com/1972/10/18/archives/un-committee-urges-a-ban-on-napalm.html.
[xxvi] Conor Friedersdorf, “Under Obama, Men Killed by Drones Are Presumed to be Terrorists,” The Atlantic, May 29, 2012, accessed September 25, 2020, https://www.theatlantic.com/politics/archive/2012/05/under-obama-men-killed-by-drones-are-presumed-to-be-terrorists/257749/.
[xxvii] “The Secret Death Toll of America’s Drones,” The New York Times, March 30, 2019, accessed September 24, 2020, https://www.nytimes.com/2019/03/30/opinion/drones-civilian-casulaties-trump-obama.html.
[xxviii] Nick Statt, “Google reportedly leaving Project Maven military AI program after 2019,” The Verge, June 1, 2018, accessed September 28, 2020, https://www.theverge.com/2018/6/1/17418406/google-maven-drone-imagery-ai-contract-expire.
[xxix] C. Todd Lopez, “Where it Counts, U.S. Leads in Artificial Intelligence,” Department of Defense, July 9, 2020, accessed September 23, 2020, https://www.defense.gov/Explore/News/Article/Article/2269200/where-it-counts-us-leads-in-artificial-intelligence/.
[xxx] Ibid.
[xxxi] Dave Gershgorn, “EXCLUSIVE: This is How the U.S. Military’s Massive Facial Recognition System Works,” One Zero, November 6, 2019, accessed September 28, 2020, https://onezero.medium.com/exclusive-this-is-how-the-u-s-militarys-massive-facial-recognition-system-works-bb764291b96d.
[xxxii] Luke Dormehl, “U.S. military face recognition could identify people from 1 km away,” Digital Trends, February 18, 2020, accessed September 25, 2020, https://www.digitaltrends.com/cool-tech/military-facial-recognition-tech-kilometer/.
[xxxiii] Dave Gershgorn, “The Military Is Building Long-Range Facial Recognition That Works in the Dark,” One Zero, January 13, 2020, accessed September 28, 2020, https://onezero.medium.com/the-military-is-building-long-range-facial-recognition-that-works-in-the-dark-4f752fa713e6.
[xxxiv] Dave Gershgorn, “EXCLUSIVE: This is How the U.S. Military’s Massive Facial Recognition System Works,” One Zero, November 6, 2019, accessed September 28, 2020, https://onezero.medium.com/exclusive-this-is-how-the-u-s-militarys-massive-facial-recognition-system-works-bb764291b96d.
[xxxv] Nick Statt and James Vincent, “Google pledges not to develop AI weapons, but says it will still work with the military,” The Verge, June 7, 2018, accessed September 28, 2020, https://www.theverge.com/2018/6/7/17439310/google-ai-ethics-principles-warfare-weapons-military-project-maven.