Racism is Systemic in Artificial Intelligence Systems, Too

Artificial intelligence systems can deepen existing systemic inequalities because of how easily bias seeps into them. Photo Credit: Geralt/Pixabay

When many people think of artificial intelligence (AI) systems, they think of robots or self-driving cars.  AI often elicits a sense of amazement, of wonder, of possibility for the future.  But just as AI offers advancements, there is also the potential for a bleak future – machines are prone to bias and racism.  These machines learn by running training data through algorithms, each crafted by human handlers.  Therefore, if the data inputted into the system is biased, the result will be biased too. 

This is dangerous, and the threat needs to be addressed before biased AI systems become ubiquitous.  Ultimately, AI systems have the potential to deepen existing systemic inequalities, particularly in industries like healthcare, employment, and criminal justice.  This injustice could breed growing distrust in private and governmental institutions as marginalized groups face further discrimination.  A deep-seated distrust in these important institutions is detrimental to American national security, as democracy requires a certain level of trust from its citizens to function.  As such, we need to recognize and combat these biases to prevent discrimination and racist actions by the machines that will likely be a large part of our future.

It is important to focus on private companies when considering AI because of their dominance in research and development, as well as their aim to proliferate this technology.  There are already countless examples of bias in private companies’ AI practices.  For example, Amazon recently created a facial recognition system, “Rekognition,” that was proficient at detecting lighter-skinned men but had trouble identifying darker-skinned women and men.[1]  Facial recognition AI programs learn from training data, in this case millions of pictures inputted by humans.  The service misidentified women as men 19% of the time and darker-skinned women as men 31% of the time, but for lighter-skinned males, there was no error.[2]  While the exact cause is unknown, the disparity was likely due to a skewed sample of training pictures, with more featuring lighter-skinned people than darker-skinned people.[3]

Amazon has also had trouble with a biased AI recruiting tool, a system that scanned resumes to help rank job candidates.  The tool used data from past hires to pick future qualified candidates, but because most people hired historically were men, the system had taught itself that male candidates were preferable.[4]  When Amazon realized this problem, engineers tried to fix it by writing a “neutral” algorithm that was gender-blind.[5]  Unfortunately, they found that the gender-based discrimination was too deeply ingrained in the system for an easy technical fix, and Amazon pulled the program.[6]

Bias in AI has also bled into the criminal justice system.  A program used by the United States court system, Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), was found to mistakenly label Black defendants as likely to reoffend – erroneously flagging them at a rate of 45%, compared to 24% for white defendants.[7]  Investigative journalism organization ProPublica found that the system relied on data that reflected human prejudices, such as arrest records, postcodes, social affiliations, and income levels.[8]  In other words, ProPublica found that this software – used across the country to predict future criminals – was biased against Black people because it relied on training data imbued with systemic bias.[9]
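At its core, ProPublica’s analysis compared false positive rates across racial groups: how often people who did not go on to reoffend were nonetheless flagged as high-risk.  A minimal sketch of that kind of disaggregated audit is below; the records and field names are invented for illustration and are not ProPublica’s actual data.

```python
# Sketch of a disaggregated false-positive-rate audit, in the spirit of
# ProPublica's COMPAS analysis.  All records here are invented.

def false_positive_rate(records):
    """Share of people who did NOT reoffend but were flagged high-risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    if not non_reoffenders:
        return 0.0
    flagged = sum(1 for r in non_reoffenders if r["flagged_high_risk"])
    return flagged / len(non_reoffenders)

# Hypothetical audit rows; a real audit would use thousands of records
# pulled from the tool's actual outputs.
records = [
    {"group": "Black", "flagged_high_risk": True,  "reoffended": False},
    {"group": "Black", "flagged_high_risk": False, "reoffended": False},
    {"group": "white", "flagged_high_risk": False, "reoffended": False},
    {"group": "white", "flagged_high_risk": False, "reoffended": False},
]

for group in ("Black", "white"):
    subset = [r for r in records if r["group"] == group]
    print(f"{group}: false positive rate {false_positive_rate(subset):.0%}")
```

A gap between the two printed rates – like the 45% versus 24% ProPublica reported – is the signature of a biased system.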

How is AI Biased?

It is first important to understand how artificial intelligence works before diving into policy solutions to prevent biased AI systems.  AI involves teaching machines to learn and improve on their own using “neural networks,” computational structures that mirror the human brain.[10]  Machines learn through inputted training data (including images and information) and algorithms that govern how systems process information and make decisions.[11]  Humans provide the training data and construct the algorithms.  Developers like Google and DeepMind hope for AI to become nearly universal in the future, underpinning search engines, driving our cars, directing the policing of our streets, and helping provide healthcare.[12]  Technology, already embedded in our lives to some degree, is likely to become even further ingrained.[13]
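To make the “training data plus algorithm” recipe concrete, here is a minimal sketch of supervised learning: a tiny logistic-regression model – a single-neuron ancestor of the neural networks described above – fit by gradient descent.  The data and dimensions are invented for illustration.

```python
import numpy as np

# Toy supervised learning: the machine's "knowledge" is nothing more than
# weights fit to whatever training data humans happen to provide.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))             # 200 examples, 3 made-up features
true_w = np.array([1.5, -2.0, 0.5])       # hidden pattern in the data
y = (X @ true_w + rng.normal(scale=0.5, size=200) > 0).astype(float)

w = np.zeros(3)                           # the algorithm: logistic regression
for _ in range(1000):                     # trained by gradient descent
    p = 1 / (1 + np.exp(-(X @ w)))        # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / len(y)     # nudge weights toward the data

# The model now "decides" based purely on patterns in its training data --
# including any bias those data happened to contain.
print("learned weights:", w.round(2))
```

The point of the sketch is that nothing in the training loop questions the data; whatever regularities the humans supplied, biased or not, are exactly what the weights encode.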

The fundamental problem is that the information we feed the machines reflects the biases and inequalities in our own society.[14]  Machines on their own are not racist, but if we give them biased data or algorithms, their decisions will amplify discrimination.  There are two main causes of algorithmic bias: (1) historical human biases and (2) unrepresentative or incomplete training data and algorithms.[15]

The first cause of AI bias is rooted in historical human prejudices.  Human biases are shaped by deeply embedded prejudices against certain groups of people, and these biases can be amplified within computer models.[16]  AI systems thus absorb existing biases in areas including healthcare, criminal justice, and education.  For example, in the aforementioned COMPAS algorithm, if Black people are more likely to be arrested in the United States due to historical racism and disparities in policing practices, this will be reflected in the training data.[17]  As such, AI systems that predict the likelihood of future criminal acts will also be biased and discriminatory against Black people, as they rely on the systemic racism coded into the data on this group.  Because systemic racism is ingrained in so many areas – everything from income to zip code – it will be extremely difficult to mitigate this problem.
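A toy simulation makes this feedback loop visible: suppose two groups offend at exactly the same underlying rate, but one is policed twice as heavily, so its arrest records – the training labels – are inflated.  A model trained on those arrests faithfully reproduces the policing disparity, not the underlying behavior.  Every number below is invented.

```python
import random
random.seed(1)

UNDERLYING_OFFENSE_RATE = 0.10               # identical for both groups
POLICING_INTENSITY = {"A": 0.3, "B": 0.6}    # group B is policed twice as heavily

def simulate_arrest(group):
    offended = random.random() < UNDERLYING_OFFENSE_RATE
    observed = random.random() < POLICING_INTENSITY[group]
    return offended and observed             # arrest = offense AND being watched

# "Training data": historical arrest records, shaped by policing practices.
history = [(g, simulate_arrest(g)) for g in ["A", "B"] * 10_000]

# A naive "risk model" that simply learns each group's historical arrest rate.
for group in ("A", "B"):
    arrests = [arrested for g, arrested in history if g == group]
    print(f"group {group} predicted risk: {sum(arrests) / len(arrests):.1%}")
# Group B's "risk" comes out roughly double group A's, despite identical behavior.
```

Nothing in the model is explicitly racist; the disparity lives entirely in the labels it was handed.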

The second cause of bias is unrepresentative or incomplete training data or algorithms.  The fault here lies with the humans inputting information into machine learning systems, rather than with systemic biases – handlers have neglected to give the computer a complete picture of the world.[18]  For example, in Amazon’s Rekognition tool, the biases were likely due to a surplus of lighter-skinned photos and not enough darker-skinned photos, resulting in the computer being insufficiently trained.[19]
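One low-cost safeguard against this failure mode is simply to measure representation in the training set before training begins.  The sketch below counts group shares in a hypothetical face dataset; the labels and the 25% threshold are illustrative choices, not an industry standard.

```python
from collections import Counter

def check_representation(labels, min_share=0.25):
    """Flag any group that falls below min_share of the training data.
    The 25% threshold here is an arbitrary, illustrative choice."""
    counts = Counter(labels)
    total = sum(counts.values())
    for group, n in sorted(counts.items()):
        share = n / total
        status = "OK" if share >= min_share else "UNDERREPRESENTED"
        print(f"{group}: {share:.0%} of training images -- {status}")

# Hypothetical skin-tone annotations for a face-recognition dataset:
check_representation(["lighter"] * 800 + ["darker"] * 200)
```

A check this simple would have surfaced the kind of imbalance suspected in the Rekognition training set before the model ever shipped.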

An algorithm itself – the rules governing how a system processes information – can also be unrepresentative.  Recently, an algorithm used by the health services company Optum was found to underestimate the needs of Black patients.[20]  The AI system was not intentionally racist, and the algorithm specifically excluded race; it was only trying to predict which patients would benefit from extra medical care.[21]  However, to identify the patients for future medical care, it looked at how much patients’ treatments would cost the healthcare system based on their previous costs.[22]  But healthcare costs are biased and heavily influenced by systemic racism, and as such, this algorithm was unrepresentative and discriminatory toward Black patients.[23]
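The mechanism can be shown with a toy simulation: if one group faces barriers to care, its members spend less for the same level of sickness, so ranking patients by predicted cost systematically under-selects them.  The patients and numbers below are invented, not Optum’s data.

```python
import random
random.seed(2)

# Invented patients: sickness is distributed identically in both groups, but
# group B faces access barriers, so spending understates its true need.
patients = []
for _ in range(1000):
    for group, access in (("A", 1.0), ("B", 0.6)):
        sickness = random.uniform(0, 10)      # true health need
        cost = sickness * access              # spending also reflects access
        patients.append({"group": group, "sickness": sickness, "cost": cost})

# A cost-proxy program: offer extra care to the 200 highest-COST patients.
by_cost = sorted(patients, key=lambda p: p["cost"], reverse=True)[:200]
print(f"group B share when ranked by cost: "
      f"{sum(p['group'] == 'B' for p in by_cost) / 200:.0%}")   # well under 50%

# Ranking by measured health NEED instead restores parity.
by_need = sorted(patients, key=lambda p: p["sickness"], reverse=True)[:200]
print(f"group B share when ranked by need: "
      f"{sum(p['group'] == 'B' for p in by_need) / 200:.0%}")   # about 50%
```

The algorithm never sees race, yet the cost proxy quietly smuggles in every disparity that shaped past spending.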

What Does Biased AI Mean For National Security?

Policymakers should be concerned about bias in AI harming American national security for a few different reasons.  To begin, of course, it hurts the American people to suffer discrimination that risks resulting in unfair prison sentences, substandard healthcare, and difficulty securing a job.  For example, one can imagine how painful it would be to be picked up by the police and placed in prison based purely on skin color and a mistaken read by an AI system, or to receive an incorrect suggestion from a machine about proper healthcare treatments.

But there is also a larger national security issue at stake: the potential for distrust in private and governmental institutions.  Inequity within our society always matters, but the stakes are even higher with AI – if this technology is imbued in our social and political institutions, patterns of exclusion will be exacerbated on a grander scale, as will the distrust.[24]  If Americans cannot trust institutions to be fair and unbiased due to the omnipresence of biased AI technology, there could be social uproar.  Americans (rightfully) protest when discrimination or inequality is hurting people, whether in the recent Black Lives Matter movement against police brutality or the Civil Rights-era fight against segregation.  Protests could happen on an even larger scale here too.  Polarization and deep systemic racial tensions would worsen, and discriminated-against groups would be hard-pressed to trust the very institutions employing this technology.  The symbiotic trust that democracies require from their citizens to function effectively might be broken.  Overall, values that the United States strives to uphold on the global stage, such as opportunity and transparency, could be in danger.

Furthermore, foreign adversaries could exploit these existing biases.  Countries with extensive information operations and cyberwarfare programs, like Russia and China, could deliberately use biased data sets or algorithms to exacerbate polarization within the United States.[25]  Because the flaw of bias in AI is inherent and difficult to fix, malicious actions from foreign actors would be plausibly deniable and particularly effective.[26]  In other words, by “poisoning” algorithms and tampering with data, foreign actors could deliberately use bias to destabilize the United States by inflaming racial tensions.[27]

Unfortunately, biased AI is not just a risk to the United States but also to the globe.  Autocratic regimes could exploit this discrimination to maintain control over minority populations.  The Chinese Communist Party is already doing this in Xinjiang – government officials are employing biased surveillance technology to identify and track the minority Uyghur population.[28]  This AI system looks specifically for Uyghurs based on their appearance and records their locations.[29]  In general, police or government officials could hide behind machines’ biased decisions, blaming the machines for any purposeful mistakes used to discriminate against a group of people.[30]  Therefore, biased technological systems could allow autocratic regimes to consolidate power by exploiting existing divides.  As the United States strives to uphold human rights and prevent genocide globally, this flaw in AI could give autocrats an edge.  The problem is made all the worse if the United States cannot uphold values like trust and equality domestically, weakening its standing as an alternative to the discrimination of autocracies.

How Can We Prevent Bias in AI?

Going forward, it is of the utmost importance to recognize both the technical and societal biases that are corrupting AI technology in order to prevent discrimination.  Neither technical fixes nor societal fixes alone are enough; the two approaches must be pursued in conjunction.[31]  These recommendations are primarily directed at private sector companies, as they dominate the research and development of AI systems, but governmental regulations to ensure bias is eliminated should be considered as well.  While this is a complicated issue without an easy solution, there are three steps AI companies should begin to take:

Rework algorithms and training data

First, on the technical side, care should be taken to provide machines with algorithms and training data that are as representative as possible.  Engineers should rigorously test, monitor, and audit for bias and discrimination.[32]  Research will have to expand into a wider social analysis of how the data is used in context, and what areas might be particularly prone to systemic racism.[33]  At the same time, algorithms should be reworked as needed.  In the Optum healthcare example, AI researchers corrected the bias by changing the algorithm’s target – instead of predicting which patients would cost the most in the future, it predicted patients’ future health conditions.[34]  Thus, they eliminated the historically biased “healthcare cost” variable and focused on future health conditions.  Once a bias is detected, it can be corrected, but we must be on the lookout in the first place.
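In practice, “rigorously test, monitor, and audit” can mean wiring fairness checks into the same automated test suites that already gate software releases.  Below is a sketch of what such a regression test might look like; the metric, the 5-point threshold, and the groups are illustrative choices, not an industry standard.

```python
def error_rate(predictions, truths):
    """Fraction of predictions that disagree with ground truth."""
    return sum(p != t for p, t in zip(predictions, truths)) / len(truths)

def assert_error_parity(results_by_group, max_gap=0.05):
    """Fail the build if any two groups' error rates differ by more than
    max_gap.  The 5-point threshold is an illustrative choice."""
    rates = {g: error_rate(p, t) for g, (p, t) in results_by_group.items()}
    gap = max(rates.values()) - min(rates.values())
    assert gap <= max_gap, f"error-rate gap {gap:.1%} exceeds {max_gap:.0%}: {rates}"

# Hypothetical held-out evaluation results per demographic group; this call
# raises an AssertionError, blocking a release until the gap is closed.
assert_error_parity({
    "lighter-skinned men":  ([1, 0, 1, 0], [1, 0, 1, 0]),   # 0% error
    "darker-skinned women": ([1, 0, 0, 0], [1, 0, 1, 0]),   # 25% error
})
```

Treating a fairness gap as a failing test, rather than a post-hoc discovery by journalists or researchers, is the kind of routine engineering discipline this recommendation calls for.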

Expand explainability

A second way that bias in AI can be mitigated is by advancing AI’s explainability.  A current technical flaw in many AI systems is that they cannot explain how they arrived at a result.  Improving this will take research and development, but once solved, transparency will allow humans to understand what biases potentially impacted a computer’s decision.[35]  Having a “human-in-the-loop” would create a way to pinpoint how a computer made a decision and double-check for any biases that may have been involved.[36]
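Rough techniques in this direction already exist.  One of the simplest is permutation importance: shuffle one input feature at a time and see how much the model’s accuracy degrades, revealing which features actually drive its decisions – for instance, whether a supposedly neutral proxy like zip code dominates.  The model and feature names below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# An invented "black box": a fixed linear scorer standing in for any model.
weights = np.array([0.1, 2.0, 0.05])          # the second feature dominates
feature_names = ["income", "zip_code_risk", "age"]

X = rng.normal(size=(500, 3))
y = (X @ weights > 0).astype(int)             # labels the scorer fits exactly

def accuracy(data):
    return float(((data @ weights > 0).astype(int) == y).mean())

baseline = accuracy(X)
for i, name in enumerate(feature_names):
    shuffled = X.copy()
    shuffled[:, i] = rng.permutation(shuffled[:, i])   # destroy one feature
    print(f"{name}: accuracy drops {baseline - accuracy(shuffled):.1%} when shuffled")

# A large drop for "zip_code_risk" tells the human-in-the-loop that this
# proxy variable, not income or age, is driving the model's decisions.
```

A human reviewer armed with that readout can then ask whether the dominant feature is a legitimate signal or a stand-in for race.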

Promote intersectional hiring

A third way to prevent bias is to improve workplace diversity.  Social change is needed in addition to the technical measures mentioned above.  A diverse AI workforce would be better equipped to anticipate and spot bias, as well as engage with the communities affected.[37]  Research shows that there is a diversity crisis in the AI sector – only 2.5% of Google’s workforce is Black, and Facebook and Microsoft are each at 4%.[38]  Therefore, changing hiring practices to maximize diversity, while expanding recruitment opportunities (like investing in historically Black colleges and universities), would help eliminate bias.  In other words, the technical challenges within AI systems are linked to historic issues of discrimination in the workforce, and promoting intersectional hiring initiatives could help lessen this problem.[39]

Bias in AI is a difficult problem to solve, but identifying it is an important step to eliminating discrimination.  If AI is to play a large role in our health, safety, and education in the future, we must be cognizant of how easy it is for machines to exacerbate existing inequities, and the potential danger this poses to the United States’ national security. 

Bibliography

[1] Sarah West et al., “Discriminating Systems: Gender, Race, and Power in AI,” AI Now Institute, 2019, p. 8, https://ainowinstitute.org/discriminatingsystems.pdf.

[2] Cade Metz, “Who Is Making Sure the AI Machines Aren’t Racist?” New York Times, Mar. 15, 2021, https://www.nytimes.com/2021/03/15/technology/artificial-intelligence-google-bias.html.

[3] Matt O’Brien, “Face Recognition Research Fights Amazon Over Biased AI,” AP News, Apr. 3, 2019, https://apnews.com/article/north-america-ap-top-news-artificial-intelligence-ma-state-wire-technology-24fd8e9bc6bf485c8aff1e46ebde9ec1.

[4] Jeffrey Dastin, “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women,” Reuters, Oct. 10, 2018, https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.

[5] Ibid.

[6] Sarah West et al., “Discriminating Systems,” p. 8.

[7] Stephen Buranyi, “Rise of the Racist Robots – How AI Is Learning All of Our Worst Impulses,” The Guardian, Aug. 8, 2017, https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses.

[8] Ibid.

[9] Julia Angwin et al., “Machine Bias,” ProPublica, May 23, 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

[10] Vishal Maini and Samer Sabri, Machine Learning for Humans, Sept. 2, 2017, p. 68.

[11] Ben Buchanan, “The AI Triad and What It Means for National Security Strategy,” Center for Security and Emerging Technology, 2020, p. iii.

[12] Cade Metz, “Who Is Making Sure the AI Machines Aren’t Racist?”

[13] Miles Ellingham, “The Devil Is In the Dataset: Machines Can Be Racist Too,” Independent, Sept. 1, 2020, https://www.independent.co.uk/independentpremium/long-reads/ai-artificial-intelligence-dataset-learning-worst-human-trait-racism-a9674381.html.

[14] Stephen Buranyi, “Rise of the Racist Robots.”

[15] Nicol Turner Lee et al., “Algorithmic Bias Detection and Mitigation: Best Practices and Policies To Reduce Consumer Harms,” Brookings, May 22, 2019, https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms//.

[16] Ibid.

[17] Ibid.

[18] Ibid.

[19] Cade Metz, “Who Is Making Sure the AI Machines Aren’t Racist?”

[20] Carolyn Y. Johnson, “Racial Bias In a Medical Algorithm Favors White Patients Over Sicker Black Patients,” Washington Post, Oct. 24, 2019, https://www.washingtonpost.com/health/2019/10/24/racial-bias-medical-algorithm-favors-white-patients-over-sicker-black-patients/.

[21] Ibid.

[22] Ziad Obermeyer et al., “Dissecting Racial Bias In an Algorithm Used to Manage the Health of Populations,” Science, vol. 366, Oct. 25, 2019, p. 447, https://science.sciencemag.org/content/sci/366/6464/447.full.pdf.

[23] Ibid.

[24] Sarah West et al., “Discriminating Systems,” p. 15.

[25] Douglas Yeung, “Intentional Bias Is Another Way Artificial Intelligence Could Hurt Us,” RAND, Oct. 22, 2018, https://www.rand.org/blog/2018/10/intentional-bias-is-another-way-artificial-intelligence.html.

[26] Ibid.

[27] Ibid.

[28] Paul Mozur, “One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority,” New York Times, Apr. 14, 2019, https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html.

[29] Ibid.

[30] Stephen Buranyi, “Rise of the Racist Robots.”

[31] Sarah West et al., “Discriminating Systems,” p. 32.

[32] Ibid., p. 4.

[33] Ibid.

[34] Carolyn Y. Johnson, “Racial Bias In a Medical Algorithm.”

[35] Stephen Buranyi, “Rise of the Racist Robots.”

[36] James Manyika et al., “What Do We Do About the Biases in AI?” Harvard Business Review, Oct. 25, 2019, https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai.

[37] Ibid.

[38] Sarah West et al., “Discriminating Systems,” p. 3.

[39] Ibid., p. 6.
