
Algorithmic Bias

Algorithmic bias occurs when AI algorithms reflect human prejudices due to biased data or design, leading to unfair or discriminatory outcomes.

A bias is an inclination to prefer or disfavor an individual, group, idea, or thing. Biases against people based on their religion, race, socioeconomic status, gender identity, or sexual orientation are particularly unfair and therefore especially problematic. But all human beings hold such biases, both consciously and unconsciously. Because human bias has led to unfair discrimination in hiring, promoting, housing, health care, lending, criminal sentencing, and many other areas of human endeavor, many hoped that replacing human decision-makers with computers would remedy, or at least minimize, such bias. But we create the algorithms that guide computers’ decision-making, so those algorithms often reflect our biases. This is one form of algorithmic bias, and it arises in the design, testing, and application of computer systems.

Another form of algorithmic bias can arise in artificial intelligence, where computers derive their own decision-making rules after being trained on vast amounts of data. These AI systems “learn” from training data and follow the principle of “garbage in, garbage out”: if an AI-based system is fed faulty or incomplete training data, its predictions will be faulty as well.
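The “garbage in, garbage out” principle can be sketched in a few lines of Python. Everything below is hypothetical and deliberately simplified: a toy “hiring” model that learns nothing but each group’s historical approval rate, so a biased history becomes a biased rule.

```python
from collections import defaultdict

def train(records):
    """Learn each group's historical approval rate from past decisions."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
    for group, approved in records:
        counts[group][0] += approved
        counts[group][1] += 1
    return {group: approvals / total for group, (approvals, total) in counts.items()}

def predict(model, group):
    """Recommend hiring only if the group's historical approval rate is at least 50%."""
    return model[group] >= 0.5

# Hypothetical biased history: group A was approved 80% of the time, group B only 20%.
history = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 2 + [("B", 0)] * 8
model = train(history)
print(predict(model, "A"))  # True: past preference becomes future preference
print(predict(model, "B"))  # False: past exclusion becomes future exclusion
```

No real hiring system is this crude, but the failure mode is the same: the model faithfully reproduces whatever pattern, fair or unfair, the training data contains.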

Unfortunately, examples of algorithmic bias abound. For instance, one company built an AI-based system to handle its hiring processes. But the system was trained on a decade of resumes that came predominantly from white men, so the algorithm penalized resumes containing the word “women’s.”

Another company created an algorithm used by judges to determine bail and sentencing decisions that systematically discriminated against people of color. And yet another algorithm used for clinical evaluations made black patients appear healthier than they were, which kept those patients from being fairly placed on the national kidney transplant waitlist.

The hazards of algorithmic bias are perhaps most obvious in AI-based facial recognition software. Early systems were trained on data sets with an overrepresentation of pale, male faces; the training data failed to represent the full range of human diversity. Consequently, facial recognition systems identify men and people with lighter skin tones more accurately than women and people with darker skin tones. In fact, some systems have tagged black women as “men.” So although we tend to assume that using AI will lead to fair and neutral decisions, this is clearly a dangerous assumption.
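This kind of performance gap is exactly what a per-group accuracy audit surfaces. The sketch below uses made-up numbers purely for illustration; real audits, such as the Gender Shades study, compare commercial systems on curated, demographically balanced benchmarks.

```python
def accuracy_by_group(results):
    """results: (group, correct) pairs -> fraction correct per group."""
    totals, hits = {}, {}
    for group, correct in results:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + int(correct)
    return {group: hits[group] / totals[group] for group in totals}

# Hypothetical audit: 19 of 20 correct for one group, only 14 of 20 for another.
audit = ([("lighter-skinned men", True)] * 19 + [("lighter-skinned men", False)]
         + [("darker-skinned women", True)] * 14 + [("darker-skinned women", False)] * 6)
print(accuracy_by_group(audit))
# {'lighter-skinned men': 0.95, 'darker-skinned women': 0.7}
```

Reporting one overall accuracy number would hide this gap entirely; disaggregating by group is what makes the disparity visible.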

Also of concern is the “black box” problem. We don’t know how deep-learning AI systems make their decisions; the process is invisible to us. This lack of transparency is troubling and can lead to products that reinforce stereotypes and exacerbate explicit and implicit biases.

Fairness advocates urge companies to guard against algorithmic bias by evaluating their training data, making that data subject to public evaluation, disclosing the accuracy of their systems’ decisions, and allowing third-party auditing of their systems and tools.
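The first of those recommendations, evaluating training data, can begin with something as simple as a representation report: measuring each group’s share of the data before any model is trained. The function below is an illustrative sketch with hypothetical labels, not a substitute for a real fairness audit.

```python
from collections import Counter

def representation_report(labels):
    """Return each group's share of a training set's labels."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical training set: 75 examples labeled "men", 25 labeled "women".
report = representation_report(["men"] * 75 + ["women"] * 25)
print(report)  # {'men': 0.75, 'women': 0.25} -- a skew worth flagging before training
```

A skewed report does not prove a model will be biased, but it is a cheap early warning that the data underrepresents some of the people the system will be used on.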

Other advocates for ethical AI, like the Algorithmic Justice League founded by Dr. Joy Buolamwini, also encourage algorithmic justice. Dr. Buolamwini argues for inclusive coding and design teams, inclusive data sets, and more thoughtful consideration of the implications of AI-based systems in general.

So, while we can all agree that minimizing algorithmic bias is desirable, eliminating computer-based bias, like eliminating human bias, will be challenging. But with an emphasis on algorithmic justice, and with transparent AI development and design, AI may become an ally, rather than an opponent, in our efforts to create a just society.

Bibliography

Ifeoma Ajunwa, “Beware of Automated Hiring,” New York Times, Oct. 8, 2019, at https://www.nytimes.com/2019/10/08/opinion/ai-hiring-discrimination.html.

Julia Angwin et al., “Machine Bias,” ProPublica, May 23, 2016, at https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

Dina Bass & Ellen Huet, “Researchers Combat Gender and Racial Bias in Artificial Intelligence,” Bloomberg, Dec. 4, 2017, at https://www.bloomberg.com/news/articles/2017-12-04/researchers-combat-gender-and-racial-bias-in-artificial-intelligence.

Reid Blackman, Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI (2022).

Brian Christian, The Alignment Problem: Machine Learning and Human Values (2020).

Jennifer Conrad, “Joy Buolamwini Called Out Bias in Facial Recognition Systems. Now She’s Urging Caution When Experimenting with AI,” Inc., Mar. 15, 2024, at https://www.inc.com/jennifer-conrad/dr-joy-buolamwini-called-out-bias-in-facial-recognition-systems-now-shes-urging-caution-when-experimenting-with-ai.html.

Mark Coeckelbergh, AI Ethics (2020).

Christina Couch, “Ghosts in the Machine,” PBS NOVA, Oct. 25, 2017, at https://www.pbs.org/wgbh/nova/article/ai-bias/.

Maria De-Arteaga et al., “Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting,” ACM Conference on Fairness, Accountability, and Transparency, Jan. 27, 2019, pp. 120-128.

Luciano Floridi, The Ethics of Artificial Intelligence: Principles, Challenges and Opportunities (2023).

Melissa Hamilton, “The Biased Algorithm: Evidence of Disparate Impact on Hispanics,” American Criminal Law Review, Vol. 56, No. 4, pp. 1553-1578 (2019).

Rebecca Heilweil, “Why Algorithms Can Be Racist and Sexist,” Vox, Feb. 18, 2020.

Nicol Turner Lee et al., “Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms,” Brookings, May 22, 2019, at https://www.brookings.edu/articles/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/.

Michele Loi & Markus Christen, “Insurance Discrimination and Fairness in Machine Learning: An Ethical Analysis,” Philosophy & Technology, March 13, 2021, at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3438823.

Cade Metz, “We Teach A.I. Systems Everything, Including Our Biases,” New York Times, Nov. 11, 2019, at https://www.nytimes.com/2019/11/11/technology/artificial-intelligence-bias.html.

Cade Metz & Adam Satariano, “An Algorithm That Grants Freedom, or Takes It Away,” New York Times, Feb. 9, 2020, at https://www.nytimes.com/2020/02/06/technology/predictive-algorithms-crime.html.

ReNika Moore, “Biden Must Act to Get Racism Out of Automated Decision-Making,” Washington Post, Aug. 9, 2021, at https://www.washingtonpost.com/opinions/2021/08/09/biden-must-act-get-racism-out-automated-decision-making/.

Shazia Siddique et al., “The Impact of Health Care Algorithms on Racial and Ethnic Disparities: A Systematic Review,” Annals of Internal Medicine, Vol. 177, No. 4 (Mar. 12, 2024).

Algorithmic Justice League, https://www.ajl.org.

Joy Buolamwini & Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Proceedings of the Conference on Fairness, Accountability and Transparency, pp. 77-91 (2018).

Joy Buolamwini, “How I’m Fighting Bias in Algorithms,” TED Talk, at https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms?subtitle=en.
