The Perfect (Mis)Match: Algorithms and Intentions

This post is prompted by a forthcoming article in the American Criminal Law Review by Melissa Hamilton, entitled “The Biased Algorithm: Evidence of Disparate Impact on Hispanics.”  Hamilton makes the point that because judges tend to be human beings, and therefore subject to all the decision-making foibles uncovered by behavioral psychology and related fields in recent years, their decisions on whether to grant bail and in what amounts, how long to sentence convicted offenders, and the like are inherently subjective and often flawed.  Therefore, many experts have looked to automated risk assessment as a substitute.  This seems like a very promising development, one that we support.

However, an article in ProPublica found evidence that COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), a risk tool used in many court systems, discriminated against Blacks in that it produced a much higher rate of false positives when predicting recidivism than it did for Whites.  Hamilton’s article finds similar discrimination against Hispanics.  COMPAS is a tool that, she concludes, is “not well calibrated for Hispanics.”
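The false-positive disparity ProPublica reported can be made concrete with a small calculation.  The sketch below uses invented records (the groups, outcomes, and predictions are ours, not COMPAS’s) to show how a tool can flag non-reoffenders in one group as “high risk” far more often than in another:

```python
# Hypothetical example of the disparity ProPublica measured.
# Each record is (group, reoffended, predicted_high_risk); all data is invented.
records = [
    ("A", False, True), ("A", False, True), ("A", False, False), ("A", True, True),
    ("B", False, True), ("B", False, False), ("B", False, False), ("B", True, True),
]

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were flagged as high risk."""
    negatives = [r for r in rows if not r[1]]       # did not reoffend
    flagged = [r for r in negatives if r[2]]        # but flagged high risk
    return len(flagged) / len(negatives)

for group in ("A", "B"):
    rows = [r for r in records if r[0] == group]
    print(group, round(false_positive_rate(rows), 2))  # A: 0.67, B: 0.33
```

In this toy data, non-reoffenders in group A are flagged twice as often as those in group B, which is the shape of the disparity at issue, even if a tool’s overall accuracy looks similar across groups.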

Whether you’re talking Big Data, Machine Learning, Artificial Intelligence, or some related notion, the underlying algorithms and their results can be biased.  We are confident that most people developing risk assessment tools like COMPAS are acting in good faith and certainly without any explicit racial bias.  Sometimes the bias in an algorithm simply reflects the bias of the real world.  And it is possible that even a slightly biased algorithm might easily improve on human decision making, which is also subject to implicit and explicit bias.  Nonetheless, Microsoft researcher Adam Kalai was right when he wrote: “We have to teach our algorithms which are good associations and which are bad the same way we teach our kids.”

Implicit bias can sneak into algorithms in a number of ways. 

  • If you plug biased data into a system, the system will be biased. 
  • Many, if not most, algorithms are deemed proprietary by their creators and are maintained as secrets.  This lack of transparency minimizes the opportunities to detect bias.
  • Databases are also typically kept secret, minimizing access to the large amounts of data AI developers need to train on.  Smaller data sets are more likely to be biased.
  • If you test a system on a biased sample, its results may well be biased.  One facial recognition system misidentified darker-skinned women 35% of the time and darker-skinned men 12% of the time, both much higher rates than for Whites.  Available data sets tend to contain more White faces than others.
  • Biased algorithms can bake bias into an entire system, which can be worse than having multiple human decision makers with varying degrees of bias.
  • Artificial neural networks can now write new AI programs without human input, programs that are beyond our current ability to analyze.
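The point about smaller data sets can be quantified.  As a rough sketch (the rate and sample sizes below are hypothetical), the standard error of an estimated rate grows as the sample shrinks, so a group that is underrepresented in the training data gets noisier, less reliable estimates:

```python
# Why smaller data sets are more error-prone: the standard error of an
# estimated proportion p from n samples is sqrt(p * (1 - p) / n).
import math

def standard_error(p, n):
    """Standard error of an estimated proportion p from n samples."""
    return math.sqrt(p * (1 - p) / n)

# Same true rate, very different sample sizes per group (invented numbers).
print(standard_error(0.3, 10000))  # well-represented group
print(standard_error(0.3, 100))    # underrepresented group: 10x the error
```

With a hundredth of the data, the estimation error is ten times larger, which is one reason a tool trained mostly on one group tends to perform worse on others.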

In her book Weapons of Math Destruction, Cathy O’Neil criticizes a popular recidivism model, the LSI-R (Level of Service Inventory-Revised), which asks questions such as when subjects were first involved with the police, something far more likely in tough parts of town than in the suburbs, even for the same conduct.  She writes:

A person who scores as “high risk” is likely to be unemployed and to come from a neighborhood where many of his friends and family have had run-ins with the law.  Thanks in part to the resulting high score on the evaluation, he gets a longer sentence, locking him away for more years in a prison where he’s surrounded by fellow criminals—which raises the likelihood that he’ll return to prison.  He is finally released into the same poor neighborhood, this time with a criminal record, which makes it that much harder to find a job.  If he commits another crime, the recidivism model can claim another success.  But in fact the model itself contributes to a toxic cycle and helps to sustain it.   

Although the tech industry has much work to do, there is increasing recognition of these problems.  The current draft of a new version of the Association for Computing Machinery (ACM) Code of Ethics and Professional Conduct addresses discrimination in Section 1.4:

1.4 Be fair and take action not to discriminate.

The values of equality, tolerance, respect for others, and justice govern this principle. Computing professionals should strive to build diverse teams and create safe, inclusive spaces for all people, including those of underrepresented backgrounds. Prejudicial discrimination on the basis of age, color, disability, ethnicity, family status, gender identity, labor union membership, military status, national origin, race, religion or belief, sex, sexual orientation, or any other inappropriate factor is an explicit violation of the Code. Harassment, including sexual harassment, is a form of discrimination that limits fair access to the virtual and physical spaces where such harassment takes place.

Inequities between individuals or different groups of people may result from the use or misuse of information and technology. Technologies and practices should be as inclusive and accessible as possible. Failure to design for inclusiveness and accessibility may constitute unfair discrimination.

Good intentions are important, but good results are better.  We urge algorithm creators to watch our new video: https://ethicsunwrapped.utexas.edu/video/implicit-bias.

Resources

Association for Computing Machinery, Code of Ethics and Professional Conduct, available at https://www.acm.org/binaries/content/assets/about/acm-code-of-ethics-and-professional-conduct.pdf.

Julia Angwin et al., “Machine Bias,” ProPublica (May 23, 2016), available at https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

Dina Bass & Ellen Huet, “Researchers Combat Gender and Racial Bias in Artificial Intelligence,” Bloomberg, Dec. 4, 2017, available at https://www.bloomberg.com/news/articles/2017-12-04/researchers-combat-gender-and-racial-bias-in-artificial-intelligence.

Lizette Chapman & Joshua Brustein, “A.I. Has a Race Problem,” Bloomberg Businessweek, July 2, 2018.

Christina Couch, “Ghosts in the Machine,” PBS, Oct. 25, 2017, available at http://couchwins.com/portfolio/ghosts-in-the-machine/.

Hannah Fry, Hello World: Being Human in the Age of Algorithms (2018).

Melissa Hamilton, “The Biased Algorithm: Evidence of Disparate Impact on Hispanics,” 56 American Criminal Law Review (forthcoming).

Will Knight, “Forget Killer Robots—Bias is the Real AI Danger,” M.I.T. Technology Review, Oct. 3, 2017, available at https://www.technologyreview.com/s/608986/forget-killer-robotsbias-is-the-real-ai-danger/.

Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (2016).

Seth Stephens-Davidowitz, Everybody Lies: Big Data, New Data and What the Internet Can Tell Us About Who We Really Are (2017).

Darrell M. West, “The Role of Corporations in Addressing AI’s Ethical Dilemmas,” Brookings, Sept. 13, 2018, available at https://www.brookings.edu/research/how-to-address-ai-ethical-dilemmas/.
