To make sound ethical judgments, people must know the facts. In the realm of artificial intelligence (AI), it is difficult to ascertain with certainty a key fact: whether AI is the most consequential technology in the history of the world, as its proponents ("AI Boosters") claim, or is mainly snake oil and hype, as claimed by "AI Doubters" such as Arvind Narayanan and Sayash Kapoor in their book AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference (2024) and Emily Bender and Alex Hanna in their book The AI Con: How to Fight Big Tech's Hype and Create the Future We Want (2025).
In Part 1 of this blog post, we examined arguments on both sides of the debate, focusing on the case made by these four AI Doubters. This post (Part 2) is prompted by our having read a new and more balanced book by Richard Susskind, How to Think about AI: A Guide for the Perplexed (2025).
Because we often focus these blog posts on behavioral ethics—the psychology of moral decision making that highlights, among other things, the biases that impact humans as they make moral and other types of decisions (see our video at https://ethicsunwrapped.utexas.edu/glossary/behavioral-ethics), it is of particular interest to us that all three books address the psychology of human decision making (as well as the digitization of machine decision making).
Bender and Hanna stress the self-serving bias (see our video at https://ethicsunwrapped.utexas.edu/video/self-serving-bias). Obviously, people often consciously make decisions that advance their own perceived self-interest. And the self-serving bias, the tendency people have to gather, process, and even remember information in a self-serving manner, can operate at an unconscious level as well. Both pairs of authors, Bender and Hanna as well as Narayanan and Kapoor, argue that, whether conscious or unconscious, self-interest is a key driver of the AI hype that their books decry. You can see this in the two quotations that wrap up our previous blog post.
Susskind does not believe that AI is either pure hype or snake oil. He is more optimistic about AI’s potential to improve the world, noting:
“Most ambitiously, some AI enthusiasts believe that artificial intelligence will come to the rescue of humanity and help us meet our most serious challenges—from climate change to cancer, poverty to conflict, space exploration to global education, crowded cities to the erosion of democracy.”
Bender and Hanna have heard this argument before, and they’re not buying it:
…AI boosters want to be unfettered by regulation that might constrain their ability to amass power and capital, but they also sometimes even argue that it's a moral imperative to be able to innovate quickly, because (in their worldview) AI is going to save us all. For example, on an AI panel in 2018, in response to a call for a slower pace of research that leaves time and space to consult with the communities potentially impacted, Oren Etzioni, then CEO of the Allen Institute for Artificial Intelligence, said:
"Are you worried at all that when you slow things down, while you're going through that deliberative process, with the best of motivations, that people are dying in cars and people are dying in hospitals, that people are not getting legal representation in the right way? I think one reason for urgency is commercial incentives, but another reason for urgency is an ethical one. While we in Seattle comfortably debate these fine points of the law and these fine points of fairness, people are dying, people are being deported. So yeah, I'm in a rush, because I want to make the world a better place."
But in the years since Etzioni made those remarks, we haven’t seen miraculous improvements in highway safety, health outcomes, or the treatment of migrants. (p. 177)
Susskind, for his part, dips his toe into the biases that shape human decision making, noting three that he thinks might be warping the views of AI Doubters.
First is something he calls "technological myopia," which he defines as "a tendency, when evaluating the long-term potential of a new technique or technology, to pay excessive attention to the current version and its current limitations." In other words, Bender and Hanna, as well as Narayanan and Kapoor, may be right now, but they may not be right in the future, when the vast benefits of AI become manifest.
Second, Susskind calls attention to "irrational rejectionism," "which rears its unworthy head whenever critics dogmatically dismiss the relevance, utility, or potential of particular systems without taking the trouble to see them in action. Too often, for instance, [Susskind says] I hear people speaking unfavorably about ChatGPT, and I am amazed to learn, after a little probing … that they haven't actually used the systems themselves."
Third, there’s “Not Us Thinking,” an additional explanation for why people resist adopting potentially beneficial new technology, like AI. To explain “Not Us Thinking,” Susskind notes two cognitive biases that sound familiar to us. The initial one appears to us to be a manifestation of the self-serving bias. Susskind has found that all professionals tend to believe that while other white-collar workers are eminently replaceable by AI, their particular profession is peculiarly immune to digital replacement. The next bias that contributes to “Not Us Thinking” is what we call loss aversion (https://ethicsunwrapped.utexas.edu/video/loss-aversion), people’s tendency to hate losses more than they enjoy gains and to sometimes take unusual risks to avoid such losses. Susskind notes that many people resist AI because they take pride in their careers and have invested so much of themselves into those careers that their minds won’t let them believe that AI tools could replace them.
Not to be outdone in the behavioral realm, Narayanan and Kapoor offer several psychological explanations for the enduring influence of what they call “AI Snake Oil” and Bender and Hanna call “AI Hype”:
- Automation Bias: people’s “tendency to over-rely on automated systems, such as when airline pilots followed incorrect advice from an automated failure-detection system.”
- Illusion of Explanatory Depth: “a cognitive bias where individuals believe they understand complex concepts more deeply than they actually do. This false sense of understanding leads to overconfidence (see our video on the overconfidence bias: https://ethicsunwrapped.utexas.edu/video/overconfidence-bias) and, in turn, a failure to ask critical questions or explore alternative explanations.”
- Halo Effect: “our tendency to judge a product or technology based on a few select examples.” (https://ethicsunwrapped.utexas.edu/glossary/halo-effect). Thus, people may see that an AI model defeated a world champion at chess, Go, or Jeopardy and then buy that model, not seeming to care whether it can efficiently do what the specific buyer needs it to be able to do.
- Illusory Truth Effect: occurs when "the mere repetition of inaccurate information can lead us to think it's true." Who among us hasn't heard repeated hyperbolic statements regarding the potential miracles that AI can perform, or soon will be able to perform?
- Anchoring Bias: also referred to as anchoring and adjustment, this bias "refers to the fact that individuals rely heavily on the first piece of information encountered when forming opinions or making decisions. The initial information, or 'anchor,' disproportionately influences later judgments and opinions, even after receiving contradictory information. People can latch onto overblown claims about AI's capabilities made by companies."
- Quantification Bias: people “tend to overvalue quantitative evidence to the detriment of qualitative or contextual evidence about an application.” Therefore, they may grasp onto impressive-sounding accuracy numbers that an AI firm throws out without asking sensible follow-up questions.
Thus, psychological flaws in human decision making arguably contribute to both the problem that Susskind stresses—human reluctance to adopt new AI technology—and the problem that Bender & Hanna and Narayanan & Kapoor emphasize—human susceptibility to baseless AI hype.
We recommend that you read the three books discussed in this blog post and the preceding one. However, we do not guarantee that they will settle for you whether you should join the AI Doubter camp or the AI Booster camp. They haven’t settled the matter for us, but they have given us a lot more to think about.
Sources:
Emily Bender & Alex Hanna, The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want (2025).
Arvind Narayanan & Sayash Kapoor, AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference (2024).
Richard Susskind, How to Think About AI: A Guide for the Perplexed (2025).
Richard Susskind & Daniel Susskind, The Future of the Professions: How Technology Will Transform the Work of Human Experts (2015).
Blog Posts:
“Techno-Optimist or AI Doomer?: Consequentialism and the Ethics of AI”: https://ethicsunwrapped.utexas.edu/techno-optimist-or-ai-doomer-consequentialism-and-the-ethics-of-ai
Videos:
Behavioral Ethics: https://ethicsunwrapped.utexas.edu/glossary/behavioral-ethics
Halo Effect: https://ethicsunwrapped.utexas.edu/glossary/halo-effect
Loss Aversion: https://ethicsunwrapped.utexas.edu/video/loss-aversion
Overconfidence Bias: https://ethicsunwrapped.utexas.edu/video/overconfidence-bias
Self-Serving Bias: https://ethicsunwrapped.utexas.edu/video/self-serving-bias