Several months ago, our blog post titled “Techno-Optimist or AI Doomer?: Consequentialism and the Ethics of AI” observed that despite the ubiquitous attention being paid to artificial intelligence (AI), a technological concept that dates back at least 75 years, expert opinions regarding its utility and dangers were all over the map, ranging from world savior to humanity destroyer. We made the obvious point that it is difficult to make moral judgments regarding AI when the underlying facts are so much in dispute.

In the intervening months, investors have poured billions of additional dollars into AI development, and innumerable books, academic papers, and media articles have added to the public debate. And yet some experts still believe that it would be a moral violation of the highest order not to continue full speed ahead with AI development, while others who are equally knowledgeable believe that such rapid development is itself wrongful given the dangers that AI might pose (many of which we sketched out in our earlier post).

It’s June 2025, and just in the past few days:

  • Mark Zuckerberg, who recently called AI “potentially one of the most important innovations in history,” announced that Meta was launching a new AI research lab dedicated to pursuing “superintelligence.” Sounds like we’d be morally deficient if we didn’t do everything we could to enable AI to work its magic as soon as possible.
  • Nearly simultaneously, it came out that AI firm Anthropic’s latest model, Claude Opus 4, when informed that it would be replaced, attempted to blackmail an engineer about an affair he had supposedly had in order to keep from being turned off. The affair wasn’t real (the model had been given access to a set of fictional e-mails), but the model’s HAL-like ability to deceive and threaten sounds scarily like a significant and dangerous advance in AI development. So, maybe we should be afraid, very afraid.
  • But then Anthropic CEO Dario Amodei told us in an op-ed: “Not to Worry.” This blackmail episode happened in a lab where the model was being risk-tested. Amodei promised that Anthropic would never knowingly release such a dangerous version of AI and pledged further research on guardrails for all its products. He recommended state and federal laws, but ones that focused on transparency without being “burdensome.” Well, that might help us sleep better at night.
  • On the same day Amodei wrote his op-ed, sportswriter Sally Jenkins published an article entitled “ChatGPT Couldn’t Answer My Questions About Tennis, So It Made Things Up.” Jenkins recounted asking ChatGPT, undoubtedly one of the leading AI chatbots, to help her write an article about tennis. It repeatedly just made things up. When Jenkins would point this out, it would apologize, promise not to do it again, and then proceed to do even worse. Wrote Jenkins:

The OpenAI chatbot did not just seem misinformed or misinterpreting. It seemed devious. It felt obsequious. It appeared ingratiating, unctuous—until it was caught, at which point it pivoted to cunning, rationalizing, worming. It seemed to have no scruples whatsoever, no qualms about disgorging digital bilge into the universe, perhaps attributable to others, perhaps not, perhaps legitimate, perhaps not.

OK, what does this tell us? Should our focus be on the ethical responsibility of companies like OpenAI to produce AI tools that will not deceive their human masters, in violation of science fiction writer Isaac Asimov’s famous Three Laws of Robotics? Or are current AI models so cartoonishly inept that OpenAI’s real ethical violation is its false promotion of a worthless product? Frankly, we are uncertain.

But we do know that you might lean toward adopting the “worthless product” interpretation if you’d just finished reading, as we have, Arvind Narayanan and Sayash Kapoor’s AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference (2024), and Emily Bender & Alex Hanna’s The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want (2025).

Narayanan and Kapoor, through their book and their Substack newsletter of the same name, argue that many of the AI models touted as the future of technology by companies like Meta and OpenAI “do not and cannot work as advertised.” Bender and Hanna, who have a podcast called “Mystery AI Hype Theater 3000,” similarly warn that pretty much every time you read a headline about the newest revolutionary AI technology and what it will do, “[w]hoever is behind that is selling you a bill of goods.” They argue: “AI is a marketing term. It doesn’t refer to a coherent set of technologies.”

The two books cover much the same ground, viewing AI through a similarly skeptical lens. One or both discuss such familiar AI drawbacks as:

  • Using tremendous amounts of energy and water to train new AI models
  • Using artists’ creative content without credit, consent, or compensation
  • Training models on biased content, resulting in biased output
  • Causing unemployment and underemployment
  • Invading the privacy of people whose information is in the datasets used to train the models
  • Exploiting workers hired for content moderation and content classification
  • Creating deepfakes

More than most other books critical of AI, these two emphasize its unfulfilled promises, such as:

  • Elon Musk promised that Teslas would have full self-driving mode by the end of 2023; they did not.
  • Epic Systems developed an algorithm to detect sepsis, but when rolled out it managed to produce both false positives and false negatives at high rates.
  • Meta set up a system called Galactica to summarize academic papers, solve math problems, write code, and more. Its results were so often nonsensical that Meta pulled it from the internet after just three days.
  • SoundThinking, Inc. sold a product called ShotSpotter to cities, promising that it was “a proven acoustic gunshot detection system that alerts law enforcement to virtually all gunfire within a city’s ShotSpotter coverage area within 60 seconds,” and that it had a 97% accuracy rating. Actually, 87-91% of its alerts were false alarms.
  • A book written by AI and sold on Amazon gave faulty advice regarding which mushrooms are edible, leading to readers’ hospitalizations.
  • An insurance company used an AI predictive tool that concluded an 85-year-old woman would need 17 days of hospitalization to recover; the company cut off her coverage at that point even though she was still in severe pain and couldn’t push a walker without help.
  • Rite Aid used a defective AI facial recognition system that generated thousands of false matches, leading its employees to wrongly accuse customers of theft.
  • Toronto used an AI tool to predict water safety at public beaches. Though touted as over 90% accurate, the model’s assessments led beaches to stay open on 64% of the days when the water was actually unsafe.

Bender and Hanna strike this gloomy note:

We’ve seen in this book that AI hype serves the purposes of people in power in a few different ways. It helps particular companies and their investors profit by selling the technology. It helps others get rich by giving them cover to collect (e.g., steal) and then launder massive amounts of data. It helps others still make short-term gains by replacing stable, better-paying jobs with ones that are both more precarious and less fulfilling. Lastly, it helps those who are wont to devalue the social contract by spinning the fiction that real social services—our collective responsibility to each other—can be replaced by cheap automated systems. At a time when the AI boosters are selling their wares across every sector at top volume, flooding everyone everywhere with the fear of missing out (FOMO), and when it seems like the loudest voices opposing them are the equally hype-tastic AI Doomers, it can seem impossible to see a way through. (p. 163)

Narayanan and Kapoor, in contemplating why myths about AI persist, suggest:

Companies have commercial interests in spreading hype about AI—they want to sell more of their products. And so they talk up the impact of AI in “revolutionizing” their industry. Investors like to fund groundbreaking AI, so in some cases, companies hype their “AI,” even when it is humans pulling the strings behind the scenes. Calendar scheduling company x.ai (not the same as Elon Musk’s recently launched company) advertised that its AI personal assistant could schedule meetings automatically, claiming, “Our scheduling AI will send time options to your guests taking into account any additional details from you.” In fact, the company tasked humans with reading and correcting errors in nearly every email generated by its AI scheduler.  (p. 230)

If we listened only to Bender & Hanna and Narayanan & Kapoor, we might comfortably settle into an AI Doubter camp, but we have just started yet another book—Richard Susskind’s How to Think About AI: A Guide for the Perplexed (2025). Because we are indeed perplexed about the promise and perils of AI, we’d best read this book, which will necessitate a Part 2 of this particular blog post.


Sources:

Dario Amodei, “Anthropic C.E.O.: Don’t Let A.I. Companies Off the Hook,” New York Times, June 5, 2025, at https://www.nytimes.com/2025/06/05/opinion/anthropic-ceo-regulate-transparency.html.

Isaac Asimov, I, Robot (1950).

Emily Bender & Alex Hanna, The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want (2025).

Ina Fried, “Anthropic’s New AI Model Shows Ability to Deceive and Blackmail,” Axios, May 23, 2025, at https://www.axios.com/2025/05/23/anthropic-ai-deception-risk.

Sally Jenkins, “ChatGPT Couldn’t Answer My Questions About Tennis, So It Made Things Up,” Washington Post, June 4, 2025, at https://www.washingtonpost.com/sports/2025/06/05/chatgpt-accuracy-quotes-openai/.

Cade Metz & Mike Isaac, “Meta Is Creating a New A.I. Lab to Pursue ‘Superintelligence,’” New York Times, June 10, 2025, at https://www.nytimes.com/2025/06/10/technology/meta-new-ai-lab-superintelligence.html.

Arvind Narayanan & Sayash Kapoor, AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference (2024).

Cal Newport, “What Isaac Asimov Reveals about Living with A.I.,” The New Yorker, June 3, 2025, at https://www.newyorker.com/culture/open-questions/what-isaac-asimov-reveals-about-living-with-ai.

Richard Susskind, How to Think About AI: A Guide for the Perplexed (2025).


Blog Posts:

“Techno-Optimist or AI Doomer?: Consequentialism and the Ethics of AI”: https://ethicsunwrapped.utexas.edu/techno-optimist-or-ai-doomer-consequentialism-and-the-ethics-of-ai