Many interested in AI have been eagerly awaiting the just-published, provocatively titled book If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All by Eliezer Yudkowsky and Nate Soares. Yudkowsky has been a major AI naysayer for two decades and co-founded the nonprofit now known as the Machine Intelligence Research Institute (MIRI), which Soares now leads.

The book's general message runs as follows: Yudkowsky and Soares (Y&S) have spent years researching AI, focusing particularly on the “alignment” problem—how can we ensure that advanced AIs’ actions are aligned with humans’ interests? They concluded that we don’t really know how to ensure such alignment, and therefore:

If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth will die.

Y&S don’t sugarcoat much in this book, but they do allow as how “the situation is not hopeless; machine superintelligence doesn’t exist yet, and its creation can still be prevented.” With this book, they hope to enlist the rest of us in that prevention effort. Interestingly, the book came out just two days before a New York Times article described a seemingly unstoppable drive to create artificial superintelligence (ASI), software that can far outperform humans in nearly every thinking category. The article quoted OpenAI CEO Sam Altman, who said that in the near future his company alone will spend trillions of dollars trying to create ASI.

Is ASI feasible? If so, is it near? If created, will it (as Y&S firmly believe) definitely end humanity? As we have noted in earlier posts (e.g., https://ethicsunwrapped.utexas.edu/ai-ethics-just-the-facts-maam), we are not experts on any of these questions, and given that large numbers of exceedingly qualified experts disagree strongly on all of them, we hesitate to weigh in.

We have particular difficulty assessing Y&S’s often speculative arguments. In describing how ASI will come to be, how it will evolve, how it will learn to want, how humans will become an inconvenience to it, how it will seize power from humans and then eradicate them, and so on, Y&S frequently resort to fairy tales, parables, and science fiction to make their points. This is interesting, entertaining, and generally facially plausible. In fairness, Y&S face a world in which (a) ASI may never happen, (b) if it does happen, that may occur months, years, or decades from now, and (c) if it happens reasonably soon, it will work in ways that we cannot foresee now and may not be able to even remotely understand then. All these contingencies deeply handicap Y&S in supporting their claims.

The one set of claims that we are qualified to assess appears in Chapter 12 (“I Don’t Want to Be Alarmist”), where the authors argue, in part, that “[h]istory is full of [] examples of catastrophic risk being minimized and ignored.” Citing the Chernobyl and Titanic disasters as examples, Y&S note:

When a disaster is unthinkable—when authority figures insist with conviction that it’s not allowed to happen, when it’s not part of the usual scripts—then human beings have difficulty believing in the disaster even after it has begun; even when the ship beneath their feet is taking on water. (p. 200)

Why do humans often underestimate even existential threats? Why are many of the leading figures in the AI industry plunging full speed ahead in an effort to create AGI (artificial general intelligence) and even ASI, not long after many of them signed a March 2023 open letter calling for a moratorium on AI development until we could get a handle on safety issues? Many of the reasons Y&S cite are fairly obvious; we pointed them out ourselves six months ago in our blog post “Moral Judgments Swamped by Competitive Forces.”

First, Y&S cite incentives. Although they should be deeply concerned about catastrophic results, many AI researchers, coders, investors, and others plunge ahead, spurred on by potentially lavish riches, the possible glory of invention (“Dare we think: Nobel Prize??!!”), and/or the satisfaction of tackling and taming some of the world’s most challenging puzzles. Check out our video on the self-serving bias to see how such incentives can cause us to sideline our moral concerns. Y&S cite a favorite observation of ours—Upton Sinclair’s statement that it is difficult to get a man to understand something when his salary depends upon his not understanding it.

Second, Y&S cite what we call the optimism bias, the human tendency to overestimate the likelihood of good results and underestimate the likelihood of disaster. Here in Texas, the July 4 flood near Kerrville is a salient example. Generally, Y&S believe that “[i]t’s normal for a scientific community to be overly optimistic in the early days.” More specifically, Y&S believe that even those in the scientific community “are in denial about how hard the alignment problem is.” If the ASI alignment problem cannot be solved before ASI is created, Y&S believe that catastrophe is inevitable.

Third, there’s what we call the tangible & abstract, the tendency people have to be influenced more heavily by factors that are more tangible on various dimensions (e.g., time and distance) than by those that are less tangible. It is relatively easy for AI scientists and investors to envision what might happen to their companies, their jobs, and their money if they suspend their search for ASI or are beaten to the punch by another company. It is much harder to imagine how the first ASI might subjugate or even wipe out humanity, because no such tool exists today and it is difficult to picture how one might operate if it appears. This may cause AI scientists to underestimate the likelihood of an ASI takeover. Y&S point out: “Nobody knows the exact point at which an AI realizes that it has an incentive to take a test and pretend to be less capable than it is. Nobody knows what the point of no return is, nor when it will come to pass.”

The New York Times article mentioned above quotes Oren Etzioni, founding CEO of the Allen Institute for AI, who adds one more motivating factor. He opines that “FOMO—fear of missing out” is the strongest driver of this arguably reckless ASI obsession.

At this point, with billions of dollars already spent and much more committed and on the way, the sunk cost effect (the tendency of people who have invested substantially in an endeavor to stay invested long after it makes more sense to abandon ship) certainly plays a role as well.

With so much disagreement among so many experts, we cannot comfortably judge the validity of much of Y&S’s argument that everyone will die if ASI is created. But we do second their view that various psychological factors are driving AI researchers, coders, executives, investors, and governments recklessly forward in the endeavor to create artificial superintelligence. Unfortunately, we are not as optimistic as Y&S appear to be in Chapter 14 (“Where There’s Life, There’s Hope”) that we can successfully turn the tide by delaying the creation of ASI anywhere in the world until alignment has been guaranteed.

Resources

James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era (2013).

Emily Bender & Alex Hanna, The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want (2025).

Brian Christian, The Alignment Problem: Machine Learning and Human Values (2020).

Keach Hagey, The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future (2025).

Neil D. Lawrence, The Atomic Human: What Makes Us Unique in the Age of AI (2024).

Cade Metz & Karen Weise, “What Exactly Are A.I. Companies Trying to Build? Here’s a Guide,” New York Times, Sept. 16, 2025.

Eliezer Yudkowsky & Nate Soares, If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All (2025).

Blog Posts

AI Ethics: “Just the Facts, Ma’am,” at https://ethicsunwrapped.utexas.edu/ai-ethics-just-the-facts-maam.

AI Ethics: “The Obligation to Design for Safety,” at https://ethicsunwrapped.utexas.edu/ai-ethics-the-obligation-to-design-for-safety.

AI Ethics: “Is AI a Savior or a Con?—Part 1,” at https://ethicsunwrapped.utexas.edu/blog/page/2.

AI Ethics: “Is AI a Savior or a Con?—Part 2,” at https://ethicsunwrapped.utexas.edu/ai-ethics-is-ai-a-savior-or-a-con-part-2.

AI Ethics: “Is the Precautionary Principle Helpful?” at https://ethicsunwrapped.utexas.edu/ai-ethics-is-the-precautionary-principle-helpful.

Ethical AI: “Moral Judgments Swamped by Competitive Forces,” at https://ethicsunwrapped.utexas.edu/ethical-ai-moral-judgments-swamped-by-competitive-forces.

“Techno-Optimist or AI Doomer? Consequentialism and the Ethics of AI,” at https://ethicsunwrapped.utexas.edu/techno-optimist-or-ai-doomer-consequentialism-and-the-ethics-of-ai.

Videos

Optimism Bias: https://ethicsunwrapped.utexas.edu/glossary/optimism-bias.

Self-serving Bias: https://ethicsunwrapped.utexas.edu/video/self-serving-bias.

Tangible & Abstract: https://ethicsunwrapped.utexas.edu/video/tangible-abstract.