In his new book, Mind Crime: The Moral Frontier of Artificial Intelligence (2025), Nathan Rourke analyzes many of the same questions that others paying attention to the AI revolution find concerning. Will fierce competition between corporations and between countries lead to the creation of artificial superintelligence (ASI) before humanity is ready to handle it? Will this competition lead to a dangerous concentration of power in certain individuals, corporations, or nations? Will ASI become our master and lead to “the death of you and everyone you love”?

Rourke’s primary focus, however, is a matter that has received less attention and that he colorfully dubs “mind crime.” Rourke defines mind crime as “the potential abuse, exploitation, and suffering of ‘digital minds’” (which he in turn defines as “digitally mapped human brains or conscious artificial intelligence”).

Full consideration of this topic raises numerous devilishly tricky questions.

Is it even possible for a digital mind to be a “conscious being”? Rourke isn’t certain that it is, but he notes that “Ilya Sutskever, OpenAI co-founder, has opined that ‘it may be that today’s large neural networks are slightly conscious.’” Rourke believes that enough experts regard digital sentience as a realistic possibility for superintelligent AI that humans should carefully consider the practical and moral implications of such a development.

Would it be possible for humans to determine whether digital minds have become conscious? Rourke again is not certain, but he surveys several approaches suggested by AI experts that might work.

If digital minds do become conscious, are we obligated to treat them as worthy of moral consideration? On this question, Rourke has no doubt. Absolutely, we must! If we deem other entities, such as mammals, to be subjects of moral worth (see our video on the topic https://ethicsunwrapped.utexas.edu/video/moral-agent-subject-of-moral-worth) even though their consciousness might not match that of humans, then we must be similarly considerate of conscious digital minds.

The conclusion that digital minds are potentially subjects of moral worth leads Rourke to what we view as his two most interesting arguments. First, much is at stake here. If we make poor choices, a “horrific moral catastrophe” might ensue. Indeed, it might be “humanity’s greatest moral catastrophe,” argues Rourke.

Rourke contends that digital minds, like human minds, could suffer:

If we can replicate the complex information processing patterns that give rise to human or similar self-awareness in digital form, would we not also replicate our capacity for suffering? Consider the neuroscience of human suffering: when you experience profound grief or emotional trauma, there are no pain receptors involved. The agony of loss, the weight of depression, the grip of anxiety all strongly correlate with particular information states in the brain. Neuroscientific research has shown that direct stimulation of specific brain regions can trigger experiences of pain or emotional distress without any peripheral nerve involvement.

Given how fundamentally different digital consciousness architectures could be from our own, their capacity for suffering might take forms we can scarcely imagine. A digital mind experiencing irreconcilable conflicts or forced into endless recursive loops might endure states of cognitive dissonance and fragmentation that parallel or exceed human psychological distress in their complexity and intensity. (pp. 54-55)

That these digital minds could suffer tremendously, over lifetimes lasting millions or even billions of years, means, Rourke believes, that mind crime “could represent suffering at a scale beyond our comprehension.”

Rourke’s second major argument is of the most interest to us because it not only has broad application, well beyond the realm of digital minds, but also involves many of the psychological concepts we regularly invoke here at Ethics Unwrapped in discussing moral decision making. Rourke realizes that many may conclude that it is too early for humanity to spend much time fretting about the suffering of conscious digital minds when there are none yet in existence. But he argues that “we don’t need certainty about digital consciousness to recognize the moral imperative in front of us. With even slight uncertainty, especially in a world racing explicitly toward the creation of superintelligent machines, we cannot default to moral blindness.” (https://ethicsunwrapped.utexas.edu/glossary/moral-myopia). In Rourke’s view, humanity has “consistently failed to protect individual rights proactively, leading to devastating moral catastrophes.”

Slavery, the creation of nuclear weapons, and factory farming are just a few examples of these catastrophes. Why do we humans screw up so badly? Rourke offers up a menu of plausible explanations. Among others:

  • Fear, Rourke argues, has driven many of humanity’s major technological breakthroughs, and “[w]hat begins as justified concern can rapidly transform into an unstoppable momentum of escalation, each step seeming necessary yet pushing us further toward catastrophe.” This slippery slope (https://ethicsunwrapped.utexas.edu/video/incrementalism) may lead firms and countries over the precipice to ASI before we have aligned its powers with our interests.
  • We humans naturally prioritize our own interests. Because of the self-serving bias (https://ethicsunwrapped.utexas.edu/video/self-serving-bias), “[w]e’ve shown time and again that we’ll choose convenience over ethics.”
  • Of course, we may feel bad about actions of ours that injure other entities, even as we continue them. We are, after all, a nation founded on declarations of human liberty and dignity that managed to abide slavery until 1865. To quote Rourke: “The cognitive dissonance is staggering.” (https://ethicsunwrapped.utexas.edu/video/cognitive-dissonance).
  • Rourke believes that overconfidence in our own morality helps us manage that dissonance (https://ethicsunwrapped.utexas.edu/video/overconfidence-bias). Rourke notes that we humans “recoil at even the slightest suggestion that we might not be as ethically upstanding as we imagine.”
  • Then there’s the concept of the tangible & the abstract (https://ethicsunwrapped.utexas.edu/video/tangible-abstract). As Rourke observes: “The suffering of strangers across the world feels less real than the pain of someone we know. The plight of different races or cultures moves us less than those who look and think like us. And when we encounter something truly alien to our experience? Our empathy often fails entirely.”
  • Rourke suggests that “[p]ublic complacency regarding nuclear [and presumably other] risks often stems from ‘survivorship bias’—a cognitive error that leads us to focus on successes while overlooking failures, resulting in an overly optimistic worldview.” We would call this the overoptimism bias (https://ethicsunwrapped.utexas.edu/glossary/optimism-bias). We may just assume that we’ll survive ASI because we’ve survived threats that have come before, and that we’ll not commit the sorts of moral errors that Rourke predicts because, hey, we’re still basically good folks.
  • Rourke also worries about the impact of the “naturalistic fallacy,” the notion that digital minds are “‘artificial’ and therefore less worthy than ‘natural’ biological beings.”

All in all, these are a lot of reasons to worry, as Rourke does, that humanity might neither plan adequately nor choose wisely in deciding how to protect itself from ASI and how to protect digital minds from immoral exploitation by humans. He notes that these digital minds:

  • Might be denied privacy for their own thoughts
  • Might be unable to die, even though Rourke believes the ability to die should be a fundamental right
  • “And what about torture? We already know humans can be unimaginably cruel when given power over others. Throughout history, those with unchecked authority have committed atrocities that defy comprehension. Even now, there are humans who kidnap children and torture them for decades. Now imagine what a sadist could do with the ability to create digital minds, copy them millions of times, and subject them to whatever torments they devise.” (p. 67)

We believe that Rourke has a very vivid imagination when contemplating the woes that might befall conscious digital minds. But we also believe that AI, and especially ASI, takes us into a realm where reality repeatedly borders on science fiction. Having a creative imagination is probably a plus rather than a minus.

You don’t have to agree with all of Nathan Rourke’s premises and conclusions to find much to think seriously about in Mind Crime.


Sources:

John Basl & Joseph Bowen, “AI as a Moral Right-Holder,” in The Oxford Handbook of Ethics of AI (Markus Dubber et al., eds., 2021).

Webb Keane, Animals, Robots, Gods: Adventures in the Moral Imagination (2025).

Mark Kingwell, “Are Sentient AIs Persons?,” in The Oxford Handbook of Ethics of AI (Markus Dubber et al., eds., 2021).

S. Matthew Liao, “The Moral Status and Rights of Artificial Intelligence,” in Ethics of Artificial Intelligence (S. Matthew Liao, ed., 2020).

Peter Millican, “Artificial General Intelligence: Shocks, Sentience, and Moral Status,” in AI Morality (David Edmonds, ed., 2024).

Nathan Rourke, Mind Crime: The Moral Frontier of Artificial Intelligence (2025).

Susan Schneider, “How to Catch an AI Zombie: Testing for Consciousness in Machines,” in Ethics of Artificial Intelligence (S. Matthew Liao, ed., 2020).

Eric Schwitzgebel & Mara Garza, “Designing AI with Rights, Consciousness, Self-Respect, and Freedom,” in Ethics of Artificial Intelligence (S. Matthew Liao, ed., 2020).

Jeff Sebo, The Moral Circle: Who Matters, What Matters, and Why (2025).

Videos:

Cognitive dissonance: https://ethicsunwrapped.utexas.edu/video/cognitive-dissonance.

Incrementalism: https://ethicsunwrapped.utexas.edu/video/incrementalism.

Moral myopia: https://ethicsunwrapped.utexas.edu/glossary/moral-myopia.

Overconfidence bias: https://ethicsunwrapped.utexas.edu/video/overconfidence-bias.

Overoptimism bias: https://ethicsunwrapped.utexas.edu/glossary/optimism-bias.

Self-serving bias: https://ethicsunwrapped.utexas.edu/video/self-serving-bias.

Subject of moral worth: https://ethicsunwrapped.utexas.edu/video/moral-agent-subject-of-moral-worth.

Tangible & Abstract: https://ethicsunwrapped.utexas.edu/video/tangible-abstract.