Oxford University computer scientist Nigel Shadbolt and co-author Roger Hampson (S&H), like so many others these days, believe that we must think carefully about the ethical issues surrounding the development of artificial intelligence (AI), so they’ve written As If Human: Ethics and Artificial Intelligence (2025).

S&H are AI Doubters. They point out a litany of AI failures, limitations, and dangers: the chess-playing robot that broke an opponent’s hand; the self-driving cars that have killed people in accidents; the devastating invasions of privacy by companies and governments wielding AI; the dangers of AI-controlled tools of warfare; and the many AI judgment tools that have embodied, and even exacerbated, human biases in deciding how long individual prison sentences will be, which parents will lose custody of their children, and so on.

These dangers aside, S&H recognize that AI holds a great deal of promise. Still, they are not sure that Artificial General Intelligence (AGI) will ever arrive, and they are certain that AI will not match human thinking any time soon.

S&H point out that although AI is not human, people nonetheless tend to anthropomorphize it, as they often do with animals and other objects. People often talk to Alexa (Amazon’s virtual assistant technology) as if “she” were human. Many people treat ChatGPT as if it were a friend or even a love interest. Google engineer Blake Lemoine famously became convinced that the company’s LaMDA AI tool had become sentient.

S&H are not having any of this. They emphasize that AI is basically a bunch of wires and aluminum and silicon bits carrying batches of organized electrons. AI is not human in any way. Yet AI is having an increasingly significant impact on the world, for both good and ill. AI systems are conceived by humans, designed by humans, produced by humans, marketed by humans, and so on. Therefore, although AI is not human, moral judgments about its impact on the world must be made as if it were human. Hence the title of the book and the authors’ insistence that, in order to maintain accountability, “we need to judge the output of complex algorithms as if they embodied moral agency.” (p. 19)

Some have suggested that AI might, in all its wisdom, create new ethical standards beyond what humans are capable of. S&H disagree:

The possibility that AIs would invent for us, wrest from their own bowels, a transformative and unexpected new ethics, unrelated to any existing human ethics is, we think, symmetrical to the other much debated questions of (1) will machines ever be conscious and, if so, how would we know and (2) might machines take on some other attribute of life to a level we might not recognize? The answer to all of them is, not yet. And if ever, a long time from now. (p. 73)

As If Human contains numerous interesting discussions of timely and fascinating topics: the nature of consciousness; the contours of human agency; the many unique moral issues raised by actual and promised developments in AI; the fact that human policymakers have long operated on a mixed philosophy (neither purely consequentialist, purely deontological, nor purely virtue-based); whether machines can be virtuous (S&H say mostly ‘no’); the attempts at MIT and elsewhere to develop “moral machines” (which S&H define as “machines that can make moral decisions, or at least take moral considerations into account when performing their tasks”); and the growing disparity between AI-rich and AI-poor countries.

Ultimately, a key concept in the book is this: “Human values are what protect us from machines, however smart.” (p. 143)

S&H end with seven of what they term “proverbs” for how a good citizen should approach the future:

  1. A thing should say what it is and be what it says.
  2. Artificial intelligence should show respect for human beings.
  3. Artificial intelligences are only ethical if they embody the best human values.
  4. Artificial intelligence should be transparent and accountable to humans.
  5. Humans have a right to be judged by humans if they so wish.
  6. Decisions that affect a lot of humans should involve a lot of humans.
  7. The future of humanity must not be decided in private by vested interests. (p. 227)

We here at Ethics Unwrapped agree with S&H that “data professionals have a special obligation to ensure that these frameworks are applied – much like a properly trained physician has a responsibility to use up-to-date practices and take into account all the observable facts about the patient when making a prognosis.” (pp. 32-33) This book is worth a read.

Sources

Luciano Floridi, “AI as Agency without Intelligence: On ChatGPT, Large Language Models, and Other Generative Models,” Philosophy & Technology, 36(1) (March 2023).

Daniel Kahneman et al., Noise: A Flaw in Human Judgment (2021).

Nigel Shadbolt & Roger Hampson, As If Human: Ethics and Artificial Intelligence (2025).

Nigel Shadbolt & Roger Hampson, The Digital Ape: How to Live (in Peace) with Smart Machines (2018).