Sound moral judgments must be based on facts. People court disaster when they make morally tinged decisions based on nothing more than speculation. We believe that artificial intelligence (AI) now presents the world with several of its most critical moral issues. We have addressed AI ethics in several recent blog posts and have emphasized this point because the most knowledgeable people in the world regarding AI hold widely disparate views about its promises and perils. Whether we have a moral obligation to encourage unfettered AI development or to monitor and restrain such development (or something between those two poles) remains uncertain.

It now makes sense to consider Neil D. Lawrence’s provocatively titled new book, The Atomic Human: What Makes Us Unique in the Age of AI (2025). Lawrence has worked on machine learning models for more than 25 years and has often been in “the room where it happens” with the brightest AI minds in the world. Most recently, after three years as director of machine learning at Amazon, he became the DeepMind Professor of Machine Learning at the University of Cambridge.

We cannot possibly summarize this dense 448-page book adequately, but we promise that if you read it you will be a killer addition to any Friday night pub quiz team in London or Edinburgh. It contains fascinating tidbits about the first moon landing, Alan Turing the marathoner, the nature of trust, Erwin Rommel, tesseracts, pretty much any mathematician you’ve ever heard of (including Babbage, Bayes, Bernoulli (all three of them), Boole, Box…and that’s just the “B’s”), the Industrial Revolution, Bletchley Park, the first death caused by an autonomous vehicle, Prometheus, backpropagation, reinforcement learning, social cues, optic nerves, the University of Texas (we had to mention that one), and the fact that the best defense the British had against the V-2 rocket in World War II “was to propagate a fiction that the rockets were overshooting their target and, falling for this fiction, the Germans then recalibrated the autopilot guidance systems, causing the rockets to fall short.”

Big picture: Lawrence calls his book a “piecemeal social philosophy on how to react to the computer’s new capabilities.” To adequately compare and contrast AI with human intelligence, Lawrence describes the evolution, characteristics, and limitations of human intelligence. He also traces from their earliest origins (think Aristotle and before) the mathematical theories and engineering breakthroughs that over the centuries led to the neural networks, large language models, and human analogue machines (HAMs) of today’s AI. Few people can have thought more deeply about these topics than Lawrence.

Lawrence pronounces himself an AI optimist. He has used AI to deliver Amazon’s packages all over the world and for many other purposes. Nonetheless, he has many concerns about AI. One thing he is not concerned about is that AI will imminently take over the world and turn all of us into paperclips. Experts like Nick Bostrom and Jack Good speak of “superintelligence” and “ultraintelligence,” respectively, and it is true that recent tools like ChatGPT can pass the Turing Test.

Lawrence cautions: “the artificial intelligence we are now peddling, the techniques we are using, simply combine very large datasets and computers. It is a mix of advanced computation and statistics,” and not much more. Lawrence believes AI has a good PR firm.

In response to Nick Bostrom’s prediction of a “singularity,” in which machines become smart enough to redesign themselves and become so “superintelligent” that humans cannot control them, Lawrence says: “hooey.” He describes the many things he has done with AI over the years, observing that “none of these systems has expressed any ill will towards me. They haven’t really expressed themselves at all.”

So, Lawrence is not losing any sleep over an imminent takeover of humanity by our AI masters. However, he does have several concrete concerns about AI that deserve consideration. First, he is concerned about control over information. Humans become vulnerable when machines harvest and use their data. Many of today’s most powerful companies, such as Facebook and other social media firms, make their money by harvesting people’s most sensitive personal information. Lawrence refers to this as “System Zero – a decision-making system that uses our data to second-guess us, prejudge what we want and restrict our view of the world.” Lawrence thinks we give up our personal freedom when we give away access to our personal information.

Second, Lawrence is concerned with the entities—the “digital oligarchy”—that own AI. They profit immensely and, as is often the case with increased automation, most of the rest of us do not. Benefits are distributed in an extremely uneven manner. In an epilogue, Lawrence despairs regarding the seeming inability of these companies (Facebook, OpenAI, etc.) to self-regulate.

Third, AI has become so complicated that even the digital elite do not fully control or understand their own systems. Lawrence uses Facebook as an example. In the wake of the 2016 election, some suggested that Facebook had helped elect Donald Trump. CEO Mark Zuckerberg deemed this a “crazy idea.” However, an 11-month investigation by Facebook found that the Internet Research Agency (IRA), a tool of the Russian government, had used Facebook to reach 126 million Americans with targeted messages aimed at helping Trump and damaging his opponent, Hillary Clinton.

Writes Lawrence: “These companies often don’t understand their own systems, let alone the effect they are having on society and culture.”

Fourth, the generative AI products now flooding the marketplace enable AI not only to pass the Turing Test, but also to manipulate humans in ways that were never possible before. Lawrence says we’ve tended to labor under the “AI fallacy,” the notion that AI will ultimately adapt to humans and serve humans. “Jeeves in a computer,” he calls it. However, that has not been the case, and Lawrence worries that it will continue not to be the case:

In commonplace usage, the word “intelligence” implies common sense and empathy. It implies a range of evolved characteristics that we take for granted in our human companions, and even our animal companions. It implies that we’re creating a flexible entity that could seamlessly integrate with the fabric of human society.

In practice, we are only just starting to see the first glimpses of this possibility in the first wave of HAMs [human analogue machines like generative AI] that have emerged. A major question is to what extent the AI fallacy will continue to hold. Modern AI systems have moved a long way from the rigid classical AI ideas. Will we continue to have to adapt to the machine, or will it adapt to us? And if it does gain a deeper understanding of who we are and adapts to our needs, how do we prevent it manipulating us? (pp. 357-358)

Lawrence urges us to think carefully about how to encourage AI business models that will advance human values and shield human vulnerabilities, rather than just line the pockets of the AI firms. “Within the complex relationship between humans and machines, we need to ensure that humans remain in control,” Lawrence writes. He concludes that AI should never be more than a tool to assist humans in achieving their goals.

This book is unlikely to settle many of the factual questions in this ongoing debate about AI’s promises and perils, but it does add many insightful arguments worthy of serious consideration.


Resources:

Max Bennett, A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains (2023).

Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (2014).

John Edwards, “The Atomic Human” (book review), Dec. 13, 2024, at https://medium.com/data-policy/the-atomic-human-understanding-ourselves-in-the-age-of-ai-by-neil-lawrence-df97c47aaa74.

Ray Kurzweil, The Singularity Is Nearer: When We Merge with AI (2024).

Neil Lawrence, “Living Together: Mind and Machine Intelligence,” May 22, 2017, at https://arxiv.org/abs/1705.07996.

Neil Lawrence, The Atomic Human: What Makes Us Unique in the Age of AI (2025).

Craig Wheeler, The Path to Singularity: How Technology Will Challenge the Future of Humanity (2024).


Videos:

AI Ethics: https://ethicsunwrapped.utexas.edu/glossary/ai-ethics.

Algorithmic Bias: https://ethicsunwrapped.utexas.edu/glossary/algorithmic-bias.