In 2024, top language algorithms could “read” 2.6 billion words in just a couple of hours. This gives them a fighting chance of keeping up with the innumerable books and articles being written about the ethical implications of artificial intelligence (AI) and its many impacts. In 2025, we here at Ethics Unwrapped intend to devote a fair amount of attention to the myriad important moral questions raised by the AI revolution, but a good starting point is to concede the difficulty of the task.
The very first step in sound ethical decision making, say Trevino and Nelson, is to establish the facts to the extent possible. AI itself–via deepfake AI image generators, deepfake AI voice generators, and massive manipulation of social media and other sources of information–is already “distorting perceptions of reality” (Tiffany Hsu) and “destabiliz[ing] the concept of truth itself” (Libby Lange). But setting that aside, the fantastically complicated technology behind AI and the seemingly unknowable future impact many believe it will have on virtually every aspect of life on earth make it likely that moral decisions–as well as policy, commercial, technological, and other AI-related choices–will have to be made in a state of factual uncertainty so deep that it threatens to undermine the entire enterprise of moral judgment.
Let’s start with the most significant possible moral issue facing AI researchers, entrepreneurs, users, regulators and others: Are we about to see a version of AI that carries a meaningful chance of bringing about the apocalypse, such that someone should be putting the brakes on its development? Should today’s AI developers–like Oppenheimer and his colleagues as they developed the atomic bomb during World War II–keep the Bhagavad Gita reference (“Now I am become death, the destroyer of worlds”) in mind as they proceed? What we need to know in order to make a moral judgment regarding further AI development is whether doomsday prophecies are a realistic concern or just the ravings of a few nervous Nellies.
Some experts seem so confident in the promise of AI and so unafraid of possible negative consequences that they advocate going “full speed ahead” with developing AI. For example, Ray Kurzweil (an influential futurist) believes: “AI is the pivotal technology that will allow us to meet the pressing challenges that confront us, including overcoming disease, poverty, environmental degradation, and all of our human frailties. We have a moral imperative to realize this promise of new technologies.” Marc Andreessen (co-creator of the Mosaic browser, co-founder of Netscape, and famed Silicon Valley tech investor) argues that members of what he calls “the AI risk cult” are unreasonably engaged in “a full-blown moral panic,” while Mark Zuckerberg deems these cautionary folks “pretty irresponsible.” Yann LeCun (NYU professor and one of the three “godfathers of AI” who won the 2018 Turing Award for their contributions to AI’s development) believes that fears that AI development will cause serious problems “are overblown,” tweeting that “the most common reaction by AI researchers to these prophecies of doom is face palming.”
So, nothing to see here, right? Nothing to worry about? Would that the situation were so clear.
As it happens, the other two “godfathers of AI,” professors Geoffrey Hinton and Yoshua Bengio, signed a 2023 statement circulated by the Center for AI Safety stating that “[m]itigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Three leading developers of AI–Sam Altman of OpenAI (maker of ChatGPT), Demis Hassabis of Google DeepMind, and Dario Amodei of Anthropic–also supported the statement, as did dozens of other AI researchers. In 2023, a survey of 2,778 AI researchers showed that more than a third believed there was at least a 10% chance that advanced AI would lead to serious adverse outcomes, perhaps even human extinction. Other prominent experts who have voiced similar dystopian concerns include Bill Joy (co-founder of Sun Microsystems), Nick Bostrom (founder of Oxford’s Future of Humanity Institute), Lord Martin Rees (astrophysicist at Cambridge University and co-founder of the Centre for the Study of Existential Risk), Max Tegmark (MIT physicist and co-founder of the Future of Life Institute), Bill Gates, Elon Musk, and the late Stephen Hawking.
At the end of the day, AI technology is so complicated and progressing so rapidly that columnist David Brooks probably had it right when he concluded: “[I]t is literally unknowable whether this technology is leading us to heaven or hell. …A.I. is a field that has brilliant people painting widely diverging but also persuasive portraits of where this is going. …Nobody knows who’s right, but the researchers just keep plowing ahead.”
As if to emphasize the point of this post, after it was initially drafted a Chinese company announced that its new product, DeepSeek, could do everything the most powerful American AI models could do with just a fraction of the computer chips and energy. The announcement jarred both the AI industry and the stock prices of energy firms that had expected to supply far more power to AI companies than may now be needed going forward.
With so little factual certainty, how can anyone confidently judge the morality of those who continue to research, develop, market, and use AI tools? Or of those who are trying to slow the process or stop it altogether? This is a true ethical dilemma with no obvious right answer and seemingly outsized consequences for the entire world.
Sources:
David Brooks, “The Fight for the Soul of A.I.,” New York Times, Nov. 23, 2023.
Katja Grace et al., “Thousands of AI Authors on the Future of AI,” (Jan. 2024), at https://aiimpacts.org/wp-content/uploads/2023/04/Thousands_of_AI_authors_on_the_future_of_AI.pdf.
Yuval Noah Harari, Nexus: A Brief History of Information Networks from the Stone Age to AI 235 (2024).
Tiffany Hsu et al., “Elections and Disinformation are Colliding Like Never Before in 2024,” New York Times, Jan. 22, 2024.
Ray Kurzweil, The Singularity Is Nearer 285 (2024).
Heather Long, “’Drill, Baby, Drill’ Is Hitting a Pricing Problem,” Washington Post, January 30, 2025.
Andrew Marantz, “O.K., Doomer,” The New Yorker, March 18, 2024.
Cade Metz, “Mark Zuckerberg, Elon Musk and the Feud over Killer Robots,” New York Times, June 9, 2018.
Cade Metz, “What to Know about DeepSeek and How It Is Upending A.I.,” New York Times, Jan. 27, 2025.
Linda K. Trevino & Katherine A. Nelson, Managing Business Ethics: Straight Talk About How to Do It Right 103 (4th ed. 2007).
Chris Vallance, “Artificial Intelligence Could Lead to Extinction, Experts Warn,” The BBC, May 30, 2023, at https://www.bbc.com/news/uk-65746524.
Pranshu Verma & Gerrit De Vynck, “AI is Destabilizing ‘the Concept of Truth Itself’ in 2024 Election,” Washington Post, Jan. 22, 2024 (quoting Libby Lange).
J. Craig Wheeler, The Path to Singularity: How Technology Will Challenge the Future of Humanity 25 (2024).
Videos:
Artificial Intelligence: https://ethicsunwrapped.utexas.edu/glossary/artificial-intelligence.
AI Ethics: https://ethicsunwrapped.utexas.edu/glossary/ai-ethics.
Blog Posts:
“Artificial Intelligence, Democracy, and Danger”: https://ethicsunwrapped.utexas.edu/artificial-intelligence-democracy-and-danger.