A year ago today (“today” being the date this blog post is written, February 3, 2026), we published the first of several blog posts on AI ethics, this one titled “AI Ethics: ‘Just the Facts, Ma’am.’” Our central contention was that to make sound moral judgments one must first be in possession of the facts, at least to the extent possible. In the AI field, we posited, the relevant facts were so uncertain that confidently making moral choices was almost impossible.
We established that on moral issues small and large (large including the question of whether AI was to be the savior or destroyer of humanity), leading experts held diametrically opposed opinions. As an example, we observed that the three “godfathers” of AI who won the 2018 Turing Award for their contributions to AI development were badly split. Yann LeCun thought it ridiculous to believe that AI development might lead to humanity’s doom. The other two godfathers, Geoffrey Hinton and Yoshua Bengio, had recently signed a statement circulated by the Center for AI Safety taking the position that
“[m]itigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
One godfather was an AI boomer; the other two were AI doomers.
We write today to make the point that in the area of factual certainty relevant to key AI moral questions, we have made little progress in the intervening twelve months. That point was driven home just yesterday by a New York Times article titled “Where Is A.I. Taking Us?,” in which the Times’ Timo Lenzen questioned eight leading experts regarding their predictions for where AI is likely to go in the next five years. There was huge disagreement among these experts, even over such a relatively short timeline.
The experts included Melanie Mitchell (computer scientist and professor at the Santa Fe Institute), Yuval Noah Harari (historian, philosopher, and author), Carl Benedikt Frey (professor of AI and work at Oxford), Gary Marcus (founder of Geometric Intelligence and author of Taming Silicon Valley), Nick Frosst (co-founder of the AI start-up Cohere), Ajeya Cotra (AI risk assessor at METR, a research nonprofit), Aravind Srinivas (co-founder and CEO of Perplexity), and Helen Toner (interim executive director of Georgetown University’s Center for Security and Emerging Technology).
As noted, these experts disagreed widely on questions such as the near-term impact of AI on medicine, scientific research, education, mental health, and art and creativity.
In response to the question of whether AI will significantly increase unemployment in the U.S. in the next five years, Marcus said “true,” Toner said “false,” and the others stated no opinion. Will AI lead to a breakthrough treatment or cure for a major disease in that time frame? Toner said “true,” Cotra said “false,” and the others stated no opinion. How likely is it that we will see artificial general intelligence in the next 10 years? Toner said “unlikely,” Srinivas said “possible,” and Harari said “very likely.” There was similar disagreement on a range of other questions about AI’s likely impact, though the eight experts generally agreed that AI would have a meaningful impact on computer coding.
So, one year on, shocking levels of disagreement continue to pervade nearly every aspect of AI’s development and impact, rendering it perilous to make firm moral judgments on a range of issues.
Maybe building the data centers that firms like Anthropic think they need to train their new AI models will drain the American electrical grid and price ordinary Americans out of the electricity market (Witt), but perhaps building such centers in outer space will solve that problem (Hsu).
It is quite likely that many, many humans will lose their jobs to AI (Schaul). Presumably, people who lose their jobs to AI will be able to find suitable replacement positions (Groh). At least they will if we retrain them properly (Khan). But maybe we won’t…and then what?
It is likely that AI will destroy educational institutions and perhaps learning (Purser + Watkinson) and thinking (Chen) as well. But maybe its bad influences can be warded off (Slater), and conceivably it can successfully be put to work improving both (Singer).
AI chatbots seem to exacerbate the mental health problems of some children and adults (Gibson + Tiku), while simultaneously comforting and providing useful counseling to others (Witt + Rosenbluth). Is their impact, on balance, good or bad? It’s hard to tell.
Maybe all these billions of dollars being poured into AI development are creating a bubble that may wreck our economy (Lashinsky + Ovide), or maybe not (Streitfeld + Rothman).
AI may already be on the cusp of thinking (Somers) and of developing superintelligence and even consciousness (Montero). Or maybe not (Effron).
We are not the only ones to observe these titanic clashes of opinion, of course. As Ross Douthat wrote a couple of days ago:
Unfortunately, everyone I talk with offers conflicting reports. There are the people who envision A.I. as a revolutionary technology, but ultimately merely akin to the internet in its effects — the equivalent, let’s say, of someone telling you that the Indies are a collection of interesting islands, like the Canaries or the Azores, just bigger and potentially more profitable.
Then there are the people who talk about A.I. as an epoch-making, Industrial Revolution-level shift — which would be the equivalent of someone in 1500 promising that entire continents waited beyond the initial Caribbean island chain, and that not only fortunes but empires and superpowers would eventually rise and fall based on initial patterns of exploration and settlement and conquest.
And then, finally, there are the people with truly utopian and apocalyptic perspectives — the Singularitarians, the A.I. doomers, the people who expect us to merge with our machines or be destroyed by them. Think of them as the equivalent of Ponce de Leon seeking the Fountain of Youth, envisioning the New World as a territory where history fundamentally ruptures and the merely-human age is left behind.
Our takeaway is that the field of AI ethics remains frustratingly unsettled. Any attempt to reach moral certainty on the many issues raised by AI’s development is currently thwarted by rampant factual uncertainty.
Resources:
Brian Chen, “How A.I. and Social Media Contribute to ‘Brain Rot,’” New York Times, Nov. 6, 2025.
Elon Danziger, “ChatGPT Will Never Beat Indiana Jones,” New York Times, Dec. 22, 2025.
Ross Douthat, “Pay More Attention to A.I.,” New York Times, Jan. 31, 2026.
Blair Effron, “Why A.I. Can’t Make Thoughtful Decisions,” New York Times, Jan. 25, 2026.
Caitlin Gibson, “Her Daughter Was Unraveling, and She Didn’t Know Why. Then She Found the AI Chat Logs,” Washington Post, Dec. 23, 2025.
Brian Groh, “When A.I. Took My Job, I Bought a Chain Saw,” New York Times, Dec. 28, 2025.
Yuval Noah Harari, Nexus: A Brief History of Information Networks from the Stone Age to AI (2024).
Jeremy Hsu, “Data Centers in Space Aren’t as Wild as They Sound,” Scientific American, Dec. 9, 2025.
Sal Khan, “A 1 Percent Solution to the Looming A.I. Job Apocalypse,” New York Times, Dec. 27, 2025.
Adam Lashinsky, “‘Circularity’ Is a Flashing Warning for the AI Boom,” Washington Post, Dec. 8, 2025.
Timo Lenzen, “Where Is A.I. Taking Us? Eight Leading Thinkers Share Their Visions,” New York Times, Feb. 2, 2026.
Gary Marcus, Taming Silicon Valley: How We Can Ensure That AI Works for Us (2024).
Barbara Gail Montero, “A.I. Is on Its Way to Something Even More Remarkable Than Intelligence,” New York Times, Nov. 8, 2025.
Shira Ovide, “The AI Spending Frenzy Is So Huge That It Makes No Sense,” Washington Post, Nov. 7, 2025.
Ronald Purser, “AI Is Destroying the University and Learning Itself,” Current Affairs, Dec. 1, 2025.
Teddy Rosenbluth, “The Chatbot Is In,” New York Times, Nov. 17, 2025.
Joshua Rothman, “Is A.I. Actually a Bubble?,” New Yorker, Dec. 12, 2025.
Kevin Schaul, “Can AI Do Your Job? See the Results from Hundreds of Tests,” Washington Post, Jan. 8, 2026.
Natasha Singer, “College Students Flock to a New Major: A.I.,” New York Times, Dec. 1, 2025.
Joanna Slater, “To AI-Proof Exams, Professors Turn to the Oldest Technique of All,” Washington Post, Dec. 12, 2025.
James Somers, “The Case That A.I. Is Thinking,” New Yorker, Nov. 3, 2025.
David Streitfeld, “Why the A.I. Boom Is Unlike the Dot-Com Boom,” New York Times, Dec. 9, 2025.
Geoff Watkinson, “I’m an AI Power User. It Has No Place in the Classroom,” Chronicle of Higher Education, Jan. 9, 2026.
Stephen Witt, “Centers That Train A.I. and Drain the Electric Grid,” New Yorker, Oct. 27, 2025.
Stephen Witt, “The Race to Build the World’s Best Friend,” New York Times, Dec. 20, 2025.
Blog Posts:
“AI Ethics: ‘Just the Facts, Ma’am,’” at https://ethicsunwrapped.utexas.edu/ai-ethics-just-the-facts-maam.