Many of us have watched enough movies on the topic (“2001: A Space Odyssey,” “War Games,” “Terminator,” “Blade Runner,” “Ex Machina,” “The Matrix,” and the like) to be viscerally concerned about today’s rapid-fire development of artificial intelligence (AI). And this concern is not unwarranted, for many of the most knowledgeable experts are themselves very apprehensive.

In March 2023, more than 1,000 AI technologists and researchers signed an open letter warning of an “out-of-control race to develop and deploy ever more powerful digital minds.” Two months later, more than 350 executives and engineers in the AI industry signed an open letter reading: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” Among the signatories of these documents were CEOs of three of the leading AI companies (Sam Altman of OpenAI, Demis Hassabis of Google DeepMind, and Dario Amodei of Anthropic), two of the three “godfathers of AI” who won a Turing Award for their work on neural networks (Geoffrey Hinton and Yoshua Bengio), Elon Musk, and Steve Wozniak. Furthermore, more than a third of 2,778 researchers polled that year expressed a belief that there was at least a 10% chance that advanced AI would lead to serious adverse outcomes, perhaps even human extinction.

These facts were worrisome, but they sent a slightly comforting signal to “AI doomers” (people seriously worried about AI bringing about the apocalypse) that at least those researchers, entrepreneurs, investors, and others in the driver’s seat of the AI evolution were taking seriously their moral responsibility to protect the human race. The March 2023 open letter, consistent with the precautionary principle, called for a six-month pause on the development of the most powerful AI models so their dangers could be assessed. Indeed, in testimony before Congress, Sam Altman called for both domestic and international regulation of AI development.

Yet, less than two years later we find ourselves in the midst of a breakneck race for market dominance by multiple companies, most of which have stopped expressing concerns about AI’s perils and begun emphasizing its benefits as they roll out their latest technology. OpenAI has been criticized for creating an “AI arms race” with its rapid deployment of products. Nearly half of its safety staffers have left the company, with one (Steven Adler) stating that he is “pretty terrified by the pace of AI development these days” and another (Jan Leike) saying that “safety culture and processes have taken a backseat to shiny products.” Microsoft has described the pace at which it released its own chatbot as “frantic.”

Thus, cautious AI development has been replaced by what appears to be an unconstrained race toward artificial general intelligence and even superintelligence, with insufficient attention paid to potential adverse consequences. What happened and why? Well, it’s all about incentives. Sometimes they overwhelm moral constraints. (Check out our video on the self-serving bias: https://ethicsunwrapped.utexas.edu/video/self-serving-bias.)

Consider the researchers. Physicist J. Robert Oppenheimer said: “When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb.” We imagine it would be pretty difficult for a researcher making rapid progress on the next great AI model, perhaps in competition with scientists at a rival company or in a rival country, to take a six-month hiatus and perhaps be “scooped” by the competition. So, they may well keep plugging away, even if they harbor qualms about their product comparable to Oppenheimer’s (“Now I am become death, the destroyer of worlds”). The potential professional prestige and personal satisfaction of winning the race might be overwhelming.

Researchers and other employees would also likely find it difficult to forgo monetary and related benefits. One senior employee at an AI start-up, who personally believed there was a 50% chance that AI would end life as we know it, was asked how he could continue to help build it. He responded: “…in the meantime, I get to have a nice house and car.” Journalist Andrew Marantz commented that the fact that people choose to make this sort of trade-off “could be a matter of simple greed, or subtle denialism. Or it could be ambition—prudently refraining from building something, after all, is no way to get into the history books….Elon Musk [who had often expressed great worry about the dangers of advanced AI] …has said that, as long as A.G.I. [artificial general intelligence] is going to be built, he might as well try to be the first to build it.” Sam Altman expressed a similar sentiment in 2015: “AI will probably, like, most likely lead to the end of the world, but in the meantime, there’ll be great companies.”

Speaking of companies, OpenAI began as a nonprofit, but created a for-profit subsidiary when the potential to make millions (or billions) became apparent. At this writing, co-founder Sam Altman is attempting to convert OpenAI into an entirely for-profit entity, though his efforts are being complicated by fellow co-founder Elon Musk’s hostile $97 billion bid for control of OpenAI.

Microsoft started with six “responsible AI” principles and then violated most of them when the competition heated up. Nobel Prize-winning economists Daron Acemoglu and Simon Johnson believe that “[t]he fact that these companies are attempting to outpace each other, in the absence of externally imposed safeguards should give us even more cause for concern, given the potential for A.I. to do great harm to jobs, privacy and cybersecurity. Arms races without restriction generally do not end well.” Big picture: AI firms went quickly from debating the ethics of launching AI products at all to launching them and talking with varying degrees of seriousness about the ethics of “responsible AI.” As former Google CEO Eric Schmidt recently said: “…these are really social and moral questions. The companies are doing what companies do. They’re trying to maximize their revenue. [What’s missing is a social consensus] of what’s right and what’s wrong.”

The federal government commissioned Gladstone AI to evaluate the national security risks created by this AI rivalry. In the executive summary of its lengthy report, Gladstone concluded:

The recent explosion of progress in advanced artificial intelligence (AI) has brought great opportunities, but it is also creating entirely new categories of weapons of mass destruction-like (WMD-like) and WMD-enabling catastrophic risks. A key driver of these risks is an acute competitive dynamic among the frontier labs that are building the world’s most advanced AI systems.

Frontier lab executives and staff have publicly acknowledged these dangers. Nonetheless, competitive pressures continue to push them to accelerate their investments in AI capabilities at the expense of safety and security.

Capitalism is a wonderful economic engine, but a significant flaw is its tendency to cause collateral damage as companies strive single-mindedly for profit. “The market will always push AI companies to move fast and break things,” or so it seems.

The late January 2025 release of Chinese company DeepSeek’s R1 AI model set off a panic in U.S. tech and investing realms. China had caught up to the U.S. overnight, it appeared (though many had their doubts). What tech investor extraordinaire Marc Andreessen termed China’s “Sputnik moment” caused a “how do we catch up?!” panic in the U.S. comparable to what followed the Soviet Union’s launch of the first satellite, Sputnik, in 1957. Worse, DeepSeek R1 may have been so cheap to produce because its makers sacrificed safety. Researchers Kassianik and Karbasi had a 100% success rate in inducing DeepSeek R1 to engage in “harmful behaviors including cybercrime, misinformation, illegal activities, and general harm.”

Government regulation might substitute where self-regulation fails, but states also have incentives. And one of their biggest incentives is to attract tech companies specializing in AI (and everything else) to their locales so that they can enjoy the jobs and tax infusions that follow. States more often do this by eliminating regulations, not adding them.

Geopolitical pressure weighs against national regulation, as U.S. politicians will hesitate to take actions hamstringing U.S. tech companies in their race for supremacy over Chinese and other international rivals. The Biden administration sought to win the AI race for the U.S. by limiting Chinese access to U.S. computer chips, but at least Biden issued an executive order encouraging AI developers to ensure their models were “safe, secure, and trustworthy.”

The Trump administration is similarly committed to winning the AI competition by, for example, supporting the $500 billion Stargate AI project to provide infrastructure for data centers. Unfortunately, the Trump administration is unconcerned with safety, security, and trustworthiness. One of Trump’s first acts was to issue an executive order rescinding Biden’s aforementioned executive order. Vice President Vance, for his part, has denounced any attempt to limit AI development on safety grounds, promised that the U.S. will not issue such regulations, and warned European nations against adding such regulations themselves, should they be so craven as to think guardrails on AI development necessary.

All in all, these are worrisome days for AI doomers. Individual engineers and investors, companies, states, and entire nations are all incentivized to go where AI has not gone before and to do so as fast as possible, dangers be damned. It seems extraordinarily unlikely that either the U.S. or Nazi Germany would have, in the throes of World War II, decided not to pursue creation of an atom bomb on moral grounds. Similarly, with both the Biden and Trump administrations viewing U.S. superiority over China in AI as an utmost national security priority, it seems unlikely that the race toward artificial general intelligence and superintelligence will be decelerated on moral grounds either.

We are not sure enough of the facts to opine decisively whether sound moral judgment militates in favor of or against rapid AI development. Instead, we will simply support the very practical observation of Princeton professor Zeynep Tufekci: “America can’t re-establish its dominance over the most advanced A.I. because the technology, the data and the expertise that created it are already distributed all around the world. The best way this country can position itself for the new age is to prepare for its impact.”


 

Sources:

Daron Acemoglu & Simon Johnson, “Big Tech Is Bad. Big A.I. Will Be Worse,” New York Times, June 9, 2023.

Daron Acemoglu & Simon Johnson, Power and Progress: Our 1000-Year Struggle over Technology & Prosperity (2023).

Reid Blackman, “Microsoft is Sacrificing Its Ethical Principles to Win the A.I. Race,” New York Times, Feb. 23, 2023.

Paul Dickson, Sputnik: The Shock of the Century (2001).

Ellen Francis, “ChatGPT Maker OpenAI Calls for AI Regulation, Warning of ‘Existential Risk,’” Washington Post, May 24, 2023.

Gladstone AI, Defense in Depth: An Action Plan to Increase the Safety and Security of Advanced AI (Feb. 2024), at https://www.gladstone.ai/action-plan.

David Goldman & Matt Egan, “A Shocking Chinese AI Advancement Called DeepSeek Is Sending U.S. Stocks Plunging,” CNN, January 27, 2025, at https://www.cnn.com/2025/01/27/tech/deepseek-stocks-ai-china/index.html.

Sharon Goldman, “Exodus at OpenAI: Nearly Half of AGI Safety Staffers Have Left, Says Former Researcher,” Fortune, Aug. 26, 2024.

Goodreads, at https://www.goodreads.com/quotes/11642962-when-you-see-something-that-is-technically-sweet-you-go.

Katja Grace et al., “Thousands of AI Authors on the Future of AI” (Jan. 2024), at https://aiimpacts.org/wp-content/uploads/2023/04/Thousands_of_AI_authors_on_the_future_of_AI.pdf.

John Herrman, “Out: Building God. In: Partnering with Apple,” New York Magazine, June 4, 2024.

Steve Inskeep, “6 Unsettling Thoughts Google’s Former CEO Has about Artificial Intelligence,” NPR, Feb. 5, 2025 (interview with Eric Schmidt), at https://www.tpr.org/2025-02-05/6-unsettling-thoughts-googles-former-ceo-has-about-artificial-intelligence.

Timothy Karoff, “Texas Just Took a Big Hit in Its Competition against California,” SFGATE, April 29, 2024, at https://www.sfgate.com/tech/article/austin-texas-big-hit-tech-race-california-19429213.php.

Andrew Marantz, “O.K., Doomer,” The New Yorker, March 18, 2024.

Dan Milmo, “Former OpenAI Safety Researcher Brands Pace of AI Development ‘Terrifying,’” The Guardian, Jan. 28, 2025.

Laura Rodini, “Sam Altman’s Net Worth: The OpenAI Founder Is Now a Billionaire,” TheStreet.com, Jan. 4, 2025, at https://www.thestreet.com/investors/sam-altman-net-worth-how-does-he-make-money.

Kevin Roose, “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn,” New York Times, May 30, 2023.

Reuters, “Trump Revokes Biden Executive Order on Addressing AI Risks,” Jan. 21, 2025, at https://www.reuters.com/technology/artificial-intelligence/trump-revokes-biden-executive-order-addressing-ai-risks-2025-01-21/.

Thomson Reuters, “The Economic & Regulatory Implications of Trump’s 2024 Election Victory,” Nov. 6, 2024, at https://www.thomsonreuters.com/en-us/posts/government/trump-economic-regulatory-implications/.

David Sanger, “Vance, in First Foreign Speech, Tells Europe that U.S. Will Dominate A.I.,” New York Times, Feb. 11, 2025.

Andrew Ross Sorkin et al., “A Safety Check for OpenAI,” New York Times, May 20, 2024.

Andrew Ross Sorkin et al., “What’s Behind Elon Musk’s Hostile Bid for Control of OpenAI,” New York Times, Feb. 11, 2025.

Zeynep Tufekci, “The Dangerous A.I. Nonsense That Trump and Biden Fell For,” New York Times, Feb. 5, 2025, at https://www.nytimes.com/2025/02/05/opinion/ai-deepseek-trump-biden.html.

Robert Wright, “Sam Altman’s Imperial Reach,” Washington Post, Oct. 7, 2024, at https://www.washingtonpost.com/opinions/2024/10/07/sam-altman-ai-power-danger/.

Cat Zakrzewski, “Vance Boosts AI Industry in France as Trump Embraces the ‘Broligarchy,’” Washington Post, Feb. 12, 2025.

 

Related Videos:

Artificial Intelligence: https://ethicsunwrapped.utexas.edu/glossary/artificial-intelligence.

AI Ethics: https://ethicsunwrapped.utexas.edu/glossary/ai-ethics.

Self-serving Bias: https://ethicsunwrapped.utexas.edu/video/self-serving-bias.

 

Related Blog Posts:

“Artificial Intelligence, Democracy, and Danger”:  https://ethicsunwrapped.utexas.edu/artificial-intelligence-democracy-and-danger.

“AI Ethics: ‘Just the Facts, Ma’am,’”: https://ethicsunwrapped.utexas.edu/ai-ethics-just-the-facts-Ma’am.

“AI Ethics: Techno-Optimist or AI Doomer? Consequentialism and the Ethics of AI”: https://ethicsunwrapped.utexas.edu/techno-optimist-or-ai-doomer-consequentialism-and-the-ethics-of-ai.