There is little question that artificial intelligence (AI)—if it continues to be developed as most experts foresee—will reshape our world. Some changes will be positive. Some will be negative.
As we pointed out in a previous blog post (AI Ethics: “Just the Facts Ma’am”), having a firm handle on the facts is a prerequisite to making sound moral judgments. Yet so many experts disagree so strongly about the promises and perils of continued AI development that such factual certainty is not currently available.
A second post (“Techno-Optimist or AI Doomer? Consequentialism and the Ethics of AI”) provided more detail regarding the factual controversies, emphasizing that a confident consequentialist judgment regarding the morality of continuing to develop AI is very difficult to render, in part because the upside of AI is large but as yet unknown while the downside is similarly uncertain and potentially apocalyptic. Many deeply knowledgeable experts believe there is a meaningful chance that AI could end civilization as we know it.
This post explores whether the precautionary principle, which is often applied to guide important moral and policy decisions, might prove useful in this setting.
The precautionary principle is a guide to both moral behavior and policy decisions. Its origins are often traced to the 1992 Rio Declaration on Environment and Development, which stated, in part: “Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.” The precautionary principle (PP) has been widely used in international law, especially regarding potential damage to the environment. There is no single accepted definition of the PP, and there are multiple theories as to how it should be applied.
The most controversial part of the PP is its flipping of the burden of proof. Say that a company wishes to build a plant near an environmentally fragile area. Before the PP was widely recognized, the burden would generally have been on regulators to prove that the plant posed a meaningful danger to the environment before they could regulate. The PP flips the burden, as embodied in the 1998 Wingspread Declaration on the Precautionary Principle: “When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not established scientifically. In this context the proponent of the activity, rather than the public, should bear the burden of proof.”
That activity might be building a large nuclear power plant, launching a new drug with addictive properties, genetically editing embryos, or developing a new AI product that (to use Nick Bostrom’s famous hypothetical) is tasked with making as many paper clips as possible. Indeed, the PP has already been officially applied in the European Union to AI activities via the Artificial Intelligence Act and the AI Liability Directive. And in 2023 the Biden administration received voluntary assurances from many American AI companies that they would follow the PP’s guidance, although those assurances have largely melted away, especially in the wake of the early 2025 announcement of China’s DeepSeek artificial intelligence model which may (or may not…at this writing it is too early to tell) be a game-changer. (See a third blog post at https://ethicsunwrapped.utexas.edu/ethical-ai-moral-judgments-swamped-by-competitive-forces.)
Does the precautionary principle provide meaningful help in our attempts to judge the morality of those who are pushing forward with their efforts to develop AI notwithstanding its potentially apocalyptic consequences, not to mention its more mundane possible impacts, such as discriminatory databases, vast unemployment for humans, and the exacerbation of disastrous climate change trends?
Probably not. Unfortunately.
First, in most formulations, the PP is triggered when (a) an activity threatens serious damage of some sort, and (b) a scientific knowledge threshold is met, justifying application of the PP. The first criterion is surely met, because many experts in the AI field believe that additional AI development could cause many harms, including possibly threatening the continued existence of life as we know it. But do we meet a sufficient scientific knowledge threshold to justify application of the PP? Our previous blog posts have demonstrated the uncertain nature of, and strong controversy about, the future dangers and benefits of AI. Even if we were to conclude that we should apply the PP in deciding our future course of action, the doctrine adds no certainty to the risk/benefit calculation that will be critical to making defensible moral and policy choices.
Second, the PP itself is often strongly criticized. Legal scholar Cass Sunstein points out that the PP is “hopelessly vague,” giving no guidance as to what a proper level of precaution would be in any given case. Worse, it is “incoherent” in that it urges a presumption in favor of regulation when an activity (like rolling out powerful new AI tools) threatens harm, but then gives that regulation a pass by failing to carefully consider the harm it might do by preventing implementation of those new AI tools. The regulatory actions to throttle AI development would prevent society from enjoying all the benefits created by that development, which could be substantial. Just consider, for example, the boost AI has given to vaccine development and its major breakthrough in modeling protein folding. Given current levels of factual uncertainty, it is impossible to judge which would be worse—suffering the disadvantages of AI or forgoing its benefits.
Regular readers will know that here at Ethics Unwrapped, we emphasize behavioral ethics—the psychology of moral decision making (see our video at Behavioral Ethics). Sunstein himself pays close attention to the findings of behavioral psychology research. He warns that defects in human reasoning may account for the appeal of the PP. First, he notes, many proponents of the PP “seem to be especially concerned about new technology,” because they seem to carry an unfounded belief that what is “natural” is automatically preferable to any alternatives. Pointing to tobacco, Sunstein notes that it is a product of nature but that it kills by “naturally” causing cancer. Second, Sunstein suspects that PP supporters are victims of loss aversion—the tendency people have to hate losses more than they enjoy gains (see our video at Loss Aversion). In this context, says Sunstein, the loss aversion concept would “predict that the precautionary principle would place a spotlight on the losses introduced by the risk and downplay the foregone benefits resulting from controls on that risk.”
Of course, on the flip side, many decry the PP because they dislike regulation in general. Congressman Jay Obernolte, for example, supports the values of “freedom and entrepreneurship.” While there are definitely legitimate concerns that overregulation might stifle innovation and prevent AI from reaching its potential for good, even as it aims to protect humans from AI’s excesses, these free marketers often fall victim to the tangible and the abstract (see our video at Tangible & Abstract), the tendency people often have to be “influenced more by what is immediately observable [e.g., the potential for profits from selling their AI product right now] than by factors that are hypothetical or distant, such as something that could happen in the future or is happening far away [such as AI developing an ability to overthrow its human overlords, as many AI experts worry].”
Sunstein points out that regarding application of the PP: “Some of the most difficult cases arise when (1) a product or activity has significant benefits and (2)(a) the probability of a bad outcome is difficult or impossible to specify (creating a situation of uncertainty rather than risk), and (b) the bad outcome is catastrophic…” AI presents just such a case. Pretty much every expert agrees that AI, if fully developed, could have many significant benefits—perhaps curing cancer and helping reverse climate change among them. On the other hand, many experts also believe that AI could cause catastrophic consequences, but are unable to quantify with confidence the likelihood of such an eventuality.
Ultimately, our goal must be to strike the proper balance between caution and risk. To do so effectively, we need more factual certainty than we have now, and the PP helps little in that regard. Indeed, it might appear that the precautionary principle offers us little more than did our mothers when they reminded us: “Better safe than sorry.” Still, when experts are bandying the word “apocalypse” about, that’s not bad advice to keep in mind. This is particularly true when we combine that commonsense notion with one other key principle of the PP: that we should anticipate damage from AI before it occurs and be proactive in protecting ourselves, not simply reactive, fixing harm only after it occurs.
Sources:
Marko Ahteensuu, “Rationale for Taking Precautions: Normative Choices and Commitments in the Implementation of the Precautionary Principle,” at https://www.kent.ac.uk/scarr/events/ahteensuu.pdf.
John Bailey, “Treading Carefully: The Precautionary Principle in AI Development,” AEIdeas, July 25, 2023, at https://www.aei.org/technology-and-innovation/treading-carefully-the-precautionary-principle-in-ai-development/.
Harry Boyle & Elizabeth Hollander, Wingspread Declaration on Renewing the Civic Mission of the American Research University (1999), at https://compact.org/sites/default/files/2022-05/wingspread_declaration.pdf.
Susan Carr, “Ethical and Value-Based Aspects of the European Commission’s Precautionary Principle,” Journal of Agricultural and Environmental Ethics, Vol. 15, pp. 31-38 (2002).
Jovanna Davidovic, “On the Purpose of Meaningful Human Control of AI,” Frontiers in Big Data, Jan. 9, 2023, at https://pmc.ncbi.nlm.nih.gov/articles/PMC9868906/.
Will Douglas Heaven, “DeepMind’s Protein-folding AI Has Solved a 50-year-old Grand Challenge of Biology,” MIT Technology Review, Nov. 30, 2020.
Bronwyn Howell, “Can AI Regulation Really Make Us Safe(r)?” AEIdeas, April 26, 2024, at https://www.aei.org/technology-and-innovation/can-ai-regulation-really-make-us-safer/.
Bronwyn Howell, “The Precautionary Principle, Safety Regulation and AI: This Time, It Really Is Different,” (American Enterprise Institute, Sept. 4, 2024), at https://www.aei.org/research-products/report/the-precautionary-principle-safety-regulation-and-ai-this-time-it-really-is-different/.
John O. McGinnis, “AI’s Future: Liberty or License?” Law & Liberty, June 1, 2023, at https://lawliberty.org/ais-future-liberty-or-license/.
Rep. Jay Obernolte, “The Role of Congress in Regulating Artificial Intelligence,” The Ripon Forum, Vol. 58, No. 3 (June 2023).
David B. Olawade et al., “Leveraging Artificial Intelligence in Vaccine Development: A Narrative Review,” Journal of Microbiological Methods 224: 106998 (2024).
Jose Felix Pinto-Bazurco, “The Precautionary Principle,” Earth Negotiations Bulletin, October 2020, at https://www.iisd.org/system/files/2020-10/still-one-earth-precautionary-principle.pdf.
Daniel Steel, Philosophy and the Precautionary Principle: Science, Evidence, and Environmental Policy 1-2 (2014).
Cass Sunstein, How Change Happens (2019).
Videos:
Artificial Intelligence: https://ethicsunwrapped.utexas.edu/glossary/artificial-intelligence.
AI Ethics: https://ethicsunwrapped.utexas.edu/glossary/ai-ethics.
Behavioral Ethics: https://ethicsunwrapped.utexas.edu/glossary/behavioral-ethics.
Loss Aversion: https://ethicsunwrapped.utexas.edu/video/loss-aversion.
Tangible & Abstract: https://ethicsunwrapped.utexas.edu/video/tangible-abstract.
Blog Posts:
“AI and the Energy Issue”: https://ethicsunwrapped.utexas.edu/ai-and-the-energy-issue.
“Artificial Intelligence, Democracy, and Danger”: https://ethicsunwrapped.utexas.edu/artificial-intelligence-democracy-and-danger.
“AI Ethics: ‘Just the Facts, Ma’am,’”: https://ethicsunwrapped.utexas.edu/ai-ethics-just-the-facts-maam.
“Ethical AI: Moral Judgments Swamped by Competitive Forces”: https://ethicsunwrapped.utexas.edu/ethical-ai-moral-judgments-swamped-by-competitive-forces.
“Techno-Optimist or AI Doomer? Consequentialism and the Ethics of AI”: https://ethicsunwrapped.utexas.edu/techno-optimist-or-ai-doomer-consequentialism-and-the-ethics-of-ai.