
A.I. & Transparency: An Epic Deception

Epic’s widely used AI tool for sepsis detection promised accuracy, but the “black-box” nature of the algorithm made it difficult for independent researchers to evaluate how well the tool actually worked.

Sepsis occurs when pathogenic microorganisms or their toxins invade a person’s body. The victim’s immune response to such an invasion can cause tissue damage, organ failure, and death. Sepsis is the leading cause of death in U.S. hospitals. Because timely treatment with antibiotics and intravenous fluids has been shown to reduce sepsis-related mortality, it is very important to predict the risk of sepsis onset as early and as accurately as possible. In recent years, many companies have attempted to use artificial intelligence (AI) models to make such early predictions.

Epic, a U.S.-based company with access to the health records of more than 250 million people, became the early industry leader. It released its Epic Sepsis Model (ESM) in 2017. Hospitals supposedly could run ESM on their existing health records, saving time and money.

Hundreds of hospitals adopted ESM, and the company widely touted its adoption rate. If so many hospitals were adopting it, the AI model must be awesome, right? Well, maybe. What Epic did not do was release the results of any peer-reviewed studies demonstrating the model’s accuracy. Indeed, as is often the case with new technology, Epic claimed that its model was a proprietary trade secret, which prevented independent researchers from testing the model’s accuracy.

It turns out that the rate of adoption was not predictive of the model’s accuracy, which should not be surprising: it was eventually disclosed that Epic had paid various hospitals up to $1 million each to adopt ESM.

Although Epic claimed an accuracy rate of between 76% and 83% for ESM, the first full-fledged independent study, made possible by the researchers’ access to the records of a hospital using ESM, found an accuracy rate of only 63%. As AI skeptics Arvind Narayanan and Sayash Kapoor point out, that is not a lot better than the flip of a coin. Furthermore, ESM recognized only 7% of the sepsis cases that clinicians had missed.
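To see why 63% is so close to a coin flip, note that in the underlying study these figures are areas under the ROC curve (AUC): the probability that the model assigns a higher risk score to a randomly chosen sepsis patient than to a randomly chosen non-sepsis patient, where pure chance scores 0.50. The minimal sketch below simulates scores on hypothetical data (it is not Epic’s model and uses no real patient records; the 7% prevalence, cohort size, and the numpy/scipy/scikit-learn dependencies are all assumptions made for illustration):

```python
import numpy as np
from scipy.stats import norm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical cohort: 10,000 patients with ~7% sepsis prevalence.
# These numbers are illustrative, not taken from the study.
y_true = rng.random(10_000) < 0.07

def scores_with_auc(target_auc: float) -> np.ndarray:
    """Draw simulated risk scores whose expected AUC is target_auc.

    Binormal model: non-sepsis scores ~ N(0, 1), sepsis scores ~ N(d, 1),
    which gives AUC = Phi(d / sqrt(2)), so d = sqrt(2) * Phi^-1(AUC).
    """
    d = np.sqrt(2) * norm.ppf(target_auc)
    return rng.normal(loc=np.where(y_true, d, 0.0), scale=1.0)

for label, auc in [("coin flip", 0.50),
                   ("independent study of ESM", 0.63),
                   ("Epic's claimed lower bound", 0.76)]:
    scores = scores_with_auc(auc)
    print(f"{label:27s} AUC ~= {roc_auc_score(y_true, scores):.2f}")
```

On this scale, the gap between 0.63 and 0.50 means the model ranked a true sepsis case above a non-case only modestly more often than random guessing would.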

This study prompted an editorial by Drs. Habib, Lin, and Grant, who opined that “the ‘black box’ nature of these future machine learning tools” requires special caution by health systems choosing to rely on them. Because only Epic (and perhaps not even Epic) could understand why ESM was making the decisions it was making, detecting and minimizing its errors might be nearly impossible.

Eventually, after ESM had been in use for several years, Epic overhauled the algorithm, urging users to train it on their own patient data. “After years of insisting that a plug-and-play model could save lives, Epic had walked back on its claims.” (Narayanan & Kapoor).

In 2024, a different company received the FDA’s first-ever clearance for an AI sepsis detection tool.

Discussion Questions

1. Do you think that the “black box” nature of Epic’s proprietary algorithms contributed to the delay in independent researchers being able to adequately test Epic’s claims for ESM’s accuracy, claims that turned out to be inaccurate? Explain.

2. Should Epic have had to disclose the payments it was making to hospitals to adopt ESM? If so, why? If not, why not? Why would that have been relevant information for potential adopters of ESM?

3. Studies show that although most people are honest most of the time even when they are not being monitored, they are more likely to lie and cheat when not being watched. (Redish, p. 71). Do you think the same is true of companies like Epic? Explain.

4. Would open AI (also known as explainable AI) generally be preferable to black box AI (also known as opaque AI)? Would a proprietary black box approach be preferable because its potential profits would provide more effective incentives to entrepreneurs and investors? Explain your reasoning.

5. In areas like health care where people’s welfare is directly affected by the AI, as the patients were by Epic’s ESM, should open or explainable AI be required? Why or why not?

6. Epic’s algorithms were opaque because Epic protected them as proprietary trade secrets. But sometimes algorithms, such as those produced by large language models (LLMs) that are trained on hundreds of billions of text samples, are opaque because their sheer complexity makes it impossible to explain the LLM’s answers or its decision-making processes. Can transparency ever be achieved in this setting? What do we risk as users and consumers of such AI technology, if transparency cannot be achieved?


Bibliography

Katie Adams, “FDA Grants Its First-Ever Clearance for Sepsis Detection AI,” MedCity News, Apr. 4, 2024, at https://medcitynews.com/2024/04/fda-sepsis-ai/.

Anand Habib et al., “The Epic Sepsis Model Falls Short—The Importance of External Validation,” JAMA Internal Medicine, Vol. 181, No. 8, pp. 1040-1041 (2021).

Fahad Kamran, “Evaluation of Sepsis Prediction Models before Onset of Treatment,” New England Journal of Medicine AI, Vol. 1, No. 3 (2024).

Sania Kennedy, “Epic Sepsis Model Predictions May Have Limited Clinical Utility,” TechTarget, Feb. 26, 2024, at https://www.techtarget.com/healthtechanalytics/news/366590054/Epic-Sepsis-Model-Predictions-May-Have-Limited-Clinical-Utility.

Arvind Narayanan & Sayash Kapoor, AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference (2024).

Neil Raden, “How Did a Proprietary AI Get into Hundreds of Hospitals—Without Extensive Peer Reviews? The Concerning Story of Epic’s Deterioration Index,” Diginomica, Sept. 2, 2021, at https://diginomica.com/how-did-proprietary-ai-get-hundreds-hospitals-without-extensive-peer-reviews-concerning-story-epics.

A. David Redish, Changing How We Choose: The New Science of Morality (2022).

Casey Ross, “Epic’s AI Algorithms, Shielded from Scrutiny by a Corporate Firewall, Are Delivering Inaccurate Information on Seriously Ill Patients,” STAT, July 26, 2021, at https://www.statnews.com/2021/07/26/epic-hospital-algorithms-sepsis-investigation/.

Casey Ross, “Epic Overhauls Popular Sepsis Algorithm Criticized for Faulty Alarms,” STAT, Oct. 3, 2022, at https://www.statnews.com/2022/10/03/epic-sepsis-algorithm-revamp-training/.

Nicole Wetsman, “Health Records Company Pays Hospitals That Use Its Algorithms,” The Verge, July 26, 2021, at https://www.theverge.com/2021/7/26/22594241/epic-health-algorithm-payment-accuracy-sepsis.

Andrew Wong et al., “External Validation of a Widely Implemented Proprietary Sepsis Prediction Model in Hospitalized Patients,” JAMA Internal Medicine, Vol. 181, No. 8, pp. 1065-1070 (2021).
