
Running With Scissors: AI and the Race for the Future

The race to develop and deploy artificial intelligence has led innovation to outpace ethical inquiry. Through the voices of students, AI experts and academics, and industry insiders, this documentary explores the risks of prioritizing speed over responsibility, and the ethical safeguards that can ensure a better future for humanity.

Discussion Questions

  1. As emerging algorithm-driven artificial intelligence (AI) continues to spread, will people be better off or worse off than they are today? Support your position with concrete examples.
  2. When organizations (businesses, government agencies, educational institutions, and so on) adopt new AI technologies without regard to the wider social, economic, cultural, and political contexts, they can endanger human liberty and exacerbate inequities in ways that are difficult to reverse. Give a first-hand example of this or find a first-person account of this phenomenon from a credible news source. Could the harmful impact have been avoided? If so, how? If not, why not?
  3. Stephen Hawking believed that AI would be either the best or the worst thing to happen to humanity. Can you find other experts who have espoused similar opinions? Do you agree with Hawking’s view? Why or why not?
  4. Again, Stephen Hawking believed that AI would be either the best or the worst thing to happen to humanity. Which outcome do you believe is more likely—that it will be the best or the worst thing? How do you believe that impact will manifest? Please give concrete examples.
  5. There are experts on both sides of the debate over whether AI’s promise to help mankind outweighs its perils, or vice versa. How do we choose whom to believe in this setting? Provide examples of opinions that you find persuasive.
  6. Many have claimed that AI’s potential impact on human health—creating new drugs, providing a diagnostic resource for physicians, streamlining health care processes, etc.—is one of its most promising features. Can you provide examples of this positive impact? Do you agree with these claims based on the evidence that is currently available? Explain.
  7. Many believe that AI will transform education. Have you seen evidence of this? If you are a student, please describe AI’s impact on your own education and that of your peers. Give specifics. Overall, do you think AI’s impact on education is likely to be positive or negative? Explain.
  8. Many believe that humans’ use of AI will ultimately damage their ability to think carefully and critically. Do you agree? Have you seen evidence of this? Have you yourself experienced such an impact? What examples of AI’s positive or negative impact on critical thinking have you seen? Please be specific.
  9. One expert believes that “AI is happening to us, not for us or with us.” What does this statement mean to you? Do you agree or disagree with this conclusion? If you agree, how might we change this outcome for the better?
  10. AI’s impact on human privacy seems to be one of the most significant concerns for many AI critics. Can you summarize a few of the most significant impacts of AI technology on human privacy? Do you agree that these incursions on human privacy are concerning? Has your privacy been adversely impacted by AI? In what way?
  11. AI developers are building huge numbers of data centers around the world. Such centers have been criticized for consuming inordinate amounts of water, energy, and real estate, resources that are often in short supply where such centers have been located. Allegedly, these centers are exacerbating climate change. Are these legitimate concerns? Explain and give specific examples. AI developers claim they are being creative in minimizing the environmental impacts of data centers. Can you find examples of their innovations in this regard? Do they alleviate your concerns? Explain.
  12. Do you worry about bad actors using AI for illicit purposes? What sorts of incidents do you worry about? Be specific. Are these worries sufficiently substantial that we should demand that AI companies install guardrails to minimize the likelihood of such bad events? Is that even feasible? Please consult experts.
  13. The Luddites were worried about steam power leading to massive unemployment in 19th-century England’s textile mills. The economy seems to have survived the industrial revolution and several more disruptions over time. But can it survive AI, or will so many jobs be destroyed that economic catastrophe follows and the employment prospects of several generations are devastated? Experts disagree. Some are convinced that the job apocalypse is already here. Others are positive that the economy (and jobs) can survive AI as they have survived other challenges over the centuries. Find the arguments made by a prominent commentator on each side, summarize their arguments, and then explain clearly why you find one side or the other to be the more convincing one.
  14. You have no doubt heard about people who have found their chatbots to be good friends, reliable advisers, and the most helpful therapists they have found. But you have probably also heard about chatbots that led teenagers to attempt to murder their parents or to attempt suicide. Do the mental health benefits of such bots outweigh the harms? Support your answer with concrete examples. How can AI companies introduce safeguards into their chatbots to recalibrate the risk/reward ratio of such AI tools? What ideas have experts recommended?
  15. Many chatbots currently are programmed to tend to agree with the humans that interact with them. This makes it more likely that the human will maintain connection with the bot (we all prefer to be told that we are right over being told the opposite), which increases engagement that is, in turn, economically beneficial to the bot’s maker. Such agreeableness can also reinforce existing disagreements in society, exacerbate division among people, and thus, perhaps, increase the isolation and loneliness felt by the humans using the bots. Can you think of any obvious ways to remedy these problems or to stop the reinforcing cycle of loneliness and division that AI seems to be creating? Do you know of any AI companies that are attempting to solve these issues? If so, how are they proceeding? Do you think their ideas will work? Why or why not?
  16. AI tools are becoming increasingly proficient at producing videos that make fiction look like fact. What harms do you perceive might be caused by such fakery? Can you find any expert recommendations for mitigating this problem? Can you yourself think of any steps that might be helpful? Will banning this type of AI tool or removing it from the market make this issue go away? Why or why not?
  17. While this documentary was being made, OpenAI removed Sora, an AI tool that creates images and videos, from the market. Sora heralded a new generation of video-generating tools that produced lifelike images and videos using text-based prompts. According to the New York Times, Sora made “disinformation extremely easy and extremely real.” OpenAI reportedly discontinued Sora because it had a weak business model; the company was spending a great deal of money to support the AI tool, but the return on investment was rapidly diminishing. Perhaps copyright issues and other liability questions surrounding Sora were part of OpenAI’s decision to discontinue the AI tool. But Sora’s ability to easily fabricate lifelike images and videos, and the potential downstream negative effects of AI “fakes” in the world, were not the driving force behind the company’s decision. How does this decision reflect on OpenAI’s commitment to AI safety? What does it say about the company’s commitment to AI ethics? If OpenAI had discontinued Sora primarily because of its risks to information integrity and to society, would that reflect a different kind of commitment? Explain.
  18. AI tools are famously liable to create and spread unintentional fabrications that are often called “hallucinations.” Please present a few examples that you have heard about. They shouldn’t be hard to find. Many experts believe that, given how LLMs work now, such fantasies are inevitable, a problem we’ll just have to live with. The evidence suggests that as AI systems get more powerful, they hallucinate more. What do you think about hallucinations? What kinds of harms do they present? Can you find experts who are attempting to find solutions that might mitigate this hallucination problem?
  19. Perhaps worse than hallucinations, AI models can churn out disinformation in record volumes. Those who wish to spread false beliefs for evil purposes, such as undermining societal stability or election integrity, can count on such models to spread lies around the world in unprecedented quantities. What solutions to this problem have experts suggested? Are they practicable? Explain.
  20. The business of AI seems to be producing billionaires at a record pace, threatening to exacerbate wealth inequality in our country and around the world. Do your research and decide whether you think that is true. In either event, do you believe that wealth inequality presents a danger to our society? Please spell out your opinion and present some supporting arguments.
  21. Most people would agree that fairness is a basic societal value, but there are many examples of AI algorithms that produced biased results when evaluating job candidates, identifying criminal suspects from surveillance camera footage, detecting cheating in essays written by nonnative English writers, etc. Why does such bias exist? Can it be prevented? Are AI companies making any progress in bringing bias down to an acceptable level? Explain.
  22. Transparency is another widely accepted societal value. As a citizen and a consumer, are you satisfied with the transparency offered by today’s creators of AI? Or does the “black box” problem persist? Explain. Is it practical for governments to attempt to mandate adequate transparency? Support your opinion.
  23. What do you think of the ethics of an AI company that spends money on lobbying and campaign contributions while trying to eliminate, or at least minimize, legislative attempts to require transparency? Is it ethical for an AI company to spend money on lobbying and financially support candidates who advocate for no AI regulation? Is this business decision beneficial or detrimental to its customers? What impact do such actions have on society in general? Do you trust AI companies that oppose all regulation? Explain your reasoning.
  24. Accountability is another arguably critical ethical value. We are now starting to see governments enact regulations to require safety guardrails for AI tools and litigation when chatbots induce teens and others to commit crimes or suicide. Are laws and lawsuits the best avenues for imposing accountability on AI? Why or why not? Will the threat of such laws and lawsuits incentivize AI companies to proactively create and incorporate safety guardrails? How? What do you think is the best route to maintaining AI accountability?
  25. Do you think it’s practical or feasible for AI companies to keep themselves accountable and to self-regulate to prevent potential harms from AI? Why or why not? Watch the Ethics Unwrapped glossary video on self-serving bias. With this concept in mind, does your perspective change? If so, how? If not, why not?
  26. Many of the biggest AI companies have significantly reduced or eliminated their AI ethics and safety teams. Some technology ethicists have quit, claiming their work is not heeded or taken seriously by their company. Are you worried by the headlines indicating that many AI companies are reducing or even eliminating their ethics staffers? Why or why not?
  27. It has been said that “ethics needs to be the bible” for continued AI development. Do you agree with this statement? What does this statement mean to you in practical terms? How can we make this vision a concrete embodiment of our societal values? How do we reach consensus on those values, or would consensus be impossible? Discuss.
  28. Many believe that once people started taking the monetary possibilities of AI models seriously, the profit motive that capitalism elevates quickly overwhelmed competing interests of safety, fairness, transparency, and general ethics. Do you agree or disagree? Do you think it necessary to attempt to put a leash on this branch of capitalism? If so, how would you manage it? Is it even possible?
  29. If you were in charge of putting together policy recommendations for one of the two major political parties as the next election rolls around, would you suggest a ban on AI regulation or a plan to vigorously regulate AI development? If you adopted the latter view, what would your suggested regulatory plan look like? What planks would make up your policy platform for ensuring that AI does more good than harm? Please be specific.
AI & Fairness: Beyond Blind Spots?

AI tools from companies like Amazon and Google were supposed to remove human bias from hiring, but instead ended up replicating and reinforcing the same discrimination they aimed to fix.

AI & Transparency: An Epic Deception

Epic’s widely used AI tool for sepsis detection promised accuracy, but the algorithm’s “black-box” nature made its effectiveness difficult to evaluate quickly and reliably.

AI & Trust: Tay’s Trespasses

Microsoft’s Tay, an AI chatbot intended as a friendly companion, was quickly manipulated into spewing offensive content by internet trolls—highlighting the need for trustworthy AI systems.

AI Ethics

AI ethics focuses on ensuring that AI is developed and deployed responsibly, promoting fairness, transparency, accountability, and societal well-being while minimizing harm.

Algorithmic Bias

Algorithmic bias occurs when AI algorithms reflect human prejudices due to biased data or design, leading to unfair or discriminatory outcomes.

Artificial Intelligence

Artificial intelligence (AI) describes machines that can think and learn like human beings. AI is continually evolving, and includes subfields such as machine learning and generative AI.

Technological Somnambulism

Technological somnambulism refers to the unreflective, blind creation and adoption of new technologies without consideration for their long-term societal and ethical impacts.

AI Ethics: “Just the Facts, Ma’am”

In 2024, top language algorithms could “read” 2.6 billion words in just a couple of hours. This gives them a fighting chance of keeping up with the innumerable books and articles being written about the ethical implications of various aspects and impacts of artificial intelligence (AI). In 2025, we here at Ethics Unwrapped intend to […]

AI Ethics: As If Human

Oxford University computer scientist Nigel Shadbolt and co-author Roger Hampson (S&H), like so many others these days, believe that we must think carefully about the ethical issues surrounding the development of artificial intelligence (AI), so they’ve written As If Human: Ethics and Artificial Intelligence (2025). S&H are AI Doubters. S&H point out a litany of […]

AI Ethics: Feeding the Machine

As is often the case, this blog post calls your attention to a new book we think is worth a peek—Feeding the Machine: The Hidden Human Labour Powering AI (2024) by James Muldoon, Mark Graham, and Callum Cant (whom we will collectively refer to as “MGC”). As you can tell from the spelling of “labour” […]

AI Ethics: Getting to Moral AI

As you have been able to tell from recent blog posts, we here at Ethics Unwrapped, along with most other sentient beings who are paying attention, believe that ongoing developments in the field of artificial intelligence (AI) present ethical challenges that demand our careful attention. Fortunately, three prominent experts—philosopher Walter Sinnott-Armstrong, data scientist Jana Schaich […]

AI Ethics: If Someone Builds It, Will We All Die?

Many interested in AI have been eagerly awaiting the just-published, provocatively-titled book If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All by Eliezer Yudkowsky and Nate Soares. Yudkowsky has been a major AI naysayer for 20 years and, with Soares, founded the nonprofit Machine Intelligence Research Institute (MIRI) in 2005. The […]

AI Ethics: Is AI a Savior or a Con? – Part 1

Several months ago, our blog post titled “Techno-Optimist or AI Doomer?: Consequentialism and the Ethics of AI” made the point that despite the ubiquitous attention being paid to artificial intelligence (AI), a technological concept that dates back at least 75 years, expert opinions regarding its utility and dangers were all over the map, ranging from […]

AI Ethics: Is AI a Savior or a Con? – Part 2

To make sound ethical judgments, people must know the facts. In the realm of artificial intelligence (AI), it is difficult to ascertain with certainty a key fact—whether AI is the most consequential technology in the history of the world as claimed by its proponents (“AI Boosters”) or is mainly snake oil and hype as claimed […]

AI Ethics: Is the Precautionary Principle Helpful?

There is little question that artificial intelligence (AI)—if it continues to be developed as most experts foresee—will reshape our world. Some changes will be positive. Some will be negative. As we pointed out in a previous blog post (AI Ethics: “Just the Facts Ma’am”), having a firm handle on the facts is prerequisite to making […]

AI Ethics: Moral Certainty Defeated by Factual Uncertainty

A year ago today (“today” being the date this blog post is written, February 3, 2026), we published the first of several blog posts on AI ethics, this one titled “AI Ethics: ‘Just the Facts, Ma’am.’” Our central contention was that to make sound moral judgments one must first be in possession of the facts, at […]

AI Ethics: The Atomic Human

Sound moral judgments must be based on facts. People court disaster when they make morally-tinged decisions based on nothing more than speculation. We believe that at this particular point in time, artificial intelligence (AI) presents the world with several of its most critical moral issues. We have addressed AI ethics in several recent blog posts […]

AI Ethics: The Obligation to Design for Safety

When architects design buildings or engineers design planes, they have a moral obligation to protect humans from harm. Think of the Hyatt Regency Walkway collapse in Kansas City or the Boeing 737 MAX crashes. Or think about Ford Motor Company which was in a race to match Japanese imports and beat domestic competitors General Motors […]

AI Ethics: What Duties Do We Owe a Sentient Digital Mind?

In his new book, Mind Crime: The Moral Frontier of Artificial Intelligence (2025), Nathan Rourke analyzes many of the same questions that others paying attention to the AI revolution find concerning. Will fierce competition between corporations and between countries lead to creation of artificial superintelligence (ASI) before humanity is ready to handle it? Will this […]

Ethical AI: Moral Judgments Swamped by Competitive Forces

Many of us have watched enough movies on the topic (“2001: A Space Odyssey,” “War Games,” “Terminator,” “Blade Runner,” “Ex Machina,” “The Matrix,” and the like) to be viscerally concerned about today’s rapid-fire development of artificial intelligence (AI). And this concern is not unwarranted, for many of the most knowledgeable experts are themselves very apprehensive. In March […]

Companion e-book


AI Ethics Companion Handbook

This e-book compiles all the resources on AI ethics available on the Ethics Unwrapped website. It will continue to be updated as new materials are published.

Bibliography

Daron Acemoglu & Simon Johnson, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity (PublicAffairs 2023).

James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era (Thomas Dunne Books 2013).

Emily Bender & Alex Hanna, The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want (Harper Collins 2025).

Andrius Bielskis, editor, Human Flourishing in the Age of Digital Capitalism: AI, Automation and Alienation (2025).

Reid Blackman, Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI (Harvard Business Review Press 2022).

Jana Schaich Borg, Walter Sinnott-Armstrong & Vincent Conitzer, Moral AI: And How We Get There (Pelican 2024).

Nick Bostrom, Deep Utopia: Life and Meaning in a Solved World (Ideapress 2024).

Annette Buhler, Navigating Ethical Leadership in the Age of AI (Kindle Direct Publishing 2024).

Mark Coeckelbergh, AI Ethics (MIT Press 2020).

Brian Christian, The Alignment Problem: Machine Learning and Human Values (W.W. Norton & Co. 2020).

Markus Dubber et al., editors, The Oxford Handbook of AI Ethics (Oxford University Press 2021).

David Edmonds, editor, AI Morality (Oxford University Press 2024).

Luciano Floridi, The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities (Oxford University Press 2023).

Tricia Bertram Gallant & David A. Rettinger, The Opposite of Cheating: Teaching for Integrity in the Age of AI (University of Oklahoma Press 2025).

Urs Gasser & Viktor Mayer-Schönberger, Guardrails: Guiding Human Decisions in the Age of AI (Princeton University Press 2024).

Keach Hagey, The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future (W.W. Norton & Co. 2025).

Karen Hao, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI (Penguin Press 2025).

Yuval Noah Harari, Nexus: A Brief History of Information Networks from the Stone Age to AI (Random House 2024).

Reid Hoffman & Greg Beato, Superagency: What Could Possibly Go Right with Our AI Future (Authors Equity 2025).

Debbie Sue Jancis, AI Ethics: Status of the Present, Ethical Dilemmas, and Frameworks for the Practical Mind (2024).

Webb Keane, Animals, Robots, Gods: Adventures in the Moral Imagination (Princeton University Press 2025).

Henry Kissinger et al., Genesis: Artificial Intelligence, Hope, and the Human Spirit (Little Brown & Co. 2024).

Ray Kurzweil, The Singularity is Nearer: When We Merge with AI (Viking 2024).

Neil D. Lawrence, The Atomic Human: What Makes Us Unique in the Age of AI (PublicAffairs 2024).

Matthew Liao, editor, Ethics of Artificial Intelligence (Oxford University Press 2020).

Hamilton Mann, Artificial Integrity: The Paths to Leading AI Toward a Human-Centered Future (Wiley 2025).

Gary Marcus, Taming Silicon Valley: How We Can Ensure that AI Works for Us (MIT Press 2024).

Ethan Mollick, Co-Intelligence: Living and Working with AI (Portfolio/Penguin 2024).

James Muldoon et al., Feeding the Machine: The Hidden Human Labour Powering AI (Canongate Books 2025).

Madhumita Murgia, Code Dependent: How AI Is Changing Our Lives (Picador 2024).

Arvind Narayanan & Sayash Kapoor, AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference (Princeton University Press 2024).

Parmy Olson, Supremacy: AI, ChatGPT, and the Race That Will Change the World (St. Martin’s Press 2024).

Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Broadway Books 2016).

Nick Polson & James Scott, AIQ: How People and Machines Are Smarter Together (St. Martin’s Press 2018).

Kevin Roose, Futureproof: 9 Rules for Surviving in the Age of AI (Random House 2022).

Jathan Sadowski, The Mechanic and the Luddite: A Ruthless Criticism of Technology and Capitalism (University of California Press 2025).

Jeff Sebo, The Moral Circle: Who Matters, What Matters, and Why (W.W. Norton 2025).

Nigel Shadbolt & Roger Hampson, As If Human: Ethics and Artificial Intelligence (Yale University Press 2024).

Mustafa Suleyman, The Coming Wave: Technology, Power, and the 21st Century’s Greatest Dilemma (Crown 2023).

Christopher Summerfield, These Strange New Minds: How AI Learned to Talk and What It Means (Viking 2025).

Richard Susskind, How to Think About AI: A Guide for the Perplexed (Oxford University Press 2025).

Max Tegmark, Life 3.0: Being Human in the Age of Artificial Intelligence (Vintage Books 2017).

Shannon Vallor, The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking (Oxford University Press 2024).

Amy Webb & Andrew Hessel, The Genesis Machine: Our Quest to Rewrite Life in the Age of Synthetic Biology (Hachette Book Group 2022).

J. Craig Wheeler, The Path to Singularity: How Technology Will Challenge the Future of Humanity (Prometheus Books 2024).

Additional Resources

The latest resource from Ethics Unwrapped is a book, Behavioral Ethics in Practice: Why We Sometimes Make the Wrong Decisions, written by Cara Biasucci and Robert Prentice. This accessible book is amply footnoted with behavioral ethics studies and associated research. It also includes suggestions at the end of each chapter for related Ethics Unwrapped videos and case studies. Some instructors use this resource to educate themselves, while others use it in lieu of (or in addition to) a textbook.

Cara Biasucci also recently wrote a chapter on integrating Ethics Unwrapped in higher education, which can be found in the latest edition of Teaching Ethics: Instructional Models, Methods and Modalities for University Studies. The chapter includes examples of how Ethics Unwrapped is used at various universities.

The most recent article written by Cara Biasucci and Robert Prentice describes the basics of behavioral ethics and introduces Ethics Unwrapped videos and supporting materials along with teaching examples. It also includes data on the efficacy of Ethics Unwrapped for improving ethics pedagogy across disciplines. Published in Journal of Business Law and Ethics Pedagogy (Vol. 1, August 2018), it can be downloaded here: “Teaching Behavioral Ethics (Using “Ethics Unwrapped” Videos and Educational Materials).”

An article written by Ethics Unwrapped authors Minette Drumwright, Robert Prentice, and Cara Biasucci introduces key concepts in behavioral ethics and approaches to effective ethics instruction—including sample classroom assignments. Published in the Decision Sciences Journal of Innovative Education, it can be downloaded here: “Behavioral Ethics and Teaching Ethical Decision Making.”

A detailed article written by Robert Prentice, with extensive resources for teaching behavioral ethics, was published in Journal of Legal Studies Education and can be downloaded here: “Teaching Behavioral Ethics.”

Another article by Robert Prentice, discussing how behavioral ethics can improve the ethicality of human decision-making, was published in the Notre Dame Journal of Law, Ethics & Public Policy. It can be downloaded here: “Behavioral Ethics: Can It Help Lawyers (And Others) Be Their Best Selves?”

A dated (but still serviceable) introductory article about teaching behavioral ethics can be accessed through Google Scholar by searching: Prentice, Robert A. 2004. “Teaching Ethics, Heuristics, and Biases.” Journal of Business Ethics Education 1 (1): 57-74.

