Causing Harm explores the different types of harm that may be caused to people or groups and the potential reasons we may have for justifying these harms.
1. The students interviewed for this video disagree about which type of harm is the worst: physical, emotional, psychological, financial, or reputational. Which do you think is the worst, and why?
2. Can you think of an example of when you have been harmed? Was this harm ethically justifiable? Was it not? Explain how.
3. The video claims that we should not cause harm to others unless we are willing to suffer the same harm ourselves. Do you agree?
4. In what situation(s) would you knowingly cause harm? How would the benefits outweigh the harm?
5. Do you think an institution such as a business or government can be held accountable for causing harm in the same way an individual can be? Support your position.
6. Are you supportive of governments or institutions taking actions that may cause harm to some but would likely benefit many? How is this justified? Why is it permissible?
7. Can you think of other instances when taking such actions is not ethical?
In 2013, Edward Snowden, a computer expert and former CIA systems administrator, released confidential government documents to the press, revealing the existence of government surveillance programs. According to many legal experts, and to the U.S. government, his actions violated the Espionage Act of 1917, which identifies the leaking of state secrets as an act of treason. Yet despite the fact that he broke the law, Snowden argued that he had a moral obligation to act. He justified his “whistleblowing” by stating that he had a duty “to inform the public as to that which is done in their name and that which is done against them.” According to Snowden, the government’s violation of privacy had to be exposed regardless of legality.
Many agreed with Snowden. Jesselyn Radack of the Government Accountability Project defended his actions as ethical, arguing that he acted from a sense of public good. Radack said, “Snowden may have violated a secrecy agreement, which is not a loyalty oath but a contract, and a less important one than the social contract a democracy has with its citizenry.” Others argued that even if he was legally culpable, he was not ethically culpable because the law itself was unjust and unconstitutional.
The Attorney General of the United States, Eric Holder, did not find Snowden’s rationale convincing. Holder stated, “He broke the law. He caused harm to our national security and I think that he has to be held accountable for his actions.”
Journalists were conflicted about the ethical implications of Snowden’s actions. The editorial board of The New York Times stated, “He may have committed a crime…but he has done his country a great service.” In an op-ed in the same newspaper, Ed Morrissey argued that Snowden was not a hero but a criminal: “by leaking information about the behavior rather than reporting it through legal channels, Snowden chose to break the law.” According to Morrissey, Snowden should be prosecuted for his actions, because the law he broke is “intended to keep legitimate national-security data and assets safe from our enemies; it is intended to keep Americans safe.”
1. What values are in conflict in this case? What harm did Snowden cause? What benefits did his actions bring?
2. Do you agree that Snowden’s actions were ethically justified even if legally prohibited? Why or why not? Make an argument by weighing the competing values in this case.
3. If you were in Snowden’s position, what would you have done and why?
4. Would you change your position if you knew that Snowden’s leak would lead to a loss of life among CIA operatives? What about if it would save lives?
5. Is there a circumstance in which you think whistleblowing would be ethically ideal? How about ethically prohibited?
Whistle-Blowers Deserve Protection, Not Prison
Eric Holder: If Edward Snowden were open to plea, we’d talk
Edward Snowden: Whistleblower
Edward Snowden Broke the Law and Should Be Prosecuted
In the context of health care in the United States, the value of autonomy and liberty was cogently expressed by Justice Benjamin Cardozo in Schloendorff v. Society of New York Hospital (1914), when he wrote, “Every human being of adult years and sound mind has a right to determine what shall be done with his own body.” This case established the principle of informed consent, which has become central to the ethics of modern medical practice. However, a number of events since 1914 have illustrated how the autonomy of patients may be overridden. In Buck v. Bell (1927), Justice Oliver Wendell Holmes wrote that the involuntary sterilization of “mental defectives,” then a widespread practice in the U.S., was justified, stating, “Three generations of imbeciles are enough.” Another example, the Tuskegee Syphilis Study, in which African-American males were denied life-saving treatment for syphilis as part of a scientific study of the natural course of the disease, began in 1932 and was not stopped until 1972.
Providing advice related to topics of bioethics, the President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research stated, “Informed consent is rooted in the fundamental recognition—reflected in the legal presumption of competency—that adults are entitled to accept or reject health care interventions on the basis of their own personal values and in furtherance of their own personal goals.” But what of circumstances in which patients are deemed incompetent through judicial proceedings, and someone else is designated to make decisions on their behalf?
Consider the following case:
A middle-aged man was involuntarily committed to a state psychiatric hospital because he was considered dangerous to others due to severe paranoid thinking. His violent behavior was controlled only by injectable medications, which were initially administered against his will. He had been declared mentally incompetent, and the decisions to approve the use of psychotropic medications were made by his adult son, who had been awarded guardianship and who held medical power of attorney.
While the medications suppressed the patient’s violent agitation, they made little impact on his paranoid symptoms. His chances of being able to return to his home community appeared remote. However, a new drug was introduced into the hospital formulary which, if used with this patient, offered the strong possibility that he could return home. The drug, however, was only available in a pill form, and the patient’s paranoia included fears that others would try to poison him. The suggestion was made to grind up the pill and surreptitiously administer the drug by mixing it in pudding.
Hospital staff checked with the patient’s son and obtained informed consent from him. The “personal values and…personal goals” of the son and other family members were seen to substitute for those of the mentally incompetent patient—and these goals included the desire for the patient to live outside of an institution and close to loved ones in the community. This was the explicitly stated rationale for the son’s agreeing to the proposal to hide the medication in food. However, staff were uncomfortable about deceiving the patient, despite having obtained informed consent from the patient’s guardian.
1. In the case study above, do you think the ends justify the means? In other words, does the goal of discharging the patient from an institutional setting into normal community living justify deceiving him? Explain your reasoning.
2. Do you think it is ever ethically permissible to deceive clients? Under what circumstances? Why or why not?
3. To what degree should family members or legal guardians have full capacity to make decisions or give consent on behalf of those under their care? Explain.
4. Do you think severely mentally ill people retain any rights “to determine what shall be done with [their] own [bodies]?” Why or why not?
5. Are there risks in surreptitiously medicating a paranoid patient? Would this confirm the patient’s delusions of being “poisoned” by others or escalate his resistance to treatment? Are these risks worth taking in view of the potential to dramatically improve his mental functioning and reduce his suffering?
6. Since psychiatric patients have the right to treatment, does the strategy to surreptitiously administer medications serve this goal? Do you think this is ethically justifiable? Why or why not?
7. Does the history of the forcible treatments of persons with disabilities and other powerless populations affect how you view this case? Explain.
The Nazi Doctors: Medical Killing and the Psychology of Genocide
Medical Apartheid: The Dark History of Medical Experimentation on Black Americans from Colonial Times to the Present
Imbeciles: The Supreme Court, American Eugenics, and the Sterilization of Carrie Buck
Texas Administrative Code, Chapter 404, Subchapter E: Rights of persons receiving mental health services
A history and a theory of informed consent
Enduring and emerging challenges of informed consent
Chapter “Consent to medical care: the importance of fiduciary context” in The ethics of consent: theory and practice
CASES; Advice rejoins consent
Making health care decisions: The ethical and legal implications of informed consent in the patient-practitioner relationship
In many ways, social media platforms have created great benefits for our societies by expanding and diversifying the ways people communicate with each other, and yet these platforms also have the power to cause harm. Posting hurtful messages about other people is a form of harassment known as cyberbullying. Some acts of cyberbullying may not only be defamatory, but may also lead to serious consequences. In 2010, Rutgers University student Tyler Clementi jumped to his death a few days after his roommate used a webcam to observe and tweet about Tyler’s sexual encounter with another man. Jane Clementi, Tyler’s mother, stated, “In this digital world, we need to teach our youngsters that their actions have consequences, that their words have real power to hurt or to help. They must be encouraged to choose to build people up and not tear them down.”
In 2013, Idalia Hernández Ramos, a middle school teacher in Mexico, was a victim of cyber harassment. After discovering that one of her students had tweeted that the teacher was a “bitch” and a “whore,” Hernández confronted the girl during a lesson on social media etiquette. When Hernández asked why she would post such hurtful messages that could harm the teacher’s reputation, the student meekly replied that she had been upset at the time. The teacher responded that she was very upset by the student’s actions. Demanding a public apology in front of the class, Hernández stated that she would not allow “young brats” to call her those names. Hernández uploaded a video of this confrontation online, where it attracted much attention.
While Hernández was subject to cyber harassment, some felt she went too far by confronting the student in the classroom and posting the video for the public to see, raising concerns over the privacy and rights of the student. Sameer Hinduja, who writes for the Cyberbullying Research Center, notes, “We do need to remain gracious and understanding towards teens when they demonstrate immaturity.” Publicly confronting a teenager for venting her anger may also infringe upon her basic rights to freedom of speech and expression. Yet, as Hinduja explains, teacher and student were both perpetrators and victims of cyber harassment. The concerns of both parties must be considered, and, as Hinduja wrote, “The worth of one’s dignity should not be on a sliding scale depending on how old you are.”
1. In trying to teach the student a lesson about taking responsibility for her actions, did the teacher go too far and become a bully? Why or why not? Does she deserve to be fired for her actions?
2. What punishment does the student deserve? Why?
3. Who is the victim in this case? The teacher or the student? Was one victimized more than the other? Explain.
4. Do victims have the right to defend themselves against bullies? What if they go through the proper channels to report bullying and it doesn’t stop?
5. How should compassion play a role in judging others’ actions?
6. How are factors like age and gender used to “excuse” unethical behavior (e.g., “Boys will be boys” or “She’s too young/old to understand that what she did is wrong”)? Can you think of any other factors that are sometimes used to excuse unethical behavior?
7. How is cyberbullying similar or different from face-to-face bullying? Is one more harmful than the other? Explain.
8. Do you know anyone who has been the victim of cyber-bullying? What types of harm did this person experience?
Teacher suspended after giving student a Twitter lesson
Pros and Cons of Social Media in the Classroom
How to Use Twitter in the Classroom
Twitter is Turning Into a Cyberbullying Playground
Can Social Media and School Policies be “Friends”?
What Are the Free Expression Rights of Students In Public Schools Under the First Amendment?
Teacher Shames Student in Classroom After Student Bullies Teacher on Twitter
The Therac-25 was a state-of-the-art linear accelerator developed by the Canadian company Atomic Energy of Canada Limited (AECL) and the French company CGR to provide radiation treatment to cancer patients. It was the most computerized and sophisticated radiation therapy machine of its time. With the aid of an onboard computer, the device could select among multiple treatment table positions and set the type and strength of the energy beam chosen by the operating technician. AECL sold eleven Therac-25 machines, which were used in the United States and Canada beginning in 1982.
Unfortunately, between 1985 and 1987 six accidents occurred in which patients received massive radiation overdoses, several of them fatal (Leveson & Turner 1993). Patients reported being “burned by the machine,” and some technicians passed these reports along, but the company considered such overdoses impossible. Reports to the manufacturer resulted in inadequate repairs to the system and in assurances that the machines were safe. Lawsuits were filed, yet at first no thorough investigations took place. The machines were recalled in 1987 for an extensive redesign of safety features, software, and mechanical interlocks. The Food and Drug Administration (FDA) later found that the company lacked an adequate reporting structure for following up on reported accidents.
The Therac-25 had two predecessors, the Therac-6 and the Therac-20, which were built from the CGR company’s earlier radiation units, the Neptune and the Sagittaire. The Therac-6 and Therac-20 included a microcomputer that made patient data entry easier, but both units could operate without the onboard computer. They had built-in safety interlocks, positioning guides, and mechanical features that prevented radiation exposure if there was a positioning problem with the patient or with the components of the machine. There was some “base duplication” of software: code from the Therac-20 carried over to the Therac-25. The Therac-6 and Therac-20 were clinically tested machines with an excellent safety record, but they relied primarily on hardware for safety controls, whereas the Therac-25 relied primarily on software.
On February 6, 1987, the FDA ordered all machines shut down until permanent repairs could be made. Although AECL was quick to state that a “fix” was in place and the machines were now safe, that was not the case. Leveson and Turner (1993) later compiled public information from AECL, the FDA, and various regulatory agencies and concluded that record keeping during the software’s design had been inadequate. The software was insufficiently tested, and “patches” from earlier versions of the machine were reused. The assumption that the problems had been detected and corrected was premature and unproven. Furthermore, AECL had great difficulty reproducing the conditions under which the failures had occurred in the clinics. After these incidents, the FDA restructured its reporting requirements for radiation equipment.
As computers become more and more ubiquitous and control increasingly significant and complex systems, people are exposed to increasing harms and risks. The issue of accountability arises when a community expects its agents to stand up for the quality of their work. Nissenbaum (1994) argues that responsibility in our computerized society is systematically undermined, and this is a disservice to the community. This concern has grown with the number of critical life services controlled by computer systems in the governmental, airline, and medical arenas.
According to Nissenbaum, there are four barriers to accountability: the problem of many hands, “bugs” in the system, the computer as a scapegoat, and ownership without liability. The problem of many hands relates to the fact that many groups of people (programmers, engineers, etc.) at various levels of a company are typically involved in the creation of a computer program and have input into the final product. When something goes wrong, no one individual can be clearly held responsible, and it is easy for each person involved to rationalize that he or she is not responsible for the final outcome because of the small role played. This occurred with the Therac-25, which had two prominent software errors, a failed microswitch, and fewer safety features than earlier versions of the device. The second barrier, bugs that cause errors only under certain conditions, has been used as a cover for careless programming, lack of testing, and a lack of safety features built into the system, as in the Therac-25 accidents. The fact that computers “always have problems with their programming” cannot be used as an excuse for overconfidence in a product, unclear or ambiguous error messages, or improper testing of individual components of the system. Treating the computer itself as a scapegoat, blaming “the machine” rather than the people who built and deployed it, deflects responsibility in a similar way. A final obstacle is ownership of proprietary software and an unwillingness to share “trade secrets” with investigators whose job it is to protect the public (Nissenbaum 1994).
The Therac-25 incident involved what has been called one of the worst computer bugs in history (Lynch 2017), though it was largely a matter of overall design issues rather than a specific coding error. Therac-25 is a glaring example of what can go wrong in a society that is heavily dependent on technology.
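One of the documented Therac-25 failure modes was a race condition: if the operator edited the treatment parameters quickly, the software task that fired the beam could read the shared machine state while the edit was only partially applied. The sketch below is a hypothetical, heavily simplified Python illustration of that class of bug, not the actual Therac-25 code; all class names, modes, and values are invented for illustration.

```python
# Hypothetical sketch of a Therac-25-style race condition. The operator's
# mode change involves two separate writes to shared state; if the
# treatment task runs between them, it sees an inconsistent configuration.
# Names and values are illustrative, not taken from the real machine.

class MachineState:
    def __init__(self):
        # X-ray mode uses a very high beam current (with a flattening
        # target in place to spread the beam safely).
        self.mode = "xray"
        self.beam_current = "high"

def fire_beam(state):
    """Treatment task: acts on whatever state it sees when it runs."""
    if state.mode == "electron" and state.beam_current == "high":
        # Electron mode combined with X-ray-level current: an overdose.
        return "OVERDOSE"
    return "ok"

def edit_then_fire(fire_between_writes):
    """Operator switches from X-ray to electron mode via two writes.

    If fire_between_writes is True, the treatment task runs in the gap
    between the writes (the "fast edit" scenario); otherwise it runs
    only after both writes complete.
    """
    state = MachineState()
    state.mode = "electron"            # write 1: mode updated
    if fire_between_writes:
        result = fire_beam(state)      # task runs too early
    state.beam_current = "low"         # write 2: current lowered
    if not fire_between_writes:
        result = fire_beam(state)      # task runs after a complete edit
    return result

print(edit_then_fire(False))  # "ok": both writes finished before firing
print(edit_then_fire(True))   # "OVERDOSE": the race window was hit
```

The hazard is not in either write alone but in the window between them, which is why this kind of defect can pass testing for years and then surface only when an operator happens to type fast enough, as reportedly occurred with experienced Therac-25 operators.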
1. Who should be responsible for the errors in a medical device?
2. What moral responsibility do creators of software have for the adverse consequences that flow from flaws in that software?
3. What steps are creators of software morally required to take to minimize the risk that they will sell flawed software with dangerous consequences?
4. What should constitute FDA approval of a medical device? Should the benefit outweigh the harm? Should the device be 100% safe prior to approval? Should FDA approval guidelines take into consideration novel therapies for protected populations such as children or patients with rare conditions?
5. Should updated medical devices be reviewed by the FDA as new devices or as improvements on an older design? If reviewed as improvements, at what point can or should a device be subject to a full review process? If reviewed as novel devices, how might this affect the production of modified or improved devices and the companies that produce them?
Gotterbarn, Donald, “Software Engineering Ethics,” Encyclopedia of Software Engineering (2002), https://onlinelibrary.wiley.com/doi/abs/10.1002/0471028959.sof314.
Leveson, Nancy & Turner, Clark, “An Investigation of the Therac-25 Accidents,” Computer 26:7, p. 18 (July 1993), https://web.stanford.edu/class/cs240/old/sp2014/readings/therac-25.pdf
Leveson, Nancy, Medical Devices: The Therac-25 (1995), http://sunnyday.mit.edu/papers/therac.pdf.
Lynch, Jamie, “Therac-25 Causes Radiation Overdoses,” Bugsnag Blog (2017) https://www.bugsnag.com/blog/bug-day-race-condition-therac-25
Nissenbaum, Helen, “Computing and Accountability,” Communications of the ACM, 37:1, p. 73 (1994).
This video introduces the general ethics concepts of harm and justification. Causing Harm explores the different types of harm that may be caused to people or groups and the potential reasons we may have for justifying these harms.
To gain a better understanding of when and how harms can be considered justified, watch Systematic Moral Analysis, which explores the moral dimensions we face when making ethical decisions.
The case studies covered on this page explore different types of harm that can be caused, at varying scales. “Edward Snowden: Traitor or Hero?” raises questions over whether or not Edward Snowden’s release of confidential government documents was ethically justifiable. “Patient Autonomy & Informed Consent” explores the difficult decisions involved in taking care of a patient who has been deemed legally incompetent and refuses certain types of treatment. “Cyber Harassment” examines the case of a teacher who confronts a student in class and posts a video of the confrontation online in response to that student defaming the teacher on social media. For a case study about causing harm to the environment, read “Climate Change & the Paris Deal.”
Terms defined in our ethics glossary that are related to the video and case studies include: diffusion of responsibility, framing, incrementalism, justice, moral agent, self-serving bias, subject of moral worth, tangible & abstract, and values.
For more information on concepts covered in this and other videos, as well as activities to help think through these concepts, see Deni Elliott’s workbook Ethical Challenges: Building an Ethics Toolkit, which may be downloaded for free as a PDF. This workbook explores what ethics is and what it means to be ethical, offering readers a variety of exercises to identify their own values and reason through ethical conflicts. Discussion and exercises regarding harms and justifications may be found beginning on page 14. More information and activities on justified harm can be found in the sections that address the concept of systematic moral analysis, pages 35-44.
Harrosh, Shlomit. 2012. “Identifying Harms.” Bioethics 26 (9): 493-498.
Rodin, David. 2011. “Justifying Harm.” Ethics 122 (1): 74-110.
Smilansky, Saul. 2004. “Terrorism, Justification, and Illusion.” Ethics 114 (4): 790-805.
Transcript of Narration
Written and Narrated by
Deni Elliott, Ph.D., M.A.
Department of Journalism & Media Studies
College of Arts and Sciences
The University of South Florida at St. Petersburg
“How can I harm thee? Let me count the ways. Physically. Psychologically or emotionally. Financially. And, I can cause you reputational harm.
Harms rarely come isolated from one another. So, let’s review the categories:
Physical harm is the easiest. It can be short-term, like, oh, being shoved out of the way and into a mud puddle by someone hurrying down the street. Or it can be long-term, like being injured in a car accident by a drunken driver.
Psychological and emotional harm may not carry any visible scars. But, they are true harms. Emotional harm is the short-term version. When we feel offended or embarrassed or humiliated, it may be due to emotional harm.
Psychological harm makes us feel unsure of our worth or lose confidence in ourselves; it can result from a trauma and haunt us from that point on. The tentative child or the volatile, explosive adult may be acting from a place of psychological harm.
Financial harm is important too. If I take advantage of your being naive about investments and convince you to put your life savings into some get-rich-quick scheme that fails, I’ve caused you harm.
Last of all is reputational harm. This kind of harm has become more prevalent because of the wide reach of the Internet. Cyber-bullying has led teenagers to commit suicide; false or mean-spirited reviews have led to professional ruin for individuals and for businesses.
Now causing harm can be justified, but the harm-causing action must first meet one of the following conditions:
Number one: The person harmed gave consent. Think of someone who agrees to go through a painful surgery so that he will be healthy again. That’s consent to cause harm.
Number two: The harm caused was part of the harmer’s role-related responsibility. Sometimes causing justified harms is just part of the job. If a parent prevents her teenager from hanging out with friends until homework is done, she is fulfilling her role-related responsibility, no matter how much anguish she might cause her child at the moment.
Number three: A harm was caused to prevent an even greater harm to the community as a whole. For example, a government collects taxes, causing financial harm to some citizens, because without taxes the government could not provide services that benefit all citizens.
After meeting one of these conditions, an act of justified harm must also pass a publicity test. The publicity test means that we’re willing for the exception to the general rule, “cause no harm,” to be widely and publicly known, and applied in all similar situations. The harm-causer in this case must also be willing to acknowledge that she or he might be the one hurt in the future by the same exception.
So, maybe I can harm you in a variety of ways. But, being the ethical person that I strive to be, I won’t harm you without justification. And, I won’t harm you unless I am willing to explain to you and the public at large why I am doing so. And, I won’t harm you without believing that you and everyone else is equally justified in causing the same kind of harm, even to me.”