Ethics Unwrapped Blog

Birthing Vaccine Skepticism

Andrew Wakefield, the child of two physicians, graduated from medical school in England in 1981 and embarked upon a research career in gastroenterology. He was particularly interested in the causes of Crohn’s disease, an inflammatory bowel disease. After a fellowship at the University of Toronto, Wakefield became a lab researcher at the Royal Free Hospital in London in 1988.

Wakefield developed a theory that the measles virus caused Crohn’s disease and formed an “Inflammatory Bowel Disease Study Group.” In May of 1995, the distraught mother of an autistic child phoned Wakefield and suggested that her son’s condition had been caused by the measles virus in the MMR (measles, mumps, rubella) vaccine. Of interest to Wakefield was the fact that the child also apparently suffered from symptoms consistent with inflammatory bowel disease, which the mother suspected were also caused by the MMR vaccine. Wakefield and his team began to study the mother’s hypothesis. From July 1996 to February 1997, the team solicited referral letters from doctors of children who appeared to have both autism and inflammatory bowel issues. Ultimately, the team studied twelve children.

On February 28, 1998, the “team” published an article in the prestigious medical journal The Lancet. The article listed twelve co-authors but was actually written solely by Wakefield. It proposed a potential link between the MMR vaccine and a bowel-brain “syndrome” causing both “regressive autism” (in which children lost abilities, such as speech, that they had once had) and Crohn’s disease. Even before publication of the article, Wakefield had told the press that the results of his research “clearly confirm our suspicions.” At a press conference following publication, Wakefield said: “I cannot support the continued use of the three vaccines given together. My concerns are that one more case of this is too many.” The article attracted much publicity and had a significant effect on the public’s views regarding vaccination. What was unknown at that time included:

  • On February 19, 1996, Wakefield had agreed to be a well-paid consultant to attorney Richard Barr, who had been awarded a contract by the British Legal Aid Board to explore a potential class action lawsuit over the MMR vaccine.
  • Barr and Wakefield had developed a list of things that Wakefield’s research needed to show in order to be helpful to Barr’s lawsuit even before the study began.
  • Barr largely funded Wakefield’s research project using government funds allocated to the lawsuit.
  • Many of the twelve children in the study had ties to Barr or to anti-vaccine groups; they had gone to the hospital with their parents already intent upon suing the vaccine makers.
  • To gather information for the study, Wakefield and his team performed endoscopies, a “high risk” procedure, on the twelve children, even though many of them had no medical need for it. Wakefield did not disclose that his team lacked ethics committee approval to perform these risky, unnecessary procedures.
  • In June 1996, Wakefield filed patent applications for:
    • A single measles vaccine
    • A treatment for inflammatory bowel disease, and
    • A treatment for autism
  • Members of Wakefield’s research team often spoke giddily of the Nobel Prize they would win and Wakefield himself aspired to be the greatest gastroenterologist ever.

Wakefield had previously formed companies (in 1991, 1993, and 1994) to commercially exploit his scientific discoveries, but none had succeeded. Nonetheless, soon after publication of The Lancet article, Wakefield met with potential investors to form a company to sell products (e.g., an alternative measles vaccine) that he planned to develop in this area. His revenue and profit projections were aggressively optimistic and depended largely on Barr’s litigation succeeding. He negotiated to give his medical school a share of his projected profits in return for certain benefits, including the title of “professor” for himself. The school refused to grant him that title.

In October of 1998, Barr filed the first court claims. Wakefield was to be Barr’s primary expert witness and pretended to be an independent scientist. In 1999, when shortcomings in Wakefield’s study came to light (very small number of subjects, subjects not chosen randomly, no comparison group of non-autistic children, heavy reliance on parents’ memories and beliefs, etc.), his university asked him to replicate his research results with a “gold standard” scientific study. Wakefield readily agreed to do so, but never even began such a study. Ultimately, he refused additional replication requests.

In 2000, Wakefield testified in Washington, D.C. before a congressional committee regarding the dangers of the MMR vaccine. Another witness was John O’Leary, an Irish pathologist. Neither disclosed that they were business partners, nor that O’Leary was also being supported financially by lawyer Barr. By October 2003, the flaws in Wakefield’s study were sufficiently apparent that Barr’s MMR lawsuit was dismissed for lack of evidence. The government money stopped flowing, but Barr and his staff had already received $51 million in current U.S. dollars, and Wakefield had received $846,000 in today’s dollars, plus expenses (roughly eight times his medical school salary).

Soon thereafter, journalist Brian Deer — whose book The Doctor Who Fooled the World provides virtually all the material for this case study — began thoroughly investigating Wakefield and his research. As litigation involving Wakefield and his colleagues proceeded, documents related to Wakefield’s study in The Lancet came to light, showing that the article inaccurately described the conditions of every single one of the twelve subjects. Some were described as having autism when medical records showed that they did not. Some were described as having Crohn’s disease, when medical records showed that they did not. Others were described as having developed autism soon after having received the MMR vaccine, while medical records showed that they had manifested signs of autism before being vaccinated. And so on.

Although the Uniform Requirements for Manuscripts Submitted to Biomedical Journals clearly require that third-party funding and expert witness work be disclosed as conflicts of interest, Wakefield said that he did not agree. On May 3, 2004, 10 of the 12 co-authors of The Lancet article repudiated the paper’s conclusions. Ultimately, after a long hearing, the General Medical Council revoked Wakefield’s medical registration.

None of this prevented Wakefield from becoming the darling of the anti-vaccine set and creating significant vaccine hesitancy regarding the MMR vaccine (and others). The result has been a rise in measles deaths around the world. As of 2022, Wakefield maintains his discredited theory and paints himself as the victim in all this.

 


Wells Fargo and Moral Emotions

On September 8, 2016, Wells Fargo, one of the nation’s oldest and largest banks, admitted in a settlement with regulators that it had created as many as two million accounts for customers without their permission. This was fraud, pure and simple. It seems to have been caused by a culture in the bank that made unreasonable demands upon employees. Wells Fargo agreed to pay $185 million in fines and penalties.

Employees had been urged to “cross-sell.” If a customer had one type of account with Wells Fargo, then, top brass reasoned, they should have several. Employees were strongly incentivized, through both positive and negative means, to sell as many different types of accounts to customers as possible. “Eight is great” was a motto. But does the average person need eight financial products from a single bank? As things developed, when employees were unable to make such sales, they simply made the accounts up, charging customers whether they had approved the accounts or not. The employees used customers’ personal identification numbers, without their knowledge, to enroll them in various products. Victims were frequently elderly or Spanish speakers.

Matthew Castro, whose father was born in Colombia, felt so bad about pushing sham accounts onto Latino customers that he tried to lessen his guilt by doing volunteer work. Other employees were quoted as saying “it’s beyond embarrassing to admit I am a current employee these days.”

Still other employees were moved to call company hotlines or otherwise blow the whistle, but they were simply ignored or oftentimes punished, frequently by being fired. One employee who sued to challenge retaliation against him was “uncomfortable” and “unsettled” by the practices he saw around him, which prompted him to speak out. “This is a fraud, I cannot be a part of that,” the whistleblower said.

Early prognostications were that CEO John Stumpf would not lose his job over the fiasco. However, as time went on and investigations continued, the forms and amount of wrongdoing seemed to grow and grow. Evidence surfaced that the bank improperly changed the terms of mortgage loans, signed customers up for unauthorized life insurance policies, overcharged small businesses for credit-card processing, and on and on.

In September of 2016, CEO Stumpf appeared before Congress and was savaged by Senators and Representatives of both parties, notwithstanding his agreement to forfeit $41 million in pay. The members of Congress denounced Wells Fargo’s actions as “theft,” “a criminal enterprise,” and an “outrage.” Stumpf simultaneously took “full responsibility,” yet blamed the fraud on ethical lapses of low-level bankers and tellers. He had, he said, led the company with courage. Nonetheless, by October of 2016 Stumpf had been forced into retirement and replaced by Tim Sloan.

Over the next several months, more and more allegations of wrongdoing arose. The bank had illegally repossessed cars from military veterans. It had modified mortgages without customer authorization. It had charged 570,000 customers for auto insurance they did not need. It had ripped off small businesses by charging excessive credit card fees. The total number of fake accounts rose from two million to 3.5 million. The bank also wrongly fined 110,000 mortgage clients for missing a deadline even though the party at fault for the delay was Wells Fargo itself.

At its April 2017 annual shareholders meeting, the firm faced levels of dissent that a Georgetown business school professor, Sandeep Dahiya, called “highly unusual.”

By September 2017, Wells Fargo had paid $414 million in refunds and settlements and incurred hundreds of millions more in attorneys’ and other fees. This included $108 million paid to the Department of Veterans Affairs for having overcharged military veterans on mortgage refinancing.

In October 2017, new Wells Fargo CEO Tim Sloan was told by Massachusetts Senator Elizabeth Warren, a Democrat, that he should be fired: “You enabled this fake-account scandal. You got rich off it, and then you tried to cover it up.” Republicans were equally harsh. Senator John Kennedy of Louisiana said: “I’m not against big. With all due respect, I’m against dumb.”

Sloan was still CEO when the company held its annual shareholders meeting in April 2018. Shareholders and protestors alike were extremely angry with Wells Fargo. By then, the bank had paid an additional $1 billion fine for abuses in mortgage and auto lending. And, in an unprecedented move, the Federal Reserve Board had ordered the bank to cap its asset growth. Disgust with Wells Fargo’s practices caused the American Federation of Teachers to cut ties with the bank. Some whistleblowers resisted the bank’s early attempts at quiet settlements, holding out for a public admission of wrongdoing.

In May 2018, yet another shoe dropped. Wells Fargo’s share price dropped on news that the bank’s employees improperly altered documents of its corporate customers in an attempt to comply with regulatory directions related to money laundering rules.

Ultimately, Wells Fargo removed its cross-selling sales incentives. CEO Sloan, having been informed that lower-level employees were suffering stress, panic attacks, and other symptoms, apologized for the fact that management had initially blamed them for the results of the toxic corporate culture, admitting that cultural weaknesses had caused a major morale problem.


The Central Park Five

In 1989, a young woman jogging in New York’s Central Park was raped and beaten nearly to death.  This high-profile attack upon a white investment banker in the heart of the city was quickly called the “crime of the century.”  There was intense public pressure to solve the case and, indeed, the police quickly arrested five young men (14 to 16 years old) who were black and Latino.  They had been part of a larger group of young men harassing passersby in another part of the park.

After intense interrogations ranging from 14 to 30 hours in length, four of the five confessed to the crime.  The five were charged with the attack.  Importantly, (a) the boys soon recanted their confessions, which they blamed on police coercion, (b) no physical evidence linked the young men to the crime, (c) no physical evidence indicated that there was more than one attacker, (d) the semen found in the victim did not match any of the young men, and (e) the four confessions were inconsistent with each other and with the physical evidence from the crime scene.  Nonetheless, the young men were convicted and sent to jail.  Real estate developer Donald Trump called for their swift execution in a full-page newspaper ad.

Thirteen years later, Matias Reyes, who was serving a life sentence for murder, confessed to the crime.  Indeed, his DNA matched the semen recovered from the victim.  His was the only semen recovered from the victim.  The attack on the jogger was similar in M.O. to his other rapes, none of which involved any other perpetrator.

Eventually, the Central Park 5 settled a wrongful conviction lawsuit with the City of New York for $41 million.

However, the indisputable and overwhelming evidence of their innocence did not change the minds of:

• The lead prosecutor, who claimed that the five young men were indeed still guilty and that Reyes was simply an additional perpetrator—an “unindicted co-ejaculator.”

• The head detective, who said: “This lunatic [Reyes] concocts this wild story and these people fell for it.”

• Donald Trump, who in 2013 tweeted regarding Ken Burns’ award-winning documentary on the Central Park 5’s innocence: “The Central Park Five documentary was a one-sided piece of garbage that didn’t explain the horrific crimes of these young men while in park.”

• The second-chair lawyer in the prosecution, who in 2018 still found the taped confessions “pretty compelling” notwithstanding their inconsistencies and the fact that, of the first 325 DNA exonerations in the U.S., 27% involved false confessions.


Meet Me at Starbucks

On April 12, 2018, at a Starbucks location in Philadelphia, two black men, Rashon Nelson and Donte Robinson, were waiting for a friend, Andrew Yaffe. Nelson and Robinson were entrepreneurs and were going to discuss business investment opportunities with Yaffe, a white real estate developer. As they waited, an employee asked if she could help them. They said “no,” that they were just waiting for a business meeting. Then a manager told Nelson that he couldn’t use the restroom because he was not a paying customer.

Because the two men had not purchased anything yet, a store manager called police, even though Robinson had been a customer at the store for almost a decade and both men had used the store location for business meetings before. At least six Philadelphia Police Department officers arrived. The police officers did not ask the men any questions; they just demanded that they leave immediately. They declined. The police officers then proceeded to arrest the men for trespassing. As the arrest occurred, Mr. Yaffe arrived. He said: “Why would they be asked to leave? Does anyone else think this is ridiculous? It’s absolute discrimination.” The two men were taken out in handcuffs. They were taken to the police station, photographed, and fingerprinted. They were held for almost nine hours before being released from custody. Prosecutors decided that there was insufficient evidence to charge the men with a crime.

After a video of the arrest went viral, Starbucks CEO Kevin Johnson released a statement: “We apologize to the two individuals and our customers and are disappointed this led to an arrest. We take these matters seriously and clearly have more work to do when it comes to how we handle incidents in our stores. We are reviewing our policies and will continue to engage with the community and the police department to try to ensure these types of situations never happen in any of our stores.”

Johnson then announced that every company-owned Starbucks location in the nation would close on May 29, 2018, for “racial-bias education.” When one customer complained on Facebook that closing the stores because of just one incident seemed like overkill, Starbucks responded: “There are countless examples of implicit bias resulting in discrimination against people of color, both in and outside our stores. Addressing bias is crucial in ensuring that all our customers feel safe and welcome in our stores.” A similar complaint about closing thousands of stores because of the actions of a handful of employees prompted this response from Starbucks: “Our goal is to make our stores a safe and welcoming place for everyone, and we have failed. This training is crucial in making sure we meet our goal.”


Something Fishy at the Paralympics

Some of sports’ most inspiring and heart-warming stories have come from the Special Olympics (for athletes with intellectual disabilities) and the Paralympics (for athletes with intellectual impairments and visual or physical disabilities). The Paralympic games are held in parallel with the Olympics every four years and have become a very big deal.  Sports Illustrated reports that “Paralympic sport has grown into big business, with countries and sponsors pouring in millions of dollars to fund and promote athletes whose stories highlight the best of humanity.”[1] Australian Paralympians are sometimes provided “tens of thousands of dollars in government funding and other perks, including college scholarships, vehicles, and housing.”[2] In the 2020 Paralympics, U.S. athletes were slated to receive $37,500 for each gold medal they won.

Unfortunately, sometimes Paralympians cheat much like regular Olympians have been known to do—they take illicit drugs[3] or they drive their blood pressure up (“boosting”) to raise heart rate and improve performance.[4] And, unsurprisingly, the Russians systematically cheat in the Paralympics just as they do in the traditional Olympics.[5] But the most significant cheating involves gaming the International Paralympic Committee’s (IPC’s) classification system.

The IPC classifies the disabilities of competitors in order to provide a structure for competition. Fair competition thrives only if athletes have similar levels of disabilities, so athletes are grouped into classes based on “how much their impairment affects fundamental activities in each specific sport and discipline.”[6]

Obviously, by pretending to have a more serious disability than they actually do, athletes could convince officials to group them with athletes of lesser abilities.  And there is evidence that this has happened.  For example, reported wrongdoing includes[7]:

  • Swimmers tape their arms for days, removing the tape just before classification. Because of the taping, they are unable to fully extend their arms.
  • Athletes arrive for the classification in a wheelchair when they do not otherwise use wheelchairs, or wearing braces that they normally do not wear.
  • Athletes submerge themselves in cold water or roll in snow shortly before classification to worsen muscle tone.
  • Athletes intentionally perform below their ability (“tanking”) in assessment races.
  • Remarkably, even the shortening and removal of limbs has reportedly occurred.

The most infamous example of Paralympic cheating was by Spain’s basketball team at the 2000 Sydney Paralympics: none of the 12 players was mentally disabled as represented.[8] Recently several Paralympians, especially Para-swimmers, have claimed that cheating the classification system is “epidemic.”[9]

After the Spanish basketball team was finally punished in 2017, the IPC removed basketball as a Paralympic sport until officials could prove to the IPC’s satisfaction that they had the classification problem under control.[10]

And, after believable allegations of widespread classification cheating among Australian Paralympians,[11] Australia has launched an online course that is mandatory for all its Paralympic athletes. The course outlines the classification process and requirements for all staff, coaches and athletes, explains penalties for noncompliance, and trains everyone on ethical decision making.[12]

At this time, the 2020 Tokyo Paralympics, along with the 2020 Tokyo Olympics, have been postponed due to the COVID-19 pandemic. Whether or not the increased visibility (and awareness) of Paralympic cheating changes the way these games are played in the future remains to be seen.


The Astros’ Sign-Stealing Scandal

Major League Baseball (MLB) fosters an extremely competitive environment.  Tens of millions of dollars in salary (and endorsements) can hang in the balance, depending on whether a player performs well or poorly.  Likewise, hundreds of millions of dollars of value are at stake for the owners as teams vie for World Series glory.  Plus, fans, players and owners just want their team to win. And everyone hates to lose!

It is no surprise, then, that the history of big-time baseball is dotted with cheating scandals ranging from the Black Sox scandal of 1919 (“Say it ain’t so, Joe!”), to Gaylord Perry’s spitter, to the corked bats of Albert Belle and Sammy Sosa, to the widespread use of performance enhancing drugs (PEDs) in the 1990s and early 2000s.  Now, the Houston Astros have joined this inglorious list.

Catchers signal to pitchers which type of pitch to throw, typically by holding down a certain number of fingers on their non-gloved hand between their legs as they crouch behind the plate.  It is typically not as simple as just one finger for a fastball and two for a curve, but not a lot more complicated than that.

In September 2016, an Astros intern named Derek Vigoa gave a PowerPoint presentation to general manager Jeff Luhnow that featured an Excel-based application that was programmed with an algorithm. The algorithm was designed to (and could) decode the pitching signs that opposing teams’ catchers flashed to their pitchers. The Astros called it “Codebreaker.”  One Astros employee referred to the sign-stealing system that evolved as the “dark arts.”[1]

MLB rules allowed a runner standing on second base to steal signs and relay them to the batter, but the MLB rules strictly forbade using electronic means to decipher signs.  The Astros’ “Codebreaker” blatantly violated these rules.

According to Wall Street Journal writer Jared Diamond:

The way Codebreaker worked was simple:  Somebody would watch an in-game live feed and log the catcher’s signals into the spreadsheet, as well as the type of pitch that was actually thrown.  With that information, Codebreaker determined how the signs corresponded with different pitches.  Once decoded, that information would be communicated through intermediaries to a baserunner, who would relay them to the hitter.
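In essence, this is a frequency tally: count which pitch most often follows each logged sign sequence, then read new sequences off the resulting table. Here is a minimal sketch in Python of that described logic (hypothetical; Codebreaker itself was an Excel application whose internals were never published, and all names here are invented):

```python
from collections import Counter, defaultdict

def decode_signs(observations):
    """Map each observed sign sequence to the pitch it most often preceded.

    observations: pairs of (sign_sequence, pitch_thrown) logged while
    watching a live game feed, e.g. ("1-2-1", "fastball").
    """
    tallies = defaultdict(Counter)
    for signs, pitch in observations:
        tallies[signs][pitch] += 1
    # For each sign sequence, pick the pitch it most frequently preceded.
    return {signs: counts.most_common(1)[0][0]
            for signs, counts in tallies.items()}

# After only a handful of logged pitches, the mapping can be read off:
log = [("1-2-1", "fastball"), ("2-1-2", "curveball"), ("1-2-1", "fastball")]
print(decode_signs(log))  # {'1-2-1': 'fastball', '2-1-2': 'curveball'}
```

With enough logged pitches, a table like this becomes reliable enough to relay in real time, which is what made electronic logging so much more powerful than a lone runner on second base trying to decode signs by memory.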

Starting around June 2017, the system was embellished by Astros players.  They started watching a live game feed on a monitor near the dugout and then would bang on a trash can to communicate the coming pitch to the batter.  The “banging scheme” lasted through the 2017 World Series, which the Astros won over the Los Angeles Dodgers.[2]

This all occurred despite the fact that late in the 2017 season, MLB caught the Boston Red Sox relaying signs from their video room to an Apple watch worn by a trainer sitting in the dugout. MLB Commissioner Rob Manfred fined the Red Sox and issued a strong warning to all teams against illegal electronic sign-stealing.[3]

However, the Astros’ scheme lasted into the 2018 season, in away games as well as home games, despite the fact that other teams were very suspicious that the Astros were stealing signs. Other teams often changed their own signs several times a game in an attempt to thwart the Astros’ suspected sign-stealing.  An executive for an opposing team was quoted as saying “The whole industry knows they’ve been cheating their asses off for three or four years.  Everybody knew it.”[4]  Indeed, many teams had complained to MLB’s executives about the Astros’ cheating. Some suspect the cheating continued through the 2019 season, although others think not, and MLB found no convincing evidence of it.[5]

Sign-stealing might not seem like it would give a big advantage.  After all, even if a batter knows that a certain pitch is coming, he still has to hit it.  And it is not easy hitting a 100-mph fastball or a major league-caliber slider, even if you know it’s coming.  Nonetheless, the advantage is substantial.  According to the Washington Nationals’ pitching coach Paul Menhart, “It’s the worst feeling in the world stepping on that mound and having an idea that the hitter knows what’s coming.  It’s one of the most unnerving feelings.  You feel helpless.  You just get ticked off to the point where you lose total focus and confidence.”[6]  The Washington Nationals won the 2019 World Series over the favored Astros. They won, at least in part, by assuming that the Astros would be attempting to steal their signs, and putting into place elaborate countermeasures, including multiple sets of signs for each pitcher.[7]

There is no question that many of the Astros players were actively involved in the scheme.  The Astros manager, AJ Hinch, clearly knew about it.  There is substantial, though perhaps not airtight, evidence that General Manager (GM) Jeff Luhnow also knew of the scheme.  Carlos Beltran, a Hall-of-Fame-caliber player near the end of his 20-year playing career, was a leader in the scheme.  And bench coach Alex Cora was a primary instigator.  Owner Jim Crane appears not to have known of the dark arts being practiced by his club.[8]

The scandal became public on November 12, 2019, when former Astros’ pitcher Mike Fiers blew the whistle in an interview published in “The Athletic.”[9] Although some current MLB players praised Fiers for coming forward about the scandal, other players criticized him for violating baseball’s presumed “code of silence,” also called the “clubhouse code.”[10]  MLB then launched an investigation that granted the Astros players immunity in return for their fessing up. Commissioner Rob Manfred soon issued a nine-page report that found that most of the Astros players knew of the scheme and many participated in it. The report said that manager Hinch knew of the scheme and that GM Luhnow should have prevented it.[11]  Commissioner Manfred suspended both Hinch and Luhnow, who were quickly fired by Astros’ owner Crane.  MLB fined the Astros $5 million, and stripped the club of its first- and second-round draft picks in both 2020 and 2021.[12]

There was other fallout, too.  Beltran, who had just been hired as manager of the New York Mets, was fired.  Cora, who had subsequently become the manager of the Boston Red Sox, was also fired.  In late April 2020, Manfred found that the Red Sox had done some illicit sign-stealing in the 2018 season. Surprisingly, though, he concluded that manager Cora and most of the Red Sox players did not know about it. Manfred imposed a modest punishment on the Red Sox organization in the form of a lost draft pick. But again, none of the players who participated in the scheme were penalized.[13]

Manfred’s decision not to punish players was harshly criticized by many. He claimed that granting immunity in exchange for information was the best way to quickly discover the truth. This approach was praised by some,[14] but other observers were unconvinced.[15] He also argued that it was difficult to determine how much advantage the cheating scandal had given the Astros. However, many major league players – including the game’s best player, Mike Trout – suggested that they would love to know what pitch was coming.[16] Manfred also claimed that with so many players involved to different degrees, it would be difficult to apportion blame appropriately. Additionally, MLB had stated in its 2017 warning about sign-stealing that it would hold management responsible for violations.[17]

Some suggested that Manfred was simply trying to minimize damage to MLB’s image. The game got a black eye from the PED scandal, which is brought back into the spotlight every year as Barry Bonds, Roger Clemens, and others are refused entry to the baseball Hall of Fame by sportswriters who insist on punishing their cheating in ways that MLB never did. And Astros players such as Carlos Correa, Jose Altuve, and Justin Verlander will probably have a better chance to enter the Hall of Fame than if they had been suspended for cheating.[18]

The damage done by the Astros is significant.  Former major leaguer Doug Glanville said the Astros’ “selfish act makes everyone question the validity of the future and the truth of the past,” concluding that MLB now faces an “existential crisis.”[19]  Veteran catcher Stephen Vogt said, “The integrity of our game is what we have, and now that’s been broken.”[20]

The impact on the Astros and its players, beyond a new manager and general manager, is as yet unknown.  The Astros worry that opposing pitchers will feel some degree of freedom to throw at Astros hitters.  A former major league pitcher, Mike Bolsinger, sued the Astros. He claimed that a particularly bad outing he had was caused by the Astros’ cheating, and that it effectively ended his MLB career.[21]  The effect of their cheating ways can be seen in non-professional baseball, too, with some little leagues banning the use of “Astros” as a team name.[22] Regardless of league level, gaming the system to advantage one’s own team is not the kind of play that, in the long run, makes for good sport.


Head Injuries & American Football

American football is a rough and dangerous game. “Football is both notorious and cherished for its unapologetic, brute-force violence.”[1] Players suffer bruises, lacerations, torn muscles, dislocated shoulders, torn knee ligaments, broken bones, internal organ damage, and, occasionally, even paralysis. Football rules intentionally create high speed collisions between human beings, making such injuries inevitable and the sport controversial. And new knowledge about brain injuries has caused many people to call football immoral[2] and to advocate its abolition.[3]

A traumatic brain injury (TBI) is “a disruption in the normal function of the brain that can be caused by a bump, blow, or jolt to the head, or penetrating head injury.”[4] A concussion is a form of TBI where the blow causes the brain to move rapidly back and forth, bouncing around in the skull and suffering various types of structural damage.[5] Although concussions can carry serious consequences, they are termed a “mild” form of TBI because they are not typically life threatening. Chronic traumatic encephalopathy (CTE) is “brain degeneration likely caused by repeated head traumas.”[6] Repetitive head impacts (RHIs) can cumulatively lead to CTE and early death, even though no single RHI results in a concussion.[7]

If only one thing is clear about the current science surrounding sports-related concussions (SRCs) and related brain injuries, it is that very little is clear about the current science. The field is surprisingly new.  As dramatized in the 2015 movie “Concussion,” a significant scientific breakthrough occurred in 2002 when Bennet Omalu (played in the film by Will Smith), an African-American neuropathologist in Pittsburgh, performed an autopsy on Hall of Fame center Mike Webster. Dr. Omalu identified abnormal clumps of the protein tau in Webster’s brain, which he believed to be evidence of CTE.[8] Such proteins develop in tangles that slowly strangle neurons and, consequently, inhibit brain function.[9]

Many recent studies point to how dangerous football is to players’ long-term brain health. These studies are broken down by football league level:

National Football League (NFL):

  • Over two regular seasons (2012-2014), NFL players sustained 4,384 injuries, including 301 concussions. The concussion figure is up 61% from 2002-2007, perhaps reflecting improved awareness and reporting.[10]
  • In a study of 14,000 NFL players, researchers found that even head impacts insufficient to cause concussions can mount up over the years, leading to CTE and premature death. Playing 24 NFL games increases a player’s likelihood of premature death by 16%.[11]
  • A 2019 study of the brains of 223 football players with CTE and 43 players without CTE found that for each additional 2.6 years of play, the risk of developing CTE doubled.[12]
  • Another study found that greater RHI exposure correlated with higher levels of plasma t-tau (a biomarker for CTE) in symptomatic former NFL players as compared to the study’s control group.[13]
  • Of 111 NFL players whose brains were donated for one study, 110 were diagnosed with CTE.[14]
  • A 2012 study of 3,439 NFL players with five years or more in the NFL found that their neurodegenerative mortality was three times that of the general U.S. population, and four times higher for two subcategories: Alzheimer’s disease and Lou Gehrig’s disease (amyotrophic lateral sclerosis, or ALS).[15]
  • Other studies found that NFL players who suffered concussions were more likely to later be diagnosed with depression,[16] dementia-related syndromes,[17] Lou Gehrig’s disease (ALS),[18] and erectile dysfunction.[19]

 

College & High School:

  • A study of former high school and college football players found that RHI exposure predicted later-life apathy, depression, executive dysfunction, and cognitive impairment.[20]
  • After a single season, college football players had less midbrain white matter than they had started with.[21]
  • High school athletes are reluctant to report concussions.[22]
  • A 2017 study found CTE in 21% of donated brains of deceased high school football players.[23]
  • Over time more evidence has indicated that even mild concussions suffered by high school football players can cause serious consequences.[24]
  • Football causes more concussions than any other high school sport,[25] and these concussions can cause death.[26]

 

Youth Leagues (Under 14):

  • Youth football players average 240 head impacts per season. Some of these are high impacts comparable to those experienced in high school and college games.[27]
  • Children between the ages of 9 and 14 make up the largest cohort of football players in the U.S. They can suffer concussions from milder collisions than would be required to concuss a collegiate or professional player.[28]
  • According to research by neuroscientists, “There seems to be greater consequences if you’re getting your head hit when the brain is rapidly developing [below age 12].”[29]
  • A study of former NFL players found that those who began playing football before age 12 tended to show greater later-life cognitive impairments as compared to those who began after age 12.[30]

THE OTHER SIDE OF THE STORY

Given the results of the studies above, it is not surprising that there has been a strong outcry against football. However, the science in this area is truly not settled. Part of the reason is that “[m]ost of the time when a player has a concussion, standard medical imaging techniques do not show damage.”[31] No “gold standard” for diagnosing concussions currently exists.[32] Many researchers in the area recently published an article saying:

Contrary to common perception, the clinical syndrome of CTE has not yet been fully defined. Its prevalence is unknown, and the neuropathological diagnostic criteria are no more than preliminary. We have an incomplete understanding of the extent or distribution of pathology required to produce neurological dysfunction or to distinguish diseased from healthy tissue, with the neuropathological changes reported in apparently asymptomatic individuals.[33]

Neuropsychologist Munro Cullum argues: “I worry the pendulum has swung too far. The reality is that we still don’t know who is most likely to suffer a concussion, who will take longer to recover, how anatomic or genetic differences influence concussions, and who may be at risk of prolonged symptoms or developing cognitive problems later in life.”[34]

Furthermore, many of the studies cited by those who would like to abolish tackle football have involved relatively small sample sizes.[35]  Other studies have involved skewed samples, including one where all the NFL players’ brains had been donated because of mental declines that the donors had suffered before their deaths.[36]

Most importantly, other studies seem to indicate that concussions may be more benign than the research above suggests. Again, these studies are broken down by league level:

NFL

  • A 2016 study found no elevated risk of suicide in a population of players with at least five years in the league.[37]
  • Another study of 35 former NFL players over age 50 who had sustained multiple concussions during their careers found no significant association between the length of careers, the number of concussions, and their level of cognitive function later in life.[38]
  • One study found no statistically significant difference between the all-cause mortality among career NFL players and NFL replacement players who played just three games during the strike of 1987.[39]
  • A 2007 study found that retired NFL players experienced levels of depressive symptoms no worse than those of the general population.[40]

 

College & High School

  • Suicide rates among NCAA football players are the highest among all NCAA sports, but they are substantially lower than those of the general population aged 18-22 or of college students in that age range.[41]
  • A study of 3,904 Wisconsin men found no significant harmful association between playing football in high school and cognitive impairment or depression later in life.[42]
  • Reducing tackling in practices has reduced overall concussion numbers among high school players, even though the number of concussions in games has risen slightly. And concussion recurrence has been reduced, most likely by protocols guiding when it is safe to return to play.[43]
  • One expert said “It really seems right now that if your [football] practices are highly controlled and reduced as much as possible and you only play four years of high school, your [CTE] risk is probably pretty low.”[44]

 

Youth Leagues (Under 14)

  • Despite their heightened susceptibility to concussions, youth football players rarely sustain concussions because they are lighter and collide with less force than older players.[45]
  • In one study, use of newly-designed football helmets and safe tackling techniques eliminated concussions for 20 middle school aged players for an entire season.[46]

Studies such as these provide ammunition for those who defend organized football as an institution. However, many such studies were funded or carried out by the NFL, owners of NFL franchises, universities that earn millions of dollars from football, and other interested parties. Given the obvious conflict of interest, the studies have been criticized on that ground.[47] There is also evidence that the NFL sought to influence the findings of some of the research it funded.[48] In addition, evidence indicates (and is consistent with the self-serving bias) that industry funding of research often influences results.[49]

The NFL has taken other concrete steps to respond to the controversy. It paid more than $750 million to settle a civil lawsuit by former players.[50] The NFL has also changed rules to discourage helmet-to-helmet contact,[51] and has instituted protocols for safely returning concussed players to the field.[52]

On the other hand, while football helmets can prevent fractured skulls, they will likely never be able to prevent concussions.[53] Studies indicate that there are helmets that may decrease concussions,[54] but neuroscientist Julie Stamm says: “No helmet will ever be concussion-proof, because the brain still moves inside the skull. And for the same reason, a helmet alone will not prevent CTE.”[55] Furthermore, while the NFL has banned helmet-to-helmet hits, these are neither the only nor the most common cause of concussions.[56] Professor Goldberg argues that “there is little evidence that such incremental changes [e.g., in tackling techniques] have a substantial risk-reducing effect.”[57]

Some people accuse the media (and others) of hysterically overhyping the dangers of tackle football to the brain.[58] Other people believe that media discussions have impeded needed change in minimizing sports violence.[59] At the end of the day, the jury still seems to be out on the question of whether you can go to a football game or watch one on television and still feel good about yourself for supporting a sport that seems to cause irreversible traumatic brain injuries.


Therac-25

The Therac-25 was a state-of-the-art linear accelerator developed by Atomic Energy of Canada Limited (AECL) and the French company CGR to provide radiation treatment to cancer patients. The Therac-25 was the most computerized and sophisticated radiation therapy machine of its time. With the aid of an onboard computer, the device could set multiple treatment-table positions and deliver the type and strength of energy chosen by the operating technician. AECL sold eleven Therac-25 machines, which were used in the United States and Canada beginning in 1982.

Unfortunately, between 1985 and 1987 there were six accidents in which patients received massive overdoses of radiation, some of them fatal (Leveson & Turner 1993). Patients reported being “burned by the machine,” reports that some technicians passed along but that the company dismissed as impossible. Reports to the manufacturer resulted in inadequate repairs to the system and assurances that the machines were safe. Lawsuits were filed, but no thorough investigations took place. The machine was recalled in 1987 for an extensive redesign of safety features, software, and mechanical interlocks. The Food and Drug Administration (FDA) later found that the company had an inadequate reporting structure for following up on reported accidents.

There were two earlier versions of the Therac-25 unit: the Therac-6 and the Therac-20, which were built from the CGR company’s other radiation units, the Neptune and the Sagittaire. The Therac-6 and Therac-20 units included a microcomputer that made patient data entry easier, but the units were operational without an onboard computer. These units had built-in safety interlocks, positioning guides, and mechanical features that prevented radiation exposure if there was a positioning problem with the patient or with the components of the machine. Some of the Therac-20’s software base was duplicated and carried over to the Therac-25. The Therac-6 and Therac-20 were clinically tested machines with an excellent safety record. They relied primarily on hardware for safety controls, whereas the Therac-25 relied primarily on software.
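The difference matters because a software check, unlike a physical interlock, can be defeated by timing. The sketch below is a hypothetical Python illustration, not AECL’s code (which was written in PDP-11 assembly), and every name in it is invented. It shows the kind of check-then-act race that analyses of the Therac-25 identified: a safety condition is verified, the machine state changes concurrently, and the action proceeds on stale assumptions.

```python
import threading
import time

# Hypothetical sketch of a software-only interlock.  A check followed
# by an action is not atomic: a concurrent task can change machine
# state inside the gap.  A hardware interlock is a physical circuit
# that blocks the beam regardless of such timing.

state = {"mode": "electron", "attenuator_in_place": False}

def operator_edit():
    """Simulates the operator re-editing the prescription mid-setup."""
    time.sleep(0.05)
    state["mode"] = "xray"  # high-power mode now requires the attenuator

def fire_beam():
    # Software interlock: the check passes while mode is still "electron"...
    if state["mode"] == "xray" and not state["attenuator_in_place"]:
        raise RuntimeError("software interlock tripped")
    time.sleep(0.1)  # ...but setup work opens a window before firing,
    print("beam fired in", state["mode"], "mode")  # now possibly unsafe

threading.Thread(target=operator_edit).start()
fire_beam()  # prints "beam fired in xray mode" with no attenuator in place
```

Notably, Leveson and Turner report that the Therac-20 contained comparable software faults, but its hardware interlocks prevented any overdose; on the Therac-25 the software check was the only line of defense.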

On February 6, 1987, the FDA ordered all of the machines shut down until permanent repairs could be made. Although AECL was quick to state that a “fix” was in place and the machines were now safer, that was not the case. After this incident, Leveson and Turner (1993) compiled public information from AECL, the FDA, and various regulatory agencies and concluded that there had been inadequate record keeping when the software was designed. The software was inadequately tested, and “patches” were carried over from earlier versions of the machine. AECL’s premature assumption that the problems had been detected and corrected was unproven. Furthermore, AECL had great difficulty reproducing the conditions under which the issues were experienced in the clinics. The FDA restructured its reporting requirements for radiation equipment after these incidents.

As computers become more and more ubiquitous and control increasingly significant and complex systems, people are exposed to increasing harms and risks. The issue of accountability arises when a community expects its agents to stand up for the quality of their work. Nissenbaum (1994) argues that responsibility in our computerized society is systematically undermined, and this is a disservice to the community. This concern has grown with the number of critical life services controlled by computer systems in the governmental, airline, and medical arenas.

According to Nissenbaum, there are four barriers to accountability: the problem of many hands, “bugs” in the system, the computer as a scapegoat, and ownership without liability. The problem of many hands relates to the fact that many groups of people (programmers, engineers, etc.) at various levels of a company are typically involved in the creation of a computer program and have input into the final product. When something goes wrong, there is no one individual who can be clearly held responsible. It is easy for each person involved to rationalize that he or she is not responsible for the final outcome, because of the small role played. This occurred with the Therac-25, which had two prominent software errors, a failed microswitch, and fewer safety features than earlier versions of the device. In the Therac-25 accidents, “bugs” that surfaced only under certain conditions served as a cover for careless programming, lack of testing, and a lack of safety features built into the system. The fact that computers “always have problems with their programming” cannot be used as an excuse for overconfidence in a product, unclear or ambiguous error messages, or improper testing of individual components of the system. Another potential obstacle is ownership of proprietary software and an unwillingness to share “trade secrets” with investigators whose job it is to protect the public (Nissenbaum 1994).

The Therac-25 incident involved what has been called one of the worst computer bugs in history (Lynch 2017), though it was largely a matter of overall design issues rather than a specific coding error. Therac-25 is a glaring example of what can go wrong in a society that is heavily dependent on technology.


Ethical Use of Home DNA Testing

Home DNA testing is a booming business. Millions of Americans have sent their DNA to commercial testing companies such as 23andMe or Ancestry to learn more about their heritage or potential for disease. According to Grand View Research, “the global DNA testing market is set to reach over $10 billion by 2022.” (Brown 2018). Successful marketing campaigns have led consumers to believe that home DNA testing is fun, informative, and personal to them. However, what consumers may not realize is that once their genetic information is shared, they have limited control as to who has access to it.

Regardless of the reason consumers decide to purchase a home DNA test kit, the information they provide to the testing company is far greater than the information they receive. The benefits that these testing companies can gain from gathering, using, and selling customers’ private information place them in a significant conflict-of-interest situation. This information includes the IP address, name, address, email, and family history collected from the application, as well as information provided on follow-up surveys. Furthermore, according to its website, if customers opt to share their data for research, 23andMe may keep their physical spit sample and the genetic data it contains for up to a decade. Additional information that consumers upload to the companies’ genealogy websites, such as pictures, obituaries, family relationships, and even third-party information, is likely added to the pool of data linked to customers’ DNA.

Recently, some consumers have claimed that their privacy and consumer rights were violated when they used home DNA kits. In June 2019, Lori Collett sued Ancestry for allegedly misleading customers about what it was doing with their DNA. This class action lawsuit claims that personal information was released to outside parties without customer consent. It further contends that the consent forms waiving consumer rights are often vague, general in scope, and ever-changing, and that the fine print may not accurately spell out what the company, its third-party associates, and its collaborators can or will do with customer information. (Merken 2019)

Further concerns arise because testing companies often align themselves with pharmaceutical companies, public and private research organizations, and Google. For example, “GlaxoSmithKline purchased a $300 million stake in the company, allowing the pharmaceutical giant to use 23andMe’s trove of genetic data to develop new drugs — and raising new privacy concerns for consumers.” (Ducharme 2018) Similarly, Ancestry is sharing its data with Google through Google’s research subsidiary Calico. Ancestry admits that “once they share people’s genetic information with partner companies, they can’t be responsible for security protocols of those partners.” (Leavenworth 2018)

Additionally, both 23andMe and Ancestry use Google Analytics to provide third parties with consumer information for targeted marketing. In its privacy policy 23andMe states that “when you use our Services, including our website or mobile app(s), our third-party service providers may collect Web-Behavior Information about your visit, such as the links you clicked on, the duration of your visit, and the URLs you visited.” This use of shared information allows testing services and third parties to build a comprehensive personal profile on you, which may include your genetic information.

Although privacy may be a concern for consumers, law enforcement, with the cooperation of DNA testing companies through either partnerships or warrants, has brought justice to the victims of numerous unsolved cases. Over the past few years, the use of consumer DNA databases has closed many high-profile cold cases, such as that of the Golden State Killer, and overturned the wrongful conviction of Alfred Swinton. In some cases, such as the Golden State Killer, the DNA used to identify suspects is cross-referenced against the DNA of relatives as far removed as third cousins. However, this has brought additional concerns. As Vera Eidelman, a DNA expert for the American Civil Liberties Union, states, “There’s always a danger that things will be used beyond their initial targets, beyond their initial purpose.” (St. John 2019)

The success of consumer DNA databases has led some law enforcement to meet with Bennett Greenspan, the CEO of FamilyTreeDNA, seeking his help to convince consumers to share their genetic data with police. This partnership has resulted in the creation of the non-profit Institute for DNA Justice that has the following stated mission:

“The Institute for DNA Justice was formed to educate the public about the value of investigative genetic genealogy (IGG) as a revolutionary new tool to identify, arrest, and convict violent criminals, deter violent crime, exonerate the innocent, encourage the 26 million Americans who have taken a DNA test to become genetic witnesses by participating in publicly available family-matching databases working with law enforcement using IGG, and to promote the adoption of industry leading best practices guidelines surrounding its use by law enforcement agencies around the country.”

Whether testing is done for public or private purposes, laws in the United States have not yet established a standard for the home DNA testing industry.


Myanmar Amber

Amber is fossilized conifer tree resin, formed over millions of years of constant pressure and heat. This yellow to reddish-brown translucent material has been used in a number of ways, including in jewelry, in Egyptian burials, and in the healing arts. Amber also plays an invaluable role in research. In some cases, amber contains inclusions, such as insects, whole or partial animals, and plants that were trapped and preserved. The ability to hold a piece of history untouched by time has led to a number of scientific discoveries and advances, such as feathers on a non-avian dinosaur dated to 99 million years ago and the biosynthesis of gene clusters for novel antibiotics.

One of the oldest amber deposits in the world, dating back 100 million years, is located in the northern region of Myanmar. Myanmar amber is plentiful and high quality, and it often contains inclusions within the resin. The amber specimens mined in Myanmar are at the center of many legitimate and black-market sales to university researchers and private collectors alike. Over the last ten years, more than one billion dollars in legal revenue has been generated from the mining and sale of amber.

Myanmar is a Southeast Asian country with about 130 diverse ethnic groups recognized by the government. There is no official state religion, but the Myanmar government favors the majority Theravada Buddhist population. This favoritism has created ethnic and religious conflicts resulting in government-enforced discrimination. For example, the government has made it difficult for Christian and Islamic groups to gain permission to repair or build new places of worship. The Kachin Independence Army, which includes ethnic minorities who live in the northern Kachin and surrounding regions of Myanmar, has been in armed conflict with the Myanmar government, seeking the restoration of minority ethnic groups’ rights.

For many years this mining area was protected by the Kachin Independence Army. In 2017, however, the Myanmar government dropped leaflets from helicopters informing the population of northern Kachin that civilians and Kachin militants who remained in the region would be considered hostile opposition to the government military forces. The government then forced more than 5,000 inhabitants from their homes and villages, as well as from the amber mines. This hostile takeover of the profitable Kachin amber mines ensures that amber purchases by researchers and private collectors will help fund the government side of the Myanmar ethnic civil war.

While some researchers and universities believe they should refrain from making such amber purchases, their abstention enables many private collectors to remove collections from public access or to charge researchers exorbitant fees for access.

Furthermore, many of the miners in the Kachin region, on both sides of the conflict, are not fully aware of the value of the amber that they are selling and are therefore being exploited by the wholesalers who purchase from them. Myanmar classifies amber as a gemstone, not a fossil, so it can be legally removed from the country, unlike fossils that have restrictions on removal.


Sports Blogs: The Wild West of Sports Journalism?

This case study examines controversial reporting by the sports blog Deadspin over a personal misconduct case involving NFL star Brett Favre. It highlights current debates surrounding the ethics of sports blogging as illustrated by the issue of paying sources for information, i.e. “checkbook journalism.”

The full case study, discussion questions, and additional resources can be accessed through the link below, which will open a new tab at The Texas Program in Sports & Media website.

Full TPSM Case: Sports Blogs: The Wild West of Sports Journalism?


Sacking Social Media in College Sports

This case study explores the different ways in which coaches and universities have limited or banned the use of social media by their student athletes. It also raises questions about the processes that coaches and student athletes follow when managing their public personas.

The full case study, discussion questions, and additional resources can be accessed through the link below, which will open a new tab at The Texas Program in Sports & Media website.

Full TPSM Case: Sacking Social Media in College Sports


Covering Yourself? Journalists and the Bowl Championship Series

This case study examines the conflict of interest that arose from the Bowl Championship Series’ use of news media polls to create their team matchups. News outlets claimed that they could not fairly report sports news if their polls were used to create the news.

The full case study, discussion questions, and additional resources can be accessed through the link below, which will open a new tab at The Texas Program in Sports & Media website.

Full TPSM Case: Covering Yourself? Journalists and the Bowl Championship Series


Defending Freedom of Tweets?

This case study discusses the unique challenges to freedom of speech public figures face when negotiating their public image and expressing their own values. It examines the controversy that broke out when Rashard Mendenhall, a running back for the Pittsburgh Steelers, tweeted comments criticizing the celebration of the assassination of Osama Bin Laden.

The full case study, discussion questions, and additional resources can be accessed through the link below, which will open a new tab at The Texas Program in Sports & Media website.

Full TPSM Case: Defending the Freedom of Tweets?

Abramoff: Lobbying Congress

On March 29, 2006, former lobbyist Jack Abramoff was sentenced to six years in federal prison after pleading guilty to mail fraud, tax evasion, and conspiracy to bribe public officials. Key to Abramoff’s conviction were his lobbying efforts that began in the 1990s on behalf of Native American tribes seeking to establish gambling on reservations.

In 1996, Abramoff began working for the Mississippi Band of Choctaw Indians. With the help of Republican tax reform advocate Grover Norquist and his political advocacy group Americans for Tax Reform, Abramoff defeated a congressional bill that would have taxed Native American casinos. Texas Representative and House Majority Whip Tom DeLay also played a major role in the bill’s defeat. DeLay pushed the agenda of Abramoff’s lobbying clients in exchange for favors from Abramoff.

In 1999, Abramoff similarly lobbied to defeat a bill in the Alabama State Legislature that would have allowed casino-style games on dog racing tracks. This bill would have created competition for his clients’ casino businesses. Republican political activist Ralph Reed and his political consulting firm Century Strategies aided the effort by leading a grassroots campaign that rallied Alabama-based Christian organizations to oppose the bill.

As Abramoff’s successes grew, his clients, political contacts, and influence expanded. He hired aides and former staff of members of Congress. In 2001, Abramoff began working with Congressman DeLay’s former communications director, Michael Scanlon, who had formed his own public affairs consulting firm, Capitol Campaign Strategies. The Coushatta Tribe of Louisiana hired Abramoff and Capitol Campaign Strategies to help them renegotiate their gambling agreement with the State of Louisiana. Abramoff, however, did not disclose to the tribe that, in addition to his own consulting fees, he also received a portion of the fees paid to Scanlon’s firm.

In an effort to protect his Coushatta clients in Louisiana from competition by a new casino near Houston, Texas, Abramoff successfully lobbied for a state gambling ban in Texas between 2001 and 2002. Incidental to this ban was the closure of a casino in El Paso, Texas, owned by the Tigua Tribal Nation. The Tigua were another one of Abramoff’s casino clients.

Later in 2002, Abramoff pitched the Tigua on working to overturn the very ban for which he had successfully lobbied. With the Tigua’s money, Abramoff took Ohio Representative Bob Ney and his staff on a golfing trip to Scotland. Abramoff hoped to convince Ney and his colleagues to slip a provision into an election-reform bill that would grant the Tigua gaming rights. Abramoff’s efforts did not pay off, and the deal he sought fell through, but he did not inform the Tigua of this outcome. Rather, Abramoff continued to give the Tigua hope for the provision’s success, while also continuing to charge them for his and Scanlon’s services. And, in their email exchanges, Abramoff and Scanlon often mocked their tribal clients as “morons” and “monkeys.”

Throughout the course of their work with Native American tribes, Abramoff and Scanlon charged their clients upwards of $66 million. The Coushatta paid over $30 million to protect their casino and to stop competing casinos in Texas. The Tigua paid $4.2 million to try to continue operating their casino in Texas. Abramoff has stated that he donated much of the money he made to charities, schools, and causes he believed in. But he also spent millions of dollars on activities or contributions in connection with politicians and campaigns he sought to influence. Furthermore, he evaded taxes by funneling money through nonprofit organizations with which he partnered.

After his conviction in 2006, Abramoff cooperated in the investigation of his relationships with lawmakers, congressional aides, business associates, and government officials. Representatives DeLay and Ney both stepped down from their positions in Congress. DeLay, who had risen to the rank of House Majority Leader, was charged with money laundering and with conspiracy for funneling corporate contributions to state candidates. Ney pleaded guilty to conspiracy to commit fraud and to making false statements. In exchange for gifts, lavish trips, and political donations from Abramoff, DeLay and Ney had used their positions in Congress to grant favors to Abramoff’s clients and lobbying team. Abramoff served three and a half years of a six-year prison term. He was released on December 3, 2010.

Since his release, Abramoff has spoken out against corruption in politics. He has stated that he believed himself to be a “moral lobbyist” and has apologized for his actions. In a 2011 interview, he said, “What’s legal in this system is the problem,” and in his memoir, he wrote, “Unfortunately, I was a miniature version of that system.” But not everyone perceived his redemption as a genuine effort. Tigua tribal leaders said his apologies were too little, too late. Rick Hill, former chairman of the Oneida Nation of Wisconsin, stated, “You look at Jack—though he took money from my elders and our kids, and now he comes here, and he gets to prop himself up, and it’s an acceptable part of [Washington] D.C. culture. He wouldn’t stand a minute on the reservation.”

Others point to the American political system, and see Abramoff as a symptom of broader corruption. Investigative journalist Susan Schmidt stated, “Abramoff couldn’t have flourished if this system, itself, was not corrupt, where the need for money—the members of Congress and their need for money—is so voracious and so huge that they don’t have their guard up.” California Representative Dana Rohrabacher said, “What Jack had been doing was what had been done before. People should pay more attention to the fact that we have got some enormous special interests in this country who are having incredible influences on policy.”

In his memoir, Abramoff reflected on personal and professional reform: “Regardless of my rationalizations, I was the one who didn’t disclose to my clients that there was a conflict of interest… I wasn’t the devil that the media were so quick to create, but neither was I the saint I always hoped to become. …I decided that, in order to move myself close to the angels, I would take what happened in my life, try to learn from it, and use it to educate others.”


Freedom of Speech on Campus

In the fall of 2015, student groups on the campuses of the University of Missouri and Yale University led protests in the wake of a series of racially motivated offenses that many students saw as part of a history of unsafe or hostile campus climates for students of color, particularly black students. Offenses included verbal, emotional, and physical abuse.

At Yale University, administrators sent an email to students that offered advice on racially insensitive costumes to avoid for Halloween, including costumes featuring blackface or mock Native American headdresses. Controversy emerged after Erika Christakis, a white lecturer of early childhood education and associate master at one of the university’s residential colleges, sent an email to the students she presided over in which she objected to the call for sensitivity. Christakis questioned what she described as an “institutional… exercise of implied control over college students,” asking, “Is there no room anymore for a child or young person to be a little bit obnoxious… a little bit inappropriate or provocative or, yes, offensive?” In response, many students signed an open letter to Christakis. In this letter they stated, “We are not asking to be coddled… [We] simply ask that our existences not be invalidated on campus. This is us asking for basic respect of our cultures and our livelihoods.” During a protest, a student confronted Christakis’s husband, Nicholas Christakis, a professor at Yale and master of one of the residential colleges. Disagreeing with his views, students told him to step down, saying that being a master was “not about creating an intellectual space… [but] creating a home here.”

At Missouri, university administrators were criticized for their slow and ineffective responses to ongoing racial tensions on the campus. After Payton Head, a black student and president of the Missouri Students Association, was taunted with racial slurs, it took university chancellor R. Bowen Loftin nearly a week to respond. Following this and other incidents, students organized rallies and demonstrations. Tensions worsened after someone used feces to smear a swastika on the wall of a communal bathroom in a residence hall. This act of vandalism, and the university’s response, became the final straw for graduate student Jonathan Butler. Butler had led or been involved in many demonstrations up to this point. He decided to go on an indefinite hunger strike until university system president Tim Wolfe was removed from office. In support of Butler, the football team later announced they would neither practice nor play until Wolfe resigned. Many students joined in support of the protests. Butler ended his weeklong hunger strike after Wolfe resigned.

In the midst of the student protests at Missouri, further controversy emerged when protesters tried to keep news media out of the campus public grounds where protesters had been camping out for days. Student photographer Tim Tai, on assignment for ESPN, was surrounded and confronted by protesters, including university staff members, who did not want any media to enter what they said was a “safe space.” Tai was attempting to document the protests in public spaces, stating, “This is the First Amendment that protects your right to stand here and mine. …The law protects both of us.” A video capturing the confrontation went viral and sparked wider debate over the issue of freedom of speech in the protests at Missouri, Yale, and other college campuses. Journalists, commentators, and academics raised discussion over the roles of free speech, deliberation, and tolerance in the dialogue between student activists and university administrators.

Freelance journalist Terrell Jermaine Starr, in defense of the protesters, wrote: “This wasn’t a problem with Tai’s character or his journalistic integrity; he was doing his job… but reporters should also feel a responsibility to try to understand and respect [the protesters’] pain…” Starr continued: “In many communities that historically have been marginalized and unfairly portrayed by the media, there’s good reason people do not trust journalists: They often criminalize black people’s pain and resistance to racial oppression.” Suzanne Nossel, executive director of PEN American Center, defended free speech as a crucial driver of social justice reform: “[Without] free speech, the ‘safe spaces’ students crave will soon suffocate them. Social movements must evolve or they die. Ideological and even tactical evolution demands willingness to hear out heterodoxy. Likewise, free speech defenders will not win by dismissing students as insolent whiners. …The Black Lives Matter movement and the campus protests are efforts to jump-start a drive for racial equality that has stalled in key areas. Free speech is essential to that quest.”

Writing about the Yale incident, journalist Conor Friedersdorf suggested that the student activists’ intolerance of other views could lead to censorship. He wrote, “[Students] were perfectly free to talk about their pain. Some felt entitled to something more, and that is what prolonged the debate.” Op-ed columnist Nicholas Kristof addressed the broader role of freedom of speech on college campuses: “The protesters at Mizzou and Yale and elsewhere make a legitimate point: Universities should work harder to make all students feel they are safe and belong. Members of minorities—whether black or transgender or (on many campuses) evangelical conservatives—should be able to feel a part of campus, not feel mocked in their own community.” Political theorist Danielle Allen, on the other hand, described the debate over freedom of speech as a distraction from the key issues of the protests. Allen wrote, “The issues of free speech matter, too, but they are leading people in the wrong direction, away from the deepest issue. …The real issue is how to think about social equality.”


Pao & Gender Bias

On May 10, 2012, executive Ellen Pao filed a lawsuit against her employer, Silicon Valley-based tech venture capital firm Kleiner Perkins Caufield & Byers (Kleiner Perkins), on grounds of gender discrimination. Pao began working at Kleiner Perkins in 2005. She became a junior investing partner, but after several years at the firm was passed over for a senior partner position and was eventually terminated. Pao claimed that men with similar profiles and achievements were promoted instead.

In late 2011, Pao and a coworker were asked by a senior partner to come up with ways of improving the firm’s treatment of women, but the senior partner, according to Pao, was “noncommittal.” On January 4, 2012, Pao took this issue a step further and wrote a formal memorandum to several of her superiors and the firm’s outside counsel. In the memorandum, she described harassment she had received while at the firm, claiming she had been excluded from meetings by male partners, and asserting an absence of training and policies to prevent discrimination at the firm. Pao’s memo indicated that she wished to work with the firm on improving conditions for women. She was fired on October 1, 2012. The lawsuit went to trial in February 2015.

In testimony during the trial, Pao explained that she sued because there was no process for handling HR issues at the firm and because she believed she had exhausted all options for addressing these issues internally: “It’s been a long journey, and I’ve tried many times to bring Kleiner Perkins to the right path. I think there should be equal opportunities for women and men to be venture capitalists. I wanted to be a VC but I wasn’t able to do so in that environment. And I think it’s important…to make those opportunities available in the future. And I wanted to make sure my story was told.”

Pao’s lawsuit made four claims against Kleiner Perkins: 1) they discriminated against Pao on the basis of gender by failing to promote her and/or terminating her employment; 2) they retaliated by failing to promote her because of conversations she had in late 2011 and/or the memo from January 4, 2012; 3) they failed to take all reasonable steps to prevent gender discrimination against her; and 4) they retaliated against her by terminating her employment because of conversations she had in late 2011 and/or the memo from January 4, 2012.

Pao’s legal team argued that men were promoted ahead of women, women who experienced sexual harassment received little support, and women’s ideas were often more quickly dismissed than men’s. Pao’s performance reviews revealed contradictory criticisms such as “too bold” and “too quiet.” Pao also accused company partner Ajit Nazre of pressuring her into an affair and subsequently retaliating against her after she ended the relationship. She said she received an inappropriate gift containing erotic imagery and was present while men at the firm were making inappropriate conversation. Further, the legal team described how Pao and other women had been left out of certain meetings and gatherings.

The defense’s case focused on Pao’s performance and character, noting that Pao had received several negative performance reviews, acted entitled or resentful toward other employees, and was not a team player. Evidence included evaluations, self-evaluations, meeting summaries, and messages both personal and professional. Kleiner Perkins claimed that Pao received greater compensation than her male counterparts, including bonuses and training. The firm also argued that Pao’s job description was mostly managerial and that limiting her involvement in investing was therefore not a form of discrimination.

The verdict was announced on March 27, 2015. The jury ruled 10 to 2 in favor of Kleiner Perkins on the first three claims, and 8 to 4 in favor of Kleiner Perkins on the fourth claim. Speaking after the trial, juror Steve Sammut said that the verdict came down to performance reviews, in which Pao’s negative criticism remained consistent each year. But he added that he wished there were some way for Kleiner Perkins to be punished for its treatment of employees: “It isn’t good. It’s like the wild, wild West.” Juror Marshalette Ramsey voted in favor of Pao, believing Pao had been discriminated against. Ramsey stated that the male junior partners who were promoted “had those same character flaws that Ellen was cited with.”

Deborah Rhode, law professor at Stanford University, said that even with this loss, Pao’s lawsuit succeeded in prompting debate about women in venture capital and tech. She stated, “This case sends a powerful signal to Silicon Valley in general and the venture capital industry in particular… Defendants who win in court sometimes lose in the world outside it.” After the verdict was announced, Pao stated that she hoped the case at least helped level the playing field for women and minorities in venture capital. She later wrote, “I have a request for all companies: Please don’t try to silence employees who raise discrimination and harassment concerns. …I hope future cases prove me wrong and show that our community and our jurists have now developed a better understanding of how discrimination works in real life, in the tech world, in the press and in the courts.” Pao’s case has since been credited for inspiring others facing workplace discrimination to act; similar lawsuits have been filed against companies such as Facebook, Twitter, and Microsoft.


Digital Downloads

Copyright laws exist to protect authors’ and publishers’ rights, but also to balance that protection with access and innovation. In 1999, two teenagers created the file-sharing program Napster. Within its first year, the service surpassed 20 million users. Many Napster users shared music files with each other, but without any compensation to the artists and producers who made the music, sparking a series of legal battles over copyright and distribution. In 2001, an appellate panel upheld a previous ruling that Napster violated copyright laws, stating that “Repeated and exploitative unauthorized copies of copyrighted works were made to save the expense of purchasing authorized copies.”

Artists were divided on the benefits and harms of Napster. Over 70 artists formed “Artists Against Piracy” in coalition with major record companies to combat the piracy occurring on Napster and other peer-to-peer internet services. In contrast, some established artists such as Neil Young saw piracy as the “new radio” and applauded the potential to reach larger audiences and drive additional sales through increased popularity. Seeing both the benefits and detriments of piracy, singer Norah Jones stated, “If people hear it I’m happy…it’s great that young people who don’t have a lot of money can listen to music and be exposed to new things… But I also understand it’s not ideal for the record industry, and a lot of young artists who won’t make any [money] off their album sales, but at least they can tour.”

Although court rulings forced Napster to terminate its file-sharing business, Napster’s innovations stimulated payment-based services, such as iTunes, Pandora, and many others. But the availability of such services has not put an end to the debate surrounding artist compensation with digital music, as seen with Taylor Swift’s open letter to Apple in 2015. Swift’s albums, along with the music of many other artists, were going to be streamed at no cost to new Apple Music customers over the first three months of service without any compensation to the artists. In her open letter, Swift stated, “I’m not sure you know that Apple Music will not be paying writers, producers, or artists for those three months. I find it to be shocking, disappointing, and completely unlike this historically progressive and generous company.” Within a few hours, Apple responded by changing the terms of its agreement in order to compensate artists at a reduced rate.


Reporting on Robin Williams

When actor Robin Williams took his life in August of 2014, major news organizations covered the story in great detail. Most major news outlets reported on Marin County Sheriff’s Lt. Keith Boyd’s press conference, which revealed graphic details from the coroner’s report about the methods Williams used. While there was great interest on the part of the public in finding out what happened, many argued that reporting too much detail about the suicide violated the family’s privacy.

Indeed, many of Robin Williams’s fans posted on Facebook, Twitter, and other social networks to express their objections to the media treatment of the suicide, urging reporters to respect the family’s right to grieve in peace. Several members of the mental health community also took issue with the detailed reports. Paul Farmer, chief executive of the mental health charity Mind, wrote to CNN that “When a media report describes clear details of unusual methods of suicide and essentially gives a ‘how to’ guide—the danger is it can make suicide seem like a more accessible action to take.” Some journalists expressed similar viewpoints, criticizing the reports as a clear violation of media ethics. According to the Press Complaints Commission, “When reporting suicide, care should be taken to avoid excessive detail about the method used.”

Yet other journalists argued that the primary responsibility of the media was to report the story truthfully and factually. In an op-ed in the LA Times, Andrew Klavan wrote, “The manner of Williams’ death is public information. Journalists should report it as long as it remains of interest to the public. It is not a journalist’s job to protect us from the ugly facts.” Klavan argued that the journalist’s duty is not to do good or be wise, but to report the whole story, which may in fact be a part of a larger story unfolding elsewhere. Sheriff Boyd similarly defended his own actions by stating that he had a duty to report the details as part of the public record.

In an interview with Today, Williams’s daughter Zelda discussed how her father never sought to hide his problems, mentioning his openness about struggling with alcoholism. She stated, “I think that one of the things that is changing, that is wonderful, is that people are finally starting to approach talking about illnesses that people can’t immediately see…He didn’t like people feeling like the things that were hard for them they should go through alone.”


Pardoning Nixon

On August 9, 1974, Richard Nixon resigned the presidency in the wake of the Watergate scandal and the release of the ‘smoking gun’ tape, which implicated him in criminal activity while president. Following his resignation, many Americans were angry with Nixon and also suspicious of Gerald Ford as he stepped into the presidential role. Nixon soon became extremely ill. On August 15, 1974, he was admitted to Bethesda Naval Hospital and diagnosed with viral pneumonia; one account suggests he was admitted with a recurrence of phlebitis. Nixon had a history of phlebitis, which can be fatal even if treated.

On September 8, 1974, President Ford issued a full and absolute pardon of Nixon for all offenses against the United States, making Nixon immune from any arrest, investigation, or imprisonment for his involvement in Watergate. The pardon infuriated many Americans. Suspicions arose of a possible deal in which the pardon had been promised in exchange for Ford’s prior nomination as vice president. All parties denied any such deal, and no evidence in support of these allegations ever surfaced.

With the Watergate scandal consuming the nation, Ford signaled that he wanted to refocus the public and rebuild trust in the executive branch. He sought to move forward by concentrating on the nation’s problems, such as ending the Vietnam War, rather than spending his entire administration dissecting the activities of the previous president. However, as a result of the pardon, Nixon would never be held accountable for activity widely thought to be criminal.

President Ford also believed, based on reports and advisors, that Nixon’s health was seriously compromised and that his death was likely imminent. In his speech announcing the pardon, Ford referred both to Nixon’s health crisis and to his own constitutional duty to ensure domestic tranquility. At the time, Ford could not have known that Nixon would eventually recover and live for twenty more years. Ford believed he acted in the nation’s best interests. The public vehemently disagreed.


Covering Female Athletes

This case study examines the controversy over the Sports Illustrated cover photo of U.S. Olympic skier Lindsey Vonn, which some commentators argued focused more on Vonn’s physical appearance than on her athletic abilities. It highlights different perspectives on the coverage of female athletes in popular media and the representation of female athletes in sports journalism.

The full case study, discussion questions, and additional resources can be accessed through the link below, which will open a new tab at The Texas Program in Sports & Media website.

Full TPSM Case: Covering Female Athletes


The Miss Saigon Controversy

In 1990, theatre producer Cameron Mackintosh brought the musical Miss Saigon to Broadway following a highly successful run in London. Based on the opera Madame Butterfly, Miss Saigon takes place during the Vietnam War and focuses on a romance between an American soldier and a Vietnamese orphan named Kim. In the musical, Kim is forced to work at ‘Dreamland,’ a seedy bar owned by the half-French, half-Vietnamese character ‘the Engineer.’ The production was highly anticipated, generating millions of dollars in ticket sales before it had even opened.

Controversy erupted, however, when producers revealed that Jonathan Pryce, a white British actor, would reprise his role as the Eurasian ‘Engineer.’ Asian American actor B.D. Wong argued that by casting a white actor in a role written for an Asian actor, the production supported the practice of “yellow-face.” Similar to “blackface” minstrel shows of the 19th and 20th centuries, “yellow-face” productions cast non-Asians in roles written for Asians, often relying on physical and cultural stereotypes to make broad comments about identity. Wong asked his union, Actors’ Equity Association, to “force Cameron Mackintosh and future producers to cast their productions with racial authenticity.”

Actors’ Equity Association initially agreed and refused to let Pryce perform: “Equity believes the casting of Mr. Pryce as a Eurasian to be especially insensitive and an affront to the Asian community.” Moreover, many argued that the casting of Pryce further limited already scarce professional opportunities for Asian American actors.

Frank Rich of The New York Times disagreed, sharply criticizing the union for prioritizing politics over talent: “A producer’s job is to present the best show he can, and Mr. Pryce’s performance is both the artistic crux of this musical and the best antidote to its more bloated excesses. It’s hard to imagine another actor, white or Asian, topping the originator of this quirky role. Why open on Broadway with second best, regardless of race or creed?” The casting director, Vincent G. Liff, also defended his actions on the same grounds: “I can say with the greatest assurance that if there were an Asian actor of 45-50 years, with classical stage background and an international stature and reputation, we would have certainly sniffed him out by now.”

Actors’ Equity ultimately reversed its decision, and Pryce performed the role of ‘the Engineer’ on Broadway to great acclaim. Nonetheless, the production remained controversial during its successful Broadway run. For many, it remains one of the most famous examples of contemporary “yellow-face” performance.


Dr. V’s Magical Putter

On January 15, 2014, sports blog Grantland published “Dr. V’s Magical Putter,” a longform piece by journalist Caleb Hannan. What began as an article about a unique new golf putter gradually became a story about the putter’s inventor, Dr. Essay Anne Vanderbilt. In his investigation into Vanderbilt’s invention, Hannan discovered that Vanderbilt had lied about her academic background and work experience and had taken cash from an investor without ever returning it.

Hannan also found out that Vanderbilt was a transgender woman and outed her as trans to an investor. Although Hannan had made an agreement with Vanderbilt to focus the story “on the science, not the scientist,” the putter became a backdrop to a story about what Hannan saw as a deceitful personal life and fraudulent professional career. Vanderbilt, who wished to maintain her privacy from the start, did not want the story published. A few months before Grantland published the article, Vanderbilt committed suicide.

The article sparked immediate controversy over the merits and ethics of its reporting, as well as its role in Vanderbilt’s suicide. Detractors criticized Hannan and Grantland for a lack of awareness and compassion regarding trans issues. Defenders of the article saw value in the story and believed it would be dangerous to not report all of the facts. In an editorial response published in Grantland, sports journalist Christina Kahrl wrote, “It was not Grantland’s job to out [Vanderbilt],” noting, “she was a member of a community…for whom suicide attempts outpace the national average almost 26 times over.” Josh Levin, executive editor at Slate, wrote, “The fact that Dr. V once lived under a different name is not irrelevant to Hannan’s story… But presenting Dr. V’s gender identity as one in a series of lies and elisions was a careless editorial decision. …Dr. V is a con artist and a trans woman. Hannan, though, conflates those two facts…” Journalist James Kirchick defended Hannan, writing, “What I saw was a careful and ingenious reporter ferret out a fraud with care. …[There’s] no evidence that Hannan was…seeking to ‘out’ and humiliate a transgender woman… On the contrary, in his article, Hannan arrives at a conclusion sympathetic to Vanderbilt.” Trans advocate and medical doctor Dana Beyer said that the article reflected the tragedy of being in the closet as trans, but did not revel in it or have malicious intentions.

Several days after the publication of the article, Grantland editor-in-chief Bill Simmons published a response to the article’s criticism. He wrote, “I didn’t know nearly enough about the transgender community—and neither does my staff… We just didn’t see the other side. We weren’t sophisticated enough. In the future, we will be sophisticated enough… we made mistakes, and we’re going to learn from them.” Reflecting on the article over a year later, Hannan spoke about the complexities of seeking truth in journalism, “At every point in the reporting I could justify myself going forward… ‘I’m doing my job.’ But part of the job was to assess whether it was worth it.”


Snyder v. Phelps

Matthew Snyder was a Marine Lance Corporal from Maryland who died in Iraq on March 2, 2006, at the age of 20. The Westboro Baptist Church, led by Fred Phelps, announced in advance that it would picket his funeral. Westboro contends that American military deaths are a direct result of God’s vengeance for the tolerance of homosexuality in the United States. Church members protest military funerals because they believe enlisted soldiers “voluntarily [join] a fag-infested army to fight for a fag-run country.” They denounced Snyder’s parents, Albert Snyder and Julia Francis, for raising their son Catholic. They claimed Snyder and Francis taught their son to “be an idolater” and support the world’s “largest pedophile machine.” At Snyder’s funeral, Westboro members held up signs saying “Fag Troops,” “God hates the U.S.A.,” and many others of a similar nature.

Albert Snyder sued for defamation stemming from false statements made about his son’s upbringing. He also sued for “publicity given to private life” because his son’s funeral was a private, not public, event. These counts were dismissed because the statements fell under religious opinion and because an obituary had been printed with details of the family’s religion. Other counts, including intrusion upon seclusion, intentional infliction of emotional distress, and civil conspiracy, were all allowed to continue.

Westboro Baptist Church maintained that its members followed all local ordinances and complied with all police instructions. They were allowed to picket in an area designated by police about 1,000 feet from the church. Albert Snyder claimed he saw only the tops of the signs and read their contents only when he saw a news program on television afterwards. Evidence was presented showing that Albert Snyder suffered physical and emotional harm, including complications from diabetes and depression.

At the district court level, Albert Snyder was awarded a total of $5 million in damages, but the Fourth Circuit later reversed this ruling. The case was then appealed to the Supreme Court, which upheld the ruling of the Fourth Circuit: Westboro Baptist Church was within its free speech rights set forth in the First Amendment, because the primary message of its signs dealt with broad public issues rather than with one specific individual, even if that message was hurtful. The court ruling stated, “Because this Nation has chosen to protect even hurtful speech on public issues to ensure that public debate is not stifled, Westboro must be shielded from tort liability for its picketing in this case.”


Responding to Child Migration

In the summer of 2014, the United States experienced a significant increase in unaccompanied minors illegally entering the country from Central America. The number of minors apprehended as they tried to enter the U.S. nearly doubled over the previous year, from 35,200 to 66,120. The fastest growing segment of child migrants was those under 12 years old, increasing concern that vulnerable children were risking their lives on a dangerous journey to the U.S. to escape violence and poverty in their home countries. The influx posed a number of logistical and ethical dilemmas for state and federal authorities and overwhelmed their capacity to process new migrants or even provide shelter for them.

The Obama administration responded with a multifaceted plan that included millions of dollars of emergency funding. The plan called for increased border enforcement, deportation of those deemed economic migrants, more detention facilities, additional immigration judges to process claims for political asylum as refugees, and new programs in countries of origin that would mitigate violence and economic hardship for minors as well as discourage or intercept migrants before reaching the U.S. Because facilities at the border were being overrun, the government also transported some migrants to other parts of the country, drawing protests from local communities that tried to turn back buses filled with migrant children. The administration’s response was criticized from all quarters.

Human rights and refugee advocates, as well as many religious institutions, argued that the U.S. was neglecting its moral obligation to protect innocent and vulnerable children, many of whom were fleeing violence at the hands of criminal gangs and the drug trade. According to journalist Sonia Nazario, the influx of minors was not a crisis of illegal immigration but rather a refugee crisis: the violence in countries such as Honduras was prompting youths to flee their homes as a means of survival. Nazario argued that these refugees, like refugees in war-torn regions such as Syria, deserved legal and physical protection. She criticized the Obama administration for concentrating instead on border enforcement and the interdiction of child migrants.

Others argued the opposite point: that the crisis was brought on by weak control of U.S. borders. According to Jessica Vaughn, director of policy studies at the Center for Immigration Studies, the ongoing crisis was “the best evidence yet that lax enforcement, both at the border and within the country, and talk of amnesty only bring more illegal immigration.” She and others promoting stronger limits on immigration urged the Obama administration to turn back those who entered the country illegally on the grounds that the only way to end this crisis was to stem the tide of migrants before they got to the U.S.


Welfare Reform

In 1996, Democratic President Bill Clinton and a Republican-led Congress passed the Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA), also known as the “Welfare Reform Act.” This bill changed how government-funded welfare operated in the United States. PRWORA reduced the amount of federal spending for low-income families, placed a limit on the number of years a person could receive federal financial assistance, and required recipients to work within two years of receiving benefits. It also included provisions that limited the funding available to unmarried parents under the age of 18, enhanced legal enforcement of child support, and restricted funding for immigrants. Republican supporters believed these provisions would curb the number of out-of-wedlock births.

The bill ignited a decades-long debate about individual responsibility versus social responsibility and the role of the government in directly alleviating poverty. On the one hand, the bill was heralded as an important step toward helping welfare recipients achieve self-reliance and employment. Through this bill, Clinton aimed to “end welfare as we know it” by creating job opportunities that would help stop a cycle of poverty and dependency. Republican Speaker of the House Newt Gingrich and his colleagues in Congress pressured Clinton to make the bill even more austere. They argued that reducing welfare funding reinforced core American values of individual responsibility, hard work, independence, and free enterprise.

Critics of the bill argued that it negatively affected the most vulnerable people in society. Several members of Clinton’s administration even resigned as a result of the bill. One of these detractors, Peter Edelman, argued that welfare reform would not solve the problem, but rather drive millions more people into poverty, many of them single mothers and their children. During the debate, Senator Edward Kennedy called the bill “legislative child abuse.” From this perspective, the government was essentially abdicating its responsibility to care for children and impoverished people who are systemically disadvantaged.

The bill was effective at getting people off welfare at first, in part due to a booming economy in the late 1990s. By 2000, welfare caseloads were at their lowest level in 30 years. However, wages tended to be barely above the poverty line and did not provide long-term financial stability. Financial instability was exacerbated by the economic downturn in 2008. In a 2016 report from the Center on Budget and Policy Priorities examining the effects of PRWORA and related policies, research showed several findings: “Employment increases…were modest and faded over time;” “Stable employment…[was] the exception, not the norm;” “Most recipients…never found work even after participation in work programs…;” “The large majority of individuals…remained poor, and some became poorer;” and “Voluntary employment programs can significantly increase employment without the negative impacts of ending basic assistance…”

The government’s role in supporting the poor through direct aid remains an active debate in the U.S. today.


Cyber Harassment

In many ways, social media platforms have created great benefits for our societies by expanding and diversifying the ways people communicate with each other, and yet these platforms also have the power to cause harm. Posting hurtful messages about other people is a form of harassment known as cyberbullying. Some acts of cyberbullying may not only be defamatory but may also lead to serious consequences. In 2010, Rutgers University student Tyler Clementi jumped to his death a few days after his roommate used a webcam to observe and tweet about Tyler’s sexual encounter with another man. Jane Clementi, Tyler’s mother, stated, “In this digital world, we need to teach our youngsters that their actions have consequences, that their words have real power to hurt or to help. They must be encouraged to choose to build people up and not tear them down.”

In 2013, Idalia Hernández Ramos, a middle school teacher in Mexico, was a victim of cyber harassment. After discovering that one of her students had tweeted that the teacher was a “bitch” and a “whore,” Hernández confronted the girl during a lesson on social media etiquette. When Hernández asked why the girl would post such hurtful messages that could harm the teacher’s reputation, the student meekly replied that she had been upset at the time. The teacher responded that she was very upset by the student’s actions. Demanding a public apology in front of the class, Hernández stated that she would not allow “young brats” to call her those names. Hernández uploaded a video of the confrontation online, where it attracted much attention.

While Hernández was subject to cyber harassment, some felt she went too far by confronting the student in the classroom and posting the video for the public to see, raising concerns over the privacy and rights of the student. Sameer Hinduja, who writes for the Cyberbullying Research Center, notes, “We do need to remain gracious and understanding towards teens when they demonstrate immaturity.” Publicly confronting a teenager for venting her anger may infringe upon her basic rights to freedom of speech and expression. Yet, as Hinduja explains, teacher and student were both perpetrators and victims of cyber harassment. All the concerns of both parties must be considered and, as Hinduja wrote, “The worth of one’s dignity should not be on a sliding scale depending on how old you are.”


Patient Autonomy & Informed Consent

In the context of health care in the United States, the value placed on autonomy and liberty was cogently expressed by Justice Benjamin Cardozo in Schloendorff v. Society of New York Hospital (1914), when he wrote, “Every human being of adult years and sound mind has a right to determine what shall be done with his own body.” This case established the principle of informed consent, which has become central to the ethics of modern medical practice. However, a number of events since 1914 have illustrated how the autonomy of patients may be overridden. In Buck v. Bell (1927), Justice Oliver Wendell Holmes wrote that the involuntary sterilization of “mental defectives,” then a widespread practice in the U.S., was justified, stating, “Three generations of imbeciles are enough.” Another example, the Tuskegee Syphilis Study, in which African-American males were denied life-saving treatment for syphilis as part of a scientific study of the natural course of the disease, began in 1932 and was not stopped until 1972.

Providing advice related to topics of bioethics, the President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research stated, “Informed consent is rooted in the fundamental recognition—reflected in the legal presumption of competency—that adults are entitled to accept or reject health care interventions on the basis of their own personal values and in furtherance of their own personal goals.” But what of circumstances where patients are deemed incompetent through judicial proceedings, and where someone else is designated to make decisions on behalf of a mentally incompetent individual?

Consider the following case:

A middle-aged man was involuntarily committed to a state psychiatric hospital because he was considered dangerous to others due to severe paranoid thinking. His violent behavior was controlled only by injectable medications, which were initially administered against his will. He had been declared mentally incompetent, and the decisions to approve the use of psychotropic medications were made by his adult son, who had been awarded guardianship and who held medical power of attorney.

While the medications suppressed the patient’s violent agitation, they made little impact on his paranoid symptoms. His chances of being able to return to his home community appeared remote. However, a new drug was introduced into the hospital formulary that, if used with this patient, offered a strong possibility that he could return home. The drug, however, was only available in pill form, and the patient’s paranoia included fears that others would try to poison him. The suggestion was made to grind up the pill and administer the drug surreptitiously by mixing it into pudding.

Hospital staff checked with the patient’s son and obtained informed consent from him. The “personal values and…personal goals” of the son and other family members were seen to substitute for those of the mentally incompetent patient—and these goals included the desire for the patient to live outside of an institution and close to loved ones in the community. This was the explicitly stated rationale for the son’s agreeing to the proposal to hide the medication in food. However, staff were uncomfortable about deceiving the patient, despite having obtained informed consent from the patient’s guardian.


Approaching the Presidency: Roosevelt & Taft

Theodore Roosevelt, President of the United States from 1901 to 1909, embodied what many scholars refer to as the ‘stewardship presidency.’ In Roosevelt’s words, it was the president’s “duty to do anything that the needs of the nation demanded unless such action was forbidden by the Constitution or by the laws.” Under Roosevelt’s expansionist view, anything the president does is considered acceptable unless it is expressly forbidden by the Constitution or laws passed by Congress. Roosevelt believed he served the people, not just the government. He took many actions as president that stretched the limits of the executive branch, including creating national parks without regard for states’ jurisdiction and fostering a revolt in Colombia to establish the Panama Canal.

On the other hand, William Howard Taft, President of the United States from 1909 to 1913, embodied what many scholars refer to as a ‘strict constructionist’ model of the presidency. Under this approach, unless the Constitution or Congress explicitly grants a certain power, the president does not have the right to act. In Taft’s words, “the President can exercise no power which cannot be fairly and reasonably traced to some specific grant of power or justly implied and included within such express grant as proper and necessary to its exercise.”

While Roosevelt expanded federal power in many areas, Taft felt many of these actions were legal overreaches. For example, as a “trust-buster,” Roosevelt differentiated between ‘good’ trusts and ‘bad’ trusts, using his expanded powers as president to make this distinction unilaterally. He made a ‘gentlemen’s agreement’ with U.S. Steel, telling the company that the American government would not attack it as a monopoly, since he believed it was working in the interests of the American people. Roosevelt did not, however, secure any legislation or issue any binding orders to this effect. Taft took a more legalistic view and later, as president, directed his attorney general to file an anti-trust lawsuit against U.S. Steel. Roosevelt took Taft’s actions as a personal attack upon his own presidency and positions.

Although Taft continued many of Roosevelt’s policies, he was inclined to examine the facts of a situation and make a choice based on evidence. Roosevelt, on the other hand, was more inclined to do what he felt was “right.” Their disagreements, which hinged on the grey areas between the legal and the ethical, ultimately propelled the split within the Republican Party during the 1912 elections.


Prenatal Diagnosis & Parental Choice

In the United States, many citizens agree that the government may impose limits on the freedom of individuals when individuals interfere with the rights of others, but the extent of these limits is often a topic of debate. Among the most debated bioethical issues is abortion, which hinges on whether the fetus is a person with rights, notably the right to life.

In conjunction with the legal right to abortion affirmed by the Supreme Court in Roe v. Wade, prenatal diagnosis has led some pregnant women to pursue abortion when testing reveals genetic abnormalities in the fetus. However, this practice has met with recent opposition in the wake of research showing that between 60 and 90 percent of fetal diagnoses of Down syndrome have led to abortion. In 2015, legislation was introduced in the Ohio Legislature that would make it illegal to terminate a pregnancy for the purpose of avoiding giving birth to a baby with Down syndrome.

Those opposed to this legislation have noted that such a law would violate the Roe v. Wade decision by the Supreme Court, and that laws based on intention or motivation to terminate would be unenforceable. “This is interference with a medical decision following a complicated diagnosis,” according to Kellie Copeland, executive director of NARAL Pro-Choice Ohio. “Not knowing the family and the circumstances, the legislature can’t possibly take into account all the factors involved.”

Supporters of the legislation have described this as a way to limit the number of abortions in the state and protect babies born with disabilities. Mike Gonidakis, president of Ohio Right to Life, stated, “We all want to be born perfect, but none of us are, and everyone has a right to live, perfect or not.” Rachel Mullen, a member of the Cuyahoga County chapter of Ohio Right to Life, said in an interview, “we need this bill so that [babies with Down syndrome] can be born, and not culled.”

Teaching Note:

Bioethics examines the moral dimensions surrounding the use of medical technology, raising questions such as: Should all scientific advances in medicine be made available to all? Do some advances conflict with society’s values and morals? What role should the government play in the moral decision-making of individuals with respect to limiting or expanding the choices available to them? These are broader questions to keep in mind while reading and discussing this case study.


Stangl & the Holocaust

Franz Stangl was born in Austria in 1908. From a working-class family, Stangl trained as a master weaver. Dissatisfied with his career, he applied at the age of 23 to become a police officer. In 1936, despite his position in law enforcement, he joined the ranks of the then-illegal Nazi Party. When Germany invaded and subsequently annexed Austria in March 1938, he became a Gestapo agent. In 1940, under the order of Nazi leaders, Stangl was appointed head of security at Hartheim Castle. At the time, Hartheim was one of the secret killing centers used by the authorities to administer “mercy deaths” to sick and disabled persons. A special unit within the German administration, codenamed T4, carried out this so-called “euthanasia” program. T4 employed doctors, nurses, lawyers, and police officers, among others, at killing centers in Germany and Austria. In all, historians estimate that the staff at Hartheim had killed 18,269 people by August 1941.

After a brief stint in Berlin, Stangl transferred to German-occupied Poland in the spring of 1942. Nazi authorities appointed Stangl to be the first commandant of the killing center at Sobibór. By September 1942, having distinguished himself as an effective organizer, Stangl was transferred to what would become the most horrible of these death camps, Treblinka. While there, he managed and perfected a system of mass murder, using psychological techniques to first deceive, then terrify and subdue his victims before they entered the gas chambers. In less than 18 months, under Stangl’s supervision, between 870,000 and 925,000 Jews were killed at Treblinka.

After the war, Franz Stangl and his family emigrated to Brazil, where he lived and worked under his own name for decades. He was extradited to West Germany in 1967 and tried for his role in the murder of 900,000 men, women, and children during the Holocaust. During his trial, Stangl claimed that he was doing his duty and was not a murderer. Stangl defended himself by making three main claims: first, that he did not get to choose his postings, and that disobeying an order would have put him and his family at risk; second, that once in a position, it was his nature to do an excellent job (he became known as the best commandant in Poland); and third, that he never personally murdered anyone. He saw himself as an administrator. Stangl claimed that his dedication to his work was not about ideology or hatred of Jews.

On October 22, 1970, the court found Stangl guilty of crimes against humanity and sentenced him to the maximum penalty, life in prison. During an interview while in prison, he stated, “My conscience is clear about what I did, myself. …I have never intentionally hurt anyone, myself. …But I was there. …So yes, in reality I share the guilt.” He continued, “My guilt…is that I am still here. That is my guilt.” On June 28, 1971, less than a day after this interview, Stangl died of heart failure in prison.


Cheating: Atlanta’s School Scandal

In 2006, Damany Lewis was a 29-year-old math teacher at Parks Middle School in Atlanta. The school was in a run-down neighborhood three miles south of downtown that was plagued by armed robberies. Lewis himself had grown up in a violent neighborhood. He empathized with his students and was devoted to their success. A colleague described Lewis as a “star teacher” and a “very hard worker, who will go the extra mile.”

Lewis was a teacher when Beverly Hall was Atlanta’s school superintendent. Hall believed that business approaches and the values of the market system could save public education. She set accountability measures for the Atlanta school district and created performance objectives that were tougher than those of No Child Left Behind, the federal program that became law in 2002. Teacher evaluations were linked to students’ performance on standardized tests. Schools whose students did not make appropriate progress toward the standardized test goals received escalating sanctions that culminated in replacement of the faculty and staff, and restructuring or closing of the school.

Parks Middle School was in dire straits, having been classified as “a school in need of improvement” for the previous five years. Unless 58 percent of its students passed the math portion of the standardized test and 67 percent passed the language arts portion, Parks Middle School could be closed down. Its students would be separated and bused across town to different schools.

“[It] was my sole obligation to never let that happen,” Lewis later told Rachel Aviv in an article about these events in The New Yorker. Lewis had pushed his students to work harder than they ever had in preparing for the test. But he knew that it would be very difficult for many of them to pass. Christopher Waller, the new principal of Parks, had heard that teachers in the elementary schools that fed into Parks had changed their students’ answers on the standardized tests under the guise of erasing stray pencil marks. Waller asked Lewis and other teachers to do the same. Lewis found the exams of students who needed to get a few more questions right in order to pass. He changed their answers. If he did not change their scores, Lewis feared that his students would lapse into “why try” attitudes. They would lose their neighborhood school and the community that had developed within it.

Thanks to Lewis and other teachers, Parks students did better than ever on the standardized tests. Neekisia Jackson, a former student at Parks at the time, recalled, “Everyone was jumping up and down,” after a teacher announced the school had met the goals of No Child Left Behind for the first time. Jackson continued, “We had heard what everyone was saying: ‘Y’all aren’t good enough.’ Now we could finally go to school with our heads held high.”

The same process of changing answers continued at Parks through 2010. By that time, nine other teachers were helping Lewis change answers.

In October of 2010, 50 agents of the Georgia Bureau of Investigation visited Parks and other Atlanta schools. The investigators concluded that teachers and administrators at 44 schools had cheated in the manner that Lewis had. In July of 2012, 110 teachers who had confessed or been accused of cheating were placed on administrative leave, including Lewis. Later that year, Lewis’ employment was terminated.

This case study is based on an article by Rachel Aviv entitled, “Wrong answer: In an era of high-stakes testing, a struggling school made a shocking choice,” that appeared in The New Yorker on July 21, 2014.


Edward Snowden: Traitor or Hero?

In 2013, computer expert and former CIA systems administrator Edward Snowden released confidential government documents to the press about the existence of government surveillance programs. According to many legal experts, and the U.S. government, his actions violated the Espionage Act of 1917, which identified the leak of state secrets as an act of treason. Yet despite the fact that he broke the law, Snowden argued that he had a moral obligation to act. He gave a justification for his “whistleblowing” by stating that he had a duty “to inform the public as to that which is done in their name and that which is done against them.” According to Snowden, the government’s violation of privacy had to be exposed regardless of legality.

Many agreed with Snowden. Jesselyn Radack of the Government Accountability Project defended his actions as ethical, arguing that he acted from a sense of public good. Radack said, “Snowden may have violated a secrecy agreement, which is not a loyalty oath but a contract, and a less important one than the social contract a democracy has with its citizenry.” Others argued that even if he was legally culpable, he was not ethically culpable because the law itself was unjust and unconstitutional.

The Attorney General of the United States, Eric Holder, did not find Snowden’s rationale convincing. Holder stated, “He broke the law. He caused harm to our national security and I think that he has to be held accountable for his actions.”

Journalists were conflicted about the ethical implications of Snowden’s actions. The editorial board of The New York Times stated, “He may have committed a crime…but he has done his country a great service.” In an Op-ed in the same newspaper, Ed Morrissey argued that Snowden was not a hero, but a criminal: “by leaking information about the behavior rather than reporting it through legal channels, Snowden chose to break the law.” According to Morrissey, Snowden should be prosecuted for his actions, arguing that his actions broke a law “intended to keep legitimate national-security data and assets safe from our enemies; it is intended to keep Americans safe.”


Cadavers in Car Safety Research

In 1993, it was widely disclosed that research engineers at Heidelberg University in Germany had used 200 adult and child cadavers in simulated car crash tests. The researchers argued that the use of human cadavers was necessary to study the actual effects of these crashes on the body. They insisted that the research would save lives because it would help engineers design safer cars.

There was significant public outcry against this practice from numerous groups. The ADAC, Germany’s largest automobile club, issued a statement challenging the research on ethical grounds: “In an age when experiments on animals are being put into question, such tests must be carried out on dummies and not on children’s cadavers.” Rudolph Hammerschmidt, spokesman for the Roman Catholic German Bishops’ Conference similarly decried the practice, arguing, “Even the dead possess human dignity…this research should be done with manikins.” Political leaders also weighed in on the debate. Klaus von Trotha, research minister of Baden-Wuerttemberg state, questioned the study: “Our constitution guarantees freedom in scientific research. But the constitution also guarantees the protection of human dignity.”

The university defended its research by pointing to the results. Dr. Rainer Mattern, the head of Heidelberg University’s forensic pathology department, responded to public reaction against the use of child cadavers, arguing, “The tests have saved lives of other children.”

When it was revealed that similar tests were being conducted in the United States at Wayne State University, some U.S. officials offered their support. George Parker, the associate administrator for research at the National Highway Traffic Safety Administration argued, “We need that type of data to find out how people are injured in crashes to know what areas of the body are injured under what conditions.” He added that human subjects were necessary to determine the validity of the data gathered from crash test dummies: “If you didn’t do this testing, you wouldn’t know what limits to put on dummies for crash tests.”

For many, the debate ultimately hinged on whether the research yielded information not attainable from crash dummies and whether or not the families gave their consent to the use of the cadavers.


Full Disclosure: Manipulating Donors

Jenny, a university student studying public relations, accepted an internship position in the fundraising department at Casa Tia Maria.* Casa Tia Maria is a non-profit organization in the United States that provides shelter for Central American immigrants while they look for permanent housing and employment. In addition to shelter, Casa Tia Maria provides food, clothing, and English classes. Most immigrants stay at the shelter for several months before securing permanent housing.

After Jenny had worked at Casa Tia Maria for two weeks, Mary, the director of development, asked Jenny to accompany her to a fundraising dinner at a luxurious downtown hotel. Many wealthy and influential individuals were in attendance. After most of the guests had left, Mary and Jenny were approached by Robert, a Texas oil baron and one of the state’s biggest philanthropists. Robert was known to donate to almost any cause as long as he found it to be what he considered “morally sound” and to the benefit of “hard-working Americans.”

Mary and Robert talked for a few minutes about Casa Tia Maria and its specific needs. Jenny noticed, however, that most of Mary’s answers to Robert’s questions about the shelter’s clients were vague. When Robert said that he was happy to lend a hand to any poor American citizen, Jenny knew he clearly did not understand that immigrants, who were not U.S. citizens, were the shelter’s clientele. Mary said nothing to correct Robert’s misperception.

Robert pulled a checkbook out of his jacket and wrote a substantial check. As he handed it to Mary, he said, “I am so pleased to be able to help hard-working Americans.” He then turned quickly and walked away.

*This case study is based on actual experiences of a university student. Names and situations have been changed, but the case study reflects the key ethical dilemmas the student faced.


In-FUR-mercials: Advertising & Adoption

The Animal Foundation is a nonprofit organization operating Nevada’s largest open-admission animal shelter, the Lied Animal Shelter and pet adoption center. The Lied Animal Shelter is located in Las Vegas and is financed by taxpayers, grants, and individual donors. It provides a refuge for thousands of lost, unwanted, neglected, and abandoned animals every year.

In recent years, the Lied Animal Shelter has been plagued by a variety of problems stemming from overcrowding, the result of a spike in animal intake as residents of the greater Las Vegas area (Clark County) surrendered or lost their pets. Analysts believed that the recession of 2008 was a major contributing factor to pet abandonment. April Corbin, writing for Las Vegas Weekly, reported:

“The Las Vegas Valley has a problem with domestic animals: we have more that we seem able or willing to handle, and those without homes mostly end up at the Lied Shelter. On any given day, it may be the busiest animal holding facility in the nation. …Some blame the recession, which led to the foreclosures of more than 150,000 homes in Clark County between January 2007 and May 2014, triggering the wholesale abandonment of animals.”

In 2013, the Lied Animal Shelter took in over 40,000 abandoned or lost animals. From that population, more than 10,000 animals were adopted, nearly 5,000 were reunited with their owners, and over 2,500 were transferred to other facilities. But 21,000 animals—more than half of the animals brought to the shelter—were euthanized. Many in Clark County were discouraged by the seemingly insurmountable problems that the Lied Animal Shelter faced.

Leaders at R&R Partners, a full-service, international advertising agency headquartered in Las Vegas, believed that their persuasive communication skills could help solve the Animal Foundation’s problem. R&R took on the nonprofit as a pro bono client with the goals of promoting pet ownership and driving traffic to the Animal Foundation’s pet adoption website, NewPetNow.com. The agency staff conducted qualitative research in the form of focus groups with R&R employees who were pet owners. They came up with the strategy of framing pet adoption not around love and companionship but around pets’ many household uses (e.g., alarm system, sleeping mask, vacuum cleaner), delivered with a tongue-in-cheek tone. The agency staff created an integrated communication campaign of “In-FUR-mercial” spoofs that portrayed pets as multi-purpose products for the home. Below are links to examples of the “Pet Dog” and “Pet Cat” In-FUR-mercials; examples of print ads (Exhibits 1 and 2) follow in the Reference section.

https://www.youtube.com/watch?v=ChMYMHvpJ0c

https://www.youtube.com/watch?v=MlD3BZHtcqA

After the release of the ads in early 2015, the campaign immediately received critical acclaim from industry analysts. ADWEEK contributor Gabriel Beltrone stated, “The writing is sharp and funny, the acting perfectly overdone, and the voiceover as cheesy as possible—dead-on parody.” The In-FUR-mercials also received Cynopsis Media’s award for the Best 30-Second Spot.

The campaign connected with audiences in Las Vegas and generated positive press for the Animal Foundation and the Lied Shelter, helping them to achieve their goal of increasing pet adoption. The percentage of available pets adopted increased by 9.39 percent during 2015, which meant that more than 1,000 additional animals were adopted.

Leaders at R&R Partners acknowledged that the campaign also produced important benefits for the agency that extended beyond the success and visibility of the campaign itself. Morale and camaraderie within the agency increased, and the agency’s reputation as a responsible corporate citizen was reinforced. Sarah Catletti, an account supervisor at R&R Partners, described the benefits to the agency:

“Welcoming the Animal Foundation to R&R Vegas’ list of clients was a great way to boost morale within the agency. The pro bono client was chosen through an employee voting system. Since the Animal Foundation was the organization that received the largest number of votes, the entire agency was invested and excited to hear about the work, even those who weren’t directly involved with the account.”


Blurred Lines of Copyright

In 2013, Robin Thicke and Pharrell Williams co-produced the runaway hit single “Blurred Lines,” earning them over $16 million in sales and streaming revenues. The music video has been viewed hundreds of millions of times on YouTube and Vevo, and has been parodied numerous times as well. Despite its popularity, the similarity of “Blurred Lines” to Marvin Gaye’s 1977 hit song “Got to Give It Up” sparked controversy. The family of artist Marvin Gaye was outraged; they believed Gaye’s work had been stolen. Thicke filed a preemptive lawsuit to prevent the Gaye family from claiming any share of royalties. However, Thicke also stated in public interviews that he was influenced by Marvin Gaye and, specifically, “Got to Give It Up” when he co-composed “Blurred Lines” with Williams.

In response, the Gaye family sued Williams and Thicke. Contradictions were apparent in Thicke’s account. In an interview with GQ, he stated that he co-wrote “Blurred Lines.” But in court he claimed that he was too high in the studio, and that Williams had in fact composed the song, and he had lied earlier in order to get credit. Williams claimed that, although Gaye’s music had influenced him in his youth, he did not copy Gaye’s song in his composition.

In March 2015, the jury ruled in favor of the Gaye estate, stating that while Williams and Thicke did not directly copy “Got to Give It Up,” there was enough of a similar “feel” to warrant copyright infringement. Gaye’s heirs were awarded $7.4 million in damages, the largest amount ever granted in a music copyright case.

While many commentators agreed with this verdict, others were concerned that it could negatively affect song writing within an entire genre. Musicologist Robert Fink, for example, stated that this verdict had the potential to set a precedent for “fencing off our shared heritage of sounds, grooves, vibes, tunes, and feels.” Musicians, artists, and writers often note that previous works influence them in their creative process, and that there is very little that is completely original. Thicke and Williams did not see the musical influence of Gaye as copyright infringement, but rather as inspiration that spurred them to create a new, original single.


Buying Green: Consumer Behavior

Green consumer products, such as organic food, fair trade coffee, or electric cars, represent a fast-growing segment of the consumer market. In the area of organic food alone, data from the Organic Trade Association reveals that consumer demand in the United States has seen double-digit growth every year since 1990. In 2014, the organic food market reached almost $40 billion in sales. Consumers of these products tend to be seen in a more positive light—they are deemed more ethical, more altruistic, and kinder than people who do not buy green products. But is there another side to this kind of consumer behavior?

In a series of experiments comparing consumption of green and “conventional” products, psychologists Nina Mazar and Chen-Bo Zhong demonstrated that those people who bought green products—like eco-friendly laundry detergent or organic yogurt—were less likely to share money with a stranger, more likely to cheat on a task in which they could earn money, and more likely to steal money when they thought they would not get caught. As the psychologists stated, “purchasing green products may license indulgence in self-interested and unethical behaviors.”

Mazar and Zhong, whose study received considerable media attention in their native Canada, as well as in American and British publications, said the results surprised them. Initially, they expected green products to provide a halo effect, whereby the positive impressions associated with green consumption would lead to positive outcomes in other areas. “Given that green products are manifestations of high ethical standards and humanitarian considerations, mere exposure [to them would] activate norms of social responsibility and ethical conduct,” said Mazar and Zhong in an interview.

But as the results indicate, the opposite can be true. “The message of this research is that actions which produce a sense of self content and moral glow can sometimes backfire,” Mazar stated in another interview.

These patterns have been shown to extend to other shopping scenarios. For example, one study tracked scanner data and shopper receipts at a California grocery store. Those shoppers who brought reusable grocery bags with them were more likely to buy environmentally friendly products, like organic food. But they were also more likely to buy indulgent products, like ice cream, cookies, candy, and cake. The researchers followed up this study with a series of experiments that showed these moral licensing effects only happened when the decision to bring the reusable bags was at the shopper’s discretion. When shoppers were told that the store required customers to use cloth bags, licensing effects disappeared and customers chose not to buy indulgent products. Only when consumers felt like using cloth bags was their own idea did the moral licensing effects hold.


Appropriating “Hope”

Artists commonly appropriate, or borrow, objects or images and include them in their artwork. Andy Warhol, for example, is well known for appropriating images of Campbell’s soup cans for his pop art. Typically, the original object or image remains recognizable, but the new work of art transforms or recontextualizes the borrowed image or object in order to generate new meaning. Many artists believe that without artistic appropriation, creating new art would not be possible. On the other hand, the line between copyright infringement and fair use is not always clear.

In 2008, Shepard Fairey appropriated an Associated Press (A.P.) photo of Barack Obama to create his well-known “Hope” image of the presidential candidate. In 2009, Fairey filed a preemptive lawsuit against The A.P., requesting that the court declare protection from any copyright infringement claims on the basis of fair use. Fair use is the copying of copyrighted material for limited “transformative” purposes, such as criticism, parody, or commentary. Fairey acknowledged that his image was based on a 2006 photograph taken by A.P. photographer Mannie Garcia. The A.P. claimed that any use of the photo required permission and asked for credit and compensation.

Anthony T. Falzone, executive director of the Fair Use Project and one of Fairey’s lawyers, said that Fairey only used the original image as a reference and transformed it into a “stunning, abstracted and idealized visual image that created powerful new meaning and conveys a radically different message.” Paul Colford, spokesman for The A.P., said, “[The A.P. was] disappointed by the surprise filing by Shepard Fairey and his company and by Mr. Fairey’s failure to recognize the rights of photographers in their works.” Mannie Garcia argued that he actually owned the copyright to the photo, not The A.P., according to his contract at the time. He stated, “I don’t condone people taking things, just because they can… But in this case I think it’s a very unique situation… If you put all the legal stuff away, I’m so proud of the photograph and that Fairey did what he did artistically with it, and the effect it’s had.”

After two years in court, Shepard Fairey and The A.P. settled the case with an undisclosed financial agreement. Fairey also gave up fair use rights to any other A.P. photos, and both sides agreed to share the rights to make posters and merchandise based on the “Hope” image.


The Collapse of Barings Bank

Founded in 1762, Barings Bank was a United Kingdom institution with worldwide reach. Even the Queen of England had an account there. In 1989, Nick Leeson was hired at Barings, where he prospered. He was quickly promoted to the trading floor and appointed manager in Singapore, where he traded on the Singapore International Monetary Exchange (SIMEX). Leeson was an aggressive trader, making large profits in speculative trading; in 1993, his profits constituted almost 10% of Barings’ total profits. He developed a reputation for expertise and near-infallibility, and his superiors in London gave him little supervision.

In July 1992, a new Barings employee suffered a small loss on Leeson’s watch. Leeson did not wish to lose his reputation for infallibility, or his job, so he hid the loss in an error account. Leeson attempted to make back the loss through speculative trading, but this led to even bigger losses, which again were hidden in this account. He kept doubling up his bets in an attempt to get out from under the losses. Leeson later said: “[I] wanted to shout from the rooftops…this is what the situation is, there are massive losses, I want to stop. But for some reason you’re unable to do it. … I had this catastrophic secret which was burning up inside me—yet…I simply couldn’t open my mouth and say, ‘I’ve lost millions and millions of pounds.’”

Leeson took out a short-term, highly leveraged bet on the Nikkei index in Japan. At the same time, a severe earthquake in Kobe, Japan sent the index plummeting, and his loss was so huge that he could no longer hide it. Barings, a 233-year-old bank, collapsed overnight and was bought by ING for £1. Leeson fled to Malaysia, Thailand, and finally to Germany, where he was arrested and extradited to Singapore. He pleaded guilty to two counts of deceiving bank auditors (including forging documents) and cheating the SIMEX. Leeson was sentenced to six and a half years in prison in Singapore, but served only four years due to a diagnosis of colon cancer, which he ultimately survived.


The FBI & Apple Security vs. Privacy

In December 2015, the FBI obtained the iPhone of one of the shooters in an ISIS-inspired terrorist attack that killed 14 people in San Bernardino, California. As part of the investigation, the FBI attempted to gain access to the data stored on the phone but was unable to penetrate its encryption software. Lawyers for the Obama administration approached Apple for assistance with unlocking the device, but negotiations soon broke down. The Justice Department then obtained a court order compelling Apple to help the FBI unlock the phone. Apple CEO Timothy Cook publicly challenged the court in an open letter, sparking an intense debate over the balance between maintaining national security and protecting user privacy.

Apple and its supporters, including top technology companies such as Google and Facebook, made the case on several fronts that the court order threatened the privacy of all individuals. First, according to Apple, the order effectively required the company to write code, violating its First Amendment right to free speech by forcing the company to “say” something it did not want to say. Previous court cases had already established computer code as legally protected speech. Second, such a backdoor, once created, could fall into the wrong hands and threaten the privacy of all iPhone owners. Finally, it would set a dangerous precedent; law enforcement could repeatedly require businesses such as Apple to assist in criminal investigations, effectively making technology companies agents of the government.

Representatives from both sides of the political aisle offered several arguments in favor of the Justice Department’s efforts and against Apple’s stance. Their central claim was that the U.S. legal system establishes constraints on the government’s access to private information which prevent abuse of search and surveillance powers. At the same time, the law still allows authorities to gain access to information that facilitates prevention and prosecution of criminal activities, from terrorism to drug trafficking to child pornography. Critics of Apple also rejected the slippery slope argument on the grounds that, if Apple cooperated, it could safeguard the code it created and keep it out of the hands of others, including bad actors such as terrorists or criminal groups. Moreover, Apple was accused of being too interested in protecting its brand, and even unpatriotic for refusing to comply with the court order.

Ultimately, the FBI dropped the case because it was able to circumvent the encryption on the iPhone without Apple’s help.


Gaming the System: The VA Scandal

In the United States, the Veterans Administration (VA) is tasked with, among other things, providing quality health care for U.S. military veterans. Chronically underfunded, the agency was having difficulty providing care in a timely manner. At various locations around the country, veterans were put on lengthy wait lists before they could receive care.

Turning to a common private sector solution in an attempt to reduce wait times, the VA provided bonuses to administrators who could reduce veterans’ wait times for doctor and hospital appointments. While these incentives were meant to spur more efficient and productive health care for veterans, not all administrators complied as intended.

In one hospital, the goal was to reduce wait times to less than 14 days. Clerks recorded the wait time as the number of days between the first available appointment date and the veteran’s scheduled appointment date, disregarding any days prior to the first available date. In an email to colleagues, one clerk admitted, “Yes, it is gaming the system a bit. But you have to know the rules of the game you are playing, and when we exceed the 14-day measure, the front office gets very upset.”

At some locations, veterans were put on an electronic waiting list. After waiting for up to six weeks to move to the top of that list, they were finally able to call for a doctor’s appointment. If that appointment occurred soon after the call, it was counted as reducing the wait time; the time spent on the preliminary electronic waiting list was not counted. At other locations, VA officials used two sets of books, one recording the real wait times and another recording much shorter wait times that would be used to report success to superiors.

Using these and other maneuvers, executives in the VA qualified for millions of dollars of bonuses, even though actual wait times continued to lengthen. Following an audit of these practices, financial incentives for all Veterans Health Administration executives were suspended for the 2014 fiscal year. As of 2016, investigations remain ongoing.


Limbaugh on Drug Addiction

Debates on the distribution, sale, and use of illegal drugs have been prominent in United States politics for the past several decades. Political commentator and talk show host Rush Limbaugh has become well known for his outspoken opinions on a number of political and social issues, including drug abuse.

During his talk show on October 5, 1995, Limbaugh stated: “There’s nothing good about drug use. We know it. It destroys individuals. It destroys families. Drug use destroys societies. Drug use, some might say, is destroying this country. And we have laws against selling drugs, pushing drugs, using drugs, importing drugs. And the laws are good because we know what happens to people in societies and neighborhoods which become consumed by them. And so if people are violating the law by doing drugs, they ought to be accused and they ought to be convicted and they ought to be sent up.” Limbaugh argued that drug abuse was a choice, not a disease, and that it should be combatted with strict legal consequences.

In October 2003, news outlets reported that Limbaugh was under investigation for illegally obtaining prescription drugs. Limbaugh illegally purchased hundreds of prescription pills per month over a period of several years. He engaged in the practice of “doctor shopping” by visiting different doctors to obtain multiple prescriptions for drugs that would otherwise be illegal. When this was disclosed, Limbaugh checked into a treatment facility. He said, “Over the past several years I have tried to break my dependence on pain pills and, in fact, twice checked myself into medical facilities in an attempt to do so…. I have recently agreed with my physician about the next steps.”

Though doctor shopping was punishable by up to five years in prison under Florida law, charges against Limbaugh were dropped after he sought help and agreed to the prosecutors’ settlement. Limbaugh has said that he became addicted to painkillers as a result of serious back pain.


Selling Enron

In the late 1990s, the state of California deregulated many of its electricity markets, opening them up to private sector energy companies. Enron Corporation had long lobbied for deregulation of such markets and would likely have profited greatly had California’s experiment succeeded and become a model for other states.

Enron CEO Ken Lay wrote a public statement saying that Enron “believes in conducting business affairs in accordance with the highest ethical standards… your recognition of our ethical standards allows Enron employees to work with you via arm’s length transactions and avoids potentially embarrassing and unethical situations.” At the same time, Tim Belden, a key Enron employee in its energy trading group, noticed that California’s “complex set of rules…are prone to gaming.”

According to Bethany McLean and Peter Elkind, authors of The Smartest Guys in the Room: The Amazing Rise and Scandalous Fall of Enron, “In one scheme, Enron submitted a schedule reflecting demand that wasn’t there… Another was a variation of the Silverpeak experiment: Enron filed imaginary transmission schedules in order to get paid to alleviate congestion that didn’t really exist… Get Shorty was a strategy that involved selling power and other services that Enron did not have for use as reserves…”

Some Enron employees admitted that their schemes were “kind of squirrelly,” but used them because they were profitable. The impact on customers was clear: electricity prices rose and rolling blackouts occurred. Enron’s profits, however, quadrupled. An Enron lawyer later wrote that the Enron traders did not think “they did anything wrong.” Another employee admitted, “The attitude was, ‘play by your own rules.’ …The energy markets were new, immature, unsupervised. We took pride in getting around the rules.”

In October 2001, Enron’s unethical and illegal business practices became public knowledge. Enron’s stock prices plummeted, and the company filed for bankruptcy in December 2001.


Banning Burkas: Freedom or Discrimination?

In September 2010, the French Parliament passed a bill prohibiting people from concealing their faces in public areas. While this law applied to all citizens and all forms of face covering, it became known as France’s “burka bill” because the rhetoric surrounding the bill targeted Muslim women who wore burkas—religious garments covering the face and body—in public.

French lawmakers argued that the law was important for the separation of church and state and for the emancipation of women. Similar to the 2004 bill that outlawed the use of conspicuous religious symbols in public schools, including Muslim headscarves and Christian crosses, this law sought to further remove religious expression and iconography from public spaces in France. Some legislators argued that the burka was a harmful symbol of gender inequality that forced women to assume a subservient status to men in public. According to them, the law freed women from a discriminatory, patriarchal subculture.

However, some in the French Muslim community saw the bill as an infringement of religious freedom and an act of cultural imperialism. They argued that French legislators were imposing their own idea of gender equality onto Muslim culture. Many of them, including some women, argued that wearing burkas actually emancipated women from the physical objectification so common in Western culture. A number of women protested the bill by dressing in burkas and going to the offices of lawmakers who supported the legislation. Other reports from individual women suggested that the law created a more hostile atmosphere for Islamic women in France. One of these women critiqued the bill, stating, “My quality of life has seriously deteriorated since the ban…the politicians claimed they were liberating us; what they’ve done is to exclude us from the social sphere.”

The law was challenged in 2014 and taken to the European Court of Human Rights. The court upheld the legality of the law.


Arctic Offshore Drilling

Offshore oil and gas reserves, primarily along coastlines in Alaska, California, Louisiana, and Texas, account for a large proportion of the oil and gas supply in the United States. In August 2015, President Obama authorized Royal Dutch Shell to expand drilling off Alaska’s northwest coast. His decision brought into sharp relief the different, oftentimes competing views on the expansion of offshore drilling.

Many proponents of offshore drilling argue that tapping into the vast amount of oil and gas reserves in the Arctic will help shore up national security interests for the United States, bolster its economy and workforce, and offer Americans a reliable, safe supply of oil. According to Robert Bryce, senior fellow at the Manhattan Institute for Policy Research, there are “enormous amounts of recoverable energy resources in the Arctic. The Department of Energy estimates them at something on the order of 400 billion barrels of oil equivalent in natural gas and oil. That’s four times the crude oil reserves of Kuwait.” Framed this way, drilling in the Arctic presents a way for Americans to mitigate risks from dependence on foreign oil and build the local and national economies by creating jobs and supplying cheap oil.

A competing point of view charges that offshore oil drilling poses immense risk to the environment while reinforcing a reliance on dirty, environmentally unfriendly sources of energy. Critics claim that industrial activity associated with offshore drilling in the Arctic could harm native animals, including polar bears, walruses, seals, and whales already jeopardized by climate warming and declining levels of sea ice. Environmentalists argue that oil companies have not demonstrated the capability to clean up an oil spill in water obstructed by ice. Furthermore, they contend, extracting oil only perpetuates a fossil-fuel economy and will contribute dangerously to rising global temperatures, thereby exacerbating climate change.

“Granting Shell the permit to drill in the Arctic was the wrong decision, and this fight is far from over,” said Michael Brune, executive director of the Sierra Club. “The people will continue to call on President Obama to protect the Arctic and our environment.”


The Costco Model

Costco is often cited as one of the world’s most ethical companies. It has been called a “testimony to ethical capitalism” in large part due to its company practices and treatment of employees. Costco maintains a company code of ethics which states, “The continued success of our company depends on how well each of Costco’s employees adheres to the high standards mandated by our Code of Ethics… By always choosing to do the right thing, you will build your own self-esteem, increase your chances for success and make Costco more successful, too.”

In debates over minimum wage in the United States, many commentators see Costco as an example of how higher wages can yield greater company success, often pointing to competitors such as Walmart and Target as examples that fall short in providing for their employees. Other commentators do not see Costco’s model as being easily replicable for different types of businesses, citing wages as only one of many factors to consider in companies’ best practices.

Costco tends to pay around 40% more than Walmart and Target, and provides more comprehensive health and retirement benefits, saving large amounts in employee turnover costs. The company resists layoffs, invests in training its employees, and grants them substantial autonomy to solve problems. U.S. Secretary of Labor Thomas Perez stated, “And the remarkable loyalty that [employees] have to [Costco cofounder Jim Sinegal] is a function of the fact that he categorically rejects the notion that, ‘I either take care of my shareholders or my workers.’ That is a false choice.”

While few disagree with the benefits of fair treatment of employees, some commentators credit the success of Costco to its broader business model that favors higher productivity, not employee satisfaction. Columnist and economist Megan McArdle explains, “A typical Costco store has around 4,000 SKUs [stock keeping units], most of which are stacked on pallets so that you can be your own stockboy. A Walmart has 140,000 SKUs, which have to be tediously sorted, replaced on shelves, reordered, delivered, and so forth. People tend to radically underestimate the costs imposed by complexity, because the management problems do not simply add up; they multiply.” Furthermore, McArdle notes that Costco mainly serves as a grocer rather than department store and caters to a generally affluent customer base in suburban areas.


Bullfighting: Art or Not?

Bullfighting has its roots in rituals dating back many centuries. In its modern Spanish style, bullfighting first became a prominent cultural event in the early 18th century. Yet despite its cultural significance, bullfighting continues to face increasing scrutiny in light of animal rights issues.

Some people consider bullfighting a cruel sport in which the bull suffers a severe and tortuous death. Animal rights activists often protest bullfighting in Spain and other countries, citing the needless endangerment of the bull and bullfighter. Some cities around the world where bullfighting was once popular, including Coslada (Spain), Mouans-Sartoux (France), and Teocelo (Mexico), have even declared themselves to be anti-bullfighting cities. Other places, including some towns in Catalonia (Spain), have ceased killing the bull in the fight, but continue bullfighting.

To other people, the spectacle of the bullfight is not mere sport. The event is not only culturally significant, but also a fine art in which the bullfighter is trained in a certain style and elicits emotion through the act of the fight. Writer Alexander Fiske-Harrison, in his research and training as a bullfighter, defends the practice and the circumstances of the bull: “In terms of animal welfare, the fighting bull lives four to six years whereas the meat cow lives one to two. …Those years are spent free roaming…” And others similarly argue that the death of the bull in the ring is more humane than the death of animals in a slaughterhouse.


Ebola & American Intervention

In 2014, a highly contagious and deadly virus, Ebola, emerged in Western Africa, primarily in the countries of Liberia, Sierra Leone, and Guinea. The outbreak caught world health authorities off guard, ultimately killing thousands and threatening to develop into a worldwide epidemic. A broad range of organizations and politicians, from health care authorities and Doctors Without Borders to the World Health Organization and Liberian President Ellen Johnson Sirleaf, made dramatic appeals for American military intervention. The legacy of colonial ties affected the perceptions of responsibility for provision of assistance. The United Kingdom took charge of relief efforts in Sierra Leone, France in Guinea, and the United States in Liberia, a state founded in the 19th century by former African-American slaves.

After initially receiving criticism for acting too cautiously, President Obama responded by sending over 3,000 military personnel, mostly medics and engineers, to Liberia. It was the largest American intervention ever in a global health crisis. President Obama justified this decision by arguing that the United States had an ethical obligation as a leader of the global community to address the humanitarian crisis in Liberia, as well as a security interest in controlling the epidemic in Africa so that it did not spread to the U.S. and other countries. According to President Obama, only the American military had the resources, hierarchical structure, and discipline to carry out such a large-scale effort.

Objections to the “militarization” of this relief effort came in several forms. Conservative critics argued that militaries are for fighting and winning wars, not providing humanitarian assistance. Others argued the humanitarian effort could morph into security and military engagement. David Ridenhour, president of the National Center for Public Policy Research, worried that American soldiers could be faced with difficult moral dilemmas, such as “having to shoot unarmed, possibly infected Liberian civilians or allow Ebola to spread.” Some critics were concerned that U.S. military intervention jeopardized the principle of neutrality that health relief organizations try to maintain. Historian Andrew Bacevich argued that a military response to a humanitarian crisis, even if successful, would mask and perpetuate gross misallocation of resources toward building military capacity rather than address global health care needs.

Ultimately, the Ebola epidemic was brought under control in Liberia and the rest of Western Africa. The United States military built 11 treatment units and the government expended hundreds of millions of dollars in the relief effort. However, as The New York Times reported, there is limited evidence that these efforts played any significant role. Only 28 Ebola patients were treated in the 11 treatment centers built by the military. The number of new Ebola cases peaked at 635 the week after President Obama announced the military intervention, but dropped to just over 100 by the time the first medical unit was opened. By the time the additional units were operational, Ebola cases had dwindled to fewer than 50.


Dennis Kozlowski: Living Large

Dennis Kozlowski came from modest circumstances. He began his career at Tyco International in 1975 as an auditor, and worked his way up the corporate ladder to become CEO in 1992. Kozlowski gained notoriety as CEO for the rapid growth and success of the company, as well as his extravagant lifestyle. He left the company in 2002 amid controversy surrounding his compensation and personal spending. In 2005, Kozlowski was convicted of crimes in relation to alleged unauthorized bonuses of $81 million, in addition to other large purchases and investments.

As CEO, Kozlowski was lauded for his risk-taking and the immense growth of the company. He launched a series of strategic mergers and acquisitions, rapidly building up the size of Tyco. During his first six years as CEO, he secured 88 deals worth over $15 billion. Strong growth was bolstered by a booming economy, and Tyco’s stock price soared as the company consistently beat Wall Street’s expectations. However, when the economy slowed, the company began to struggle.

Allegedly, Tyco paid for Kozlowski’s $30 million New York apartment, as well as personal gifts and parties, including $1 million of a $2 million birthday party for his wife. After Kozlowski paid a $20 million finder’s fee to a board member without proper approval, and paintings invoiced for Tyco offices ended up in Kozlowski’s apartment (among other irregularities), Kozlowski was criminally charged with looting more than $600 million of assets from Tyco and its shareholders.

While many questioned his lifestyle, others questioned the trial and conviction. Commenting on the case, civil rights lawyer Dan Ackman wrote, “It’s fair to say that Kozlowski…abused many corporate prerogatives… Still, the larceny charges at the heart of the case did not depend on whether the defendants took the money—they did—but whether they were authorized to take it.” Kozlowski asserted his innocence of the charges, stating, “There was no criminal intent here. Nothing was hidden. There were no shredded documents. All the information the prosecutors got was directly off the books and records of the company.”


Apple Suppliers & Labor Practices

With its highly coveted line of consumer electronics, Apple has a cult following among loyal consumers. During the 2014 holiday season, 74.5 million iPhones were sold. Demand like this meant that Apple was in line to make over $52 billion in profits in 2015, the largest annual profit ever generated from a company’s operations. Despite its consistent financial performance year over year, Apple’s robust profit margin hides a more complicated set of business ethics. Like many companies selling products in the U.S., Apple does not manufacture most of its goods domestically. Most of the component sourcing and factory production is done overseas in conditions that critics have argued are dangerous to workers and harmful to the environment.

For example, tin is a major component in Apple’s products and much of it is sourced in Indonesia. Although there are mines that source tin ethically, there are also many that do not. One study found workers—many of them children—working in unsafe conditions, digging tin out by hand in mines prone to landslides that could bury workers alive. About 70% of the tin used in electronic devices such as smartphones and tablets comes from these more dangerous, small-scale mines. An investigation by the BBC revealed how perilous these working conditions can be. In interviews with miners, a 12-year-old working at the bottom of a 70-foot cliff of sand said: “I worry about landslides. The earth slipping from up there to the bottom. It could happen.”

Apple defends its practices by saying it only has so much control over monitoring and regulating its component sources. The company justifies its sourcing practices by saying that it is a complex process, with tens of thousands of miners selling tin, many of them through middle-men. In a statement to the BBC, Apple said “the simplest course of action would be for Apple to unilaterally refuse any tin from Indonesian mines. That would be easy for us to do and would certainly shield us from criticism. But that would also be the lazy and cowardly path, since it would do nothing to improve the situation. We have chosen to stay engaged and attempt to drive changes on the ground.”

In an effort for greater transparency, Apple has released annual reports detailing its work with suppliers and their labor practices. While more recent investigations have shown some improvements in suppliers’ working conditions, Apple continues to face criticism as consumer demand for iPhones and other products continues to grow.


Krogh & the Watergate Scandal

Egil “Bud” Krogh was a young lawyer who worked for the Nixon administration in the late 1960s and early 1970s as deputy assistant to the president. Military analyst Daniel Ellsberg leaked the “Pentagon Papers,” which contained sensitive information regarding the United States’ progress in the Vietnam War. President Nixon himself tasked Krogh with stopping leaks of top-secret information. And Nixon’s Assistant for Domestic Affairs, John Ehrlichman, instructed Krogh to investigate and discredit Ellsberg, telling Krogh that the leak was damaging to national security.

Krogh and another staffer assembled a covert team that became known as the “plumbers” (to stop leaks), which was broadly supervised by Ehrlichman. In September 1971, the plumbers’ first break-in was at the office of Ellsberg’s psychiatrist; they were looking for documents that would discredit Ellsberg based on mental health. Reflecting on the meeting in which the break-in was proposed and approved, Krogh later wrote, “I listened intently. At no time did I or anyone else there question whether the operation was necessary, legal or moral. Convinced that we were responding legitimately to a national security crisis, we focused instead on the operational details: who would do what, when and where.”

The break-in, which was illegal, was also unproductive. Nothing was found to discredit Ellsberg. Importantly, the ties between this break-in and Nixon were much more direct and easy to establish than the ties between Nixon and the Watergate break-in. Krogh later pled guilty to his role in the break-in and was sentenced to two-to-six years in prison. At his sentencing, Krogh explained that national security is “subject to a wide range of definitions, a factor that makes all the more essential a painstaking approach to the definition of national security in any given instance.” Judge Gesell, sentencing Krogh to serve six months in prison and remain on unsupervised probation for another two years, said, “In acknowledging your guilt, you have made no effort, as you very well might have, to place the primary blame on others who initiated and who approved the undertaking. A wholly improper, illegal task was assigned to you by higher authority and you carried it out because of a combination of loyalty and I believe a degree of vanity, thereby compromising your obligations as a lawyer and a public servant.”

Krogh, who cooperated with the Watergate prosecutors and never bargained for leniency, served only four-and-a-half months of his sentence. The Washington State Supreme Court disbarred Krogh in 1975, although he successfully petitioned to be reinstated in 1980 and became partner in the Seattle law firm Krogh & Leonard. Krogh has spent much of the past 45 years supporting legal ethics education and writing and lecturing on the topic of integrity. Writing for The New York Times in 2007, he stated, “I finally realized that what had gone wrong in the Nixon White House was a meltdown in personal integrity. Without it, we failed to understand the constitutional limits on presidential power and comply with statutory law.”


Climate Change & the Paris Deal

In December 2015, representatives from 195 nations gathered in Paris and signed an international agreement to address climate change, which many observers called a breakthrough for several reasons. First, the fact that a deal was struck at all was a major accomplishment, given the failure of previous climate change talks. Second, unlike previous climate change accords that focused exclusively on developed countries, this pact committed both developed and developing countries to reduce greenhouse gas emissions. However, the voluntary targets established by nations in the Paris climate deal fall considerably short of what many scientists deem necessary to achieve the stated goal of the negotiations: limiting the global temperature increase to 2 degrees Celsius. Furthermore, since the established targets are voluntary, they may be lowered or abandoned due to political resistance, short-term economic crises, or simply social fatigue or disinterest.

As philosophy professor Stephen Gardiner aptly explains, the challenge of climate change presents the world with several fundamental ethical dilemmas. It is simultaneously a profoundly global, intergenerational, and philosophical problem. First, from a global perspective, climate change presents the world with a collective action problem: all countries have a collective interest in controlling global carbon emissions. But each individual country also has incentives to over-consume (in this case, to emit as much carbon as necessary) in response to societal demands for economic growth and prosperity.

Second, as an intergenerational problem, the consequences of actions taken by the current generation will have the greatest impact on future generations yet to be born. Thus, the current generation must forgo benefits today in order to protect against possibly catastrophic costs in the future. This tradeoff is particularly difficult for developing countries. They must somehow achieve economic growth in the present to break out of a persistent cycle of poverty, while limiting the amount of greenhouse gases emitted into the atmosphere to protect future generations. The fact that prosperous, developed countries (such as the U.S. and those in Europe) arguably created the current climate problems during their previous industrial economic development in the 19th and 20th centuries complicates the tradeoffs between economic development and preventing further climate change.

Finally, the global and intergenerational nature of climate change points to the underlying philosophical dimensions of the problem. While it is intuitive that the current generation has some ethical responsibility to leave an inhabitable world to future generations, the extent of this obligation is less clear. The same goes for individual countries that have pledged to reduce carbon emissions to help protect environmental health, but then face real economic and social costs when executing those pledges. Developing nations faced with these costs may encounter further challenges, as the impact of climate change will most likely fall disproportionately on the poor, thus also raising issues of fairness and inequality.


German Police Battalion 101

During the Holocaust, more than a third of Nazi Germany’s Jewish victims never boarded deportation trains and did not die in gas chambers. Jewish men, women, and children were murdered near their homes in surrounding fields and forests by German police forces and their local helpers. Historians estimate that these so-called mobile killing units shot almost 2 million people during World War II. After the war, when some of the shooters and their commanders were put on trial, they claimed that they had to follow orders. Decades later, however, historians studying the interrogation files of one of these police battalions made a startling discovery. Not only did many ordinary Germans participate in the mass murder of Jews, they did so voluntarily.

In his book on one group of reserve policemen from the German city Hamburg, Ordinary Men: Reserve Police Battalion 101 and the Final Solution in Poland, historian Christopher Browning shows that while the men were expected to follow orders when it came to killing civilians, they could have refused to do so. In July 1942, before their induction into the mass shooting of civilians in the small Polish town of Józefów, their commander gave battalion members a choice. If any of the men were “not up to the task,” they would be assigned to do “other duties,” such as guarding or transportation. When given the opportunity to opt out, only a very small number of men did. Even though this option remained in the months that followed, the majority of reserve policemen chose to kill—to do the “dirty work” even if just for a short time before being relieved of duty—rather than separate themselves from their unit by refusing to murder civilians.

Most of these ordinary, middle-aged German men became willing, although not enthusiastic, killers. A small minority consistently excused themselves from the task at hand. Those that killed, Browning argues, did so because of “the pressure for conformity—the basic identification of men in uniform with their comrades and the strong urge not to separate themselves from the group by stepping out… [The] act of stepping out…meant leaving one’s comrades and admitting that one was ‘too weak’ or ‘cowardly’.”


Cheney v. U.S. District Court

On June 24, 2004, the United States Supreme Court decided the case of Cheney v. U.S. District Court. Believing that U.S. Vice President Dick Cheney’s handling of an energy task force violated the Federal Advisory Committee Act, and suspecting undue influence in governmental deliberations by the energy industry, two environmental groups—the Sierra Club and Judicial Watch—sued to discover official documents relating to the meetings. Cheney and other government defendants moved to dismiss the lawsuit, but the federal district court in Washington D.C. ordered defendants to produce information about the task force. Defendants appealed, and the Circuit Court also held that they had to turn over the information. Defendants appealed again to the Supreme Court. A majority of the Supreme Court, for largely procedural reasons, held that the Circuit Court should reconsider the appeal in light of new legal guidelines that the Supreme Court set out. Dissenters argued that the lower courts had ruled correctly, and the case should be returned to the District Court where it could go forward. Justice Antonin Scalia voted with the majority, but also said that he favored dismissing the case and ruling for Cheney and the other defendants. Justice Scalia also filed a statement explaining why he was refusing requests that he recuse himself from the case.

Justice Scalia’s participation in the case was controversial. While the case against Cheney was pending, Scalia had taken a widely publicized duck hunting trip with Cheney, a named defendant, and others. Federal law states that “any justice or judge shall disqualify himself in any proceeding in which his impartiality might reasonably be questioned.” Critics of Justice Scalia thought it reasonable to question his impartiality. Stephen Gillers, a New York University law professor and expert on legal ethics, noted, “A judge may have a friendship with a lawyer, and that’s fine. But if the lawyer has a case before the judge, they don’t socialize until it’s over. That shows a proper respect for maintaining the public’s confidence in the integrity of the process.”

Defenders of Justice Scalia argued that these criticisms were politically motivated, coming from people who simply did not want Scalia to vote in the case. They said it is common for justices to be friends with political actors who might be involved in cases coming before the Court. Defending his actions, Scalia stated, “Social contacts with high-level executive officials…have never been thought improper for judges who may have before them cases in which those people are involved… For example, Supreme Court Justices are regularly invited to dine at the White House, whether or not a suit seeking to compel or prevent certain presidential action is pending.”


Healthcare Obligations: Personal vs. Institutional

In a typical year in the United States, the public is urged to get flu shots as a means of protection against influenza. A report published by an influenza expert at the British Columbia Centre for Disease Control found that the 2014-2015 rate of effectiveness for flu shots was 23% in the U.S., and that the shots offered no significant protection in Canada. A related study published by researchers at the National Institutes of Health documented that, although the percentage of seniors who received flu shots rose from 15% to 65% over recent decades, deaths caused by influenza among the elderly continued to climb during this period. These researchers concluded “either the vaccine failed to protect the elderly against mortality… and/or the vaccination efforts did not adequately target the frailest elderly.”

More recent research has sought a method for assessing in advance whether a given flu vaccine would benefit a given patient. A report published in 2016 in the journal Nature Immunology used a blood assay to identify a correlation between a certain pattern of gene expression and the likelihood that a person would experience adverse events after receiving a flu vaccine. If this assay could be made economical and included in the blood tests typically done in annual physicals, it could reduce the number of claims filed with the federal Vaccine Injury Compensation Program. With these reports in mind, consider the following case:

Dr. Jones works in a hospital and she recently became aware of all the above reports. She belongs to the American Medical Association (AMA), which strongly recommends that everyone receive flu shots each year. Moreover, her hospital recently informed her that she herself must take annual flu shots or risk termination of her hospital privileges or employment. Dr. Jones, however, is aware of the AMA Code of Ethics, which states that patients have a right of self-decision regarding their health care, and that this right can only be effectively exercised “if the patient possesses enough information to enable an informed choice.” She feels a moral obligation to inform her senior patients that she has reservations about the efficacy of flu shots for their age group and why.

Since the AMA and the Centers for Disease Control and Prevention are strong proponents of annual flu shots, giving contrary advice to her patients could jeopardize Dr. Jones’s standing with the AMA, in addition to her employment at her hospital. Furthermore, her hospital administrator and other health officials are concerned that if doctors advise patients about the relative ineffectiveness of, and potential injury from, flu vaccines, this could feed public doubts about the efficacy or safety of other vaccines. Such doubts could increase public opposition to new state laws that aim to promote “herd immunity” by mandating certain vaccinations.

While the case of Dr. Jones is based on the actual experiences of a medical doctor, her name and identifying details have been changed. This case study reflects the key ethical dilemmas the doctor faced.


A Million Little Pieces

In 2003, publisher Doubleday released James Frey’s book A Million Little Pieces, marketing it as a memoir about Frey’s struggles with alcohol and drug addiction. In 2005, the book was selected for Oprah’s Book Club, in part for the inspiring and supposedly true story of Frey’s overcoming addiction. The publicity from The Oprah Winfrey Show sparked strong sales for the book, which topped bestseller lists in the following weeks.

On January 8, 2006, investigative website The Smoking Gun published an exposé describing numerous exaggerations and fabrications in Frey’s written account of his life story, creating controversy over the truthfulness of the book as a “memoir.” When Frey first appeared on The Oprah Winfrey Show in 2005, he had emphasized his honesty: “If I was going to write a book that was true, and I was going to write a book that was honest, then I was going to have to write about myself in very negative ways.” Even as he did so, he expanded on falsehoods that appeared in the book.

Frey and his publisher, Nan Talese, were unable to effectively refute The Smoking Gun’s allegations. When Winfrey invited Frey back on her show, she harangued him for lying, saying that she felt “duped” and that Frey had “betrayed millions of readers.” Talese described Winfrey’s rebuke of Frey as “mean and self-serving,” while critics of Frey saw him as opportunistic.

Frey defended the right of authors and memoirists to draw upon their memories, not only upon documented facts: “I wanted the stories in the book to ebb and flow, to have dramatic arcs, to have the tension that all great stories require.” Authors and literary critics have echoed this sentiment, noting that memoirs are not necessarily the same genre as biographies or autobiographies. When asked about this controversy, author Joyce Carol Oates stated, “the tradition of personal memoir has always been highly ‘fictionalized’ — colored with an individual’s own ‘emotional truth’ … This is an ethical issue…with convincing arguments on both sides. In the end, [Winfrey] had to defend her own ethical standards of truth on her television program, which was courageous of her; and [Talese] had to defend her standards as a longtime revered editor, which was courageous of her.”


The CIA Leak

In 2002, the Central Intelligence Agency (CIA) asked Joseph Wilson, a U.S. diplomat and the husband of CIA agent Valerie Plame, to investigate allegations that Saddam Hussein had purchased yellowcake uranium in Niger. Wilson traveled to Niger and found no evidence of this. Nonetheless, during the 2003 State of the Union Address, President George W. Bush stated, “The British government has learned that Saddam Hussein recently sought significant quantities of uranium from Africa.” On July 6, 2003, Wilson rebutted this statement in an editorial for The New York Times. One week later, journalist Robert Novak published an op-ed in The Washington Post criticizing Wilson and identifying Plame as a CIA agent. Another journalist, Matthew Cooper, wrote in Time Magazine that government officials had informed him that Wilson’s wife was employed by the CIA. Plame was a classified covert agent, and her actual employment was not public knowledge. Her cover employer, Brewster Jennings, was thus unmasked as a CIA front company, and its employees worldwide were put at risk.

The CIA asked the Department of Justice to investigate the leak. Bush stated that if a leak had occurred from his administration “and if the person violated the law, the person will be taken care of.” He later said, “If someone committed a crime, they will no longer work in my administration.” A special counsel investigated the possible legal violations, and a grand jury summoned the journalists involved, as well as various members of the Bush administration, with a focus on presidential aide Karl Rove and Scooter Libby, Chief of Staff to Vice President Dick Cheney.

Cooper claimed Rove had told him Plame’s name and employment, while Rove contended he learned her name only from journalists. Evidence suggested Cheney might have informed Libby. Eventually, the source of the leak was revealed to be Richard Armitage, Deputy Secretary of State at the time. Armitage was ultimately not charged because there was no evidence he knew Plame’s employment was covert and thus illegal to disclose.

The only person charged over the leak was Libby. He was indicted on two counts of perjury, two counts of making false statements, and one count of obstruction of justice. These charges all stemmed from testimony he gave during the investigation, not from the initial disclosure of information. He resigned from his position and was later fined and sentenced to thirty months in federal prison. President Bush commuted the prison sentence but left the fine intact. Cheney aggressively sought a full pardon for Libby and was reportedly very upset with Bush for refusing to grant one. Bush publicly stated he respected the jury’s verdict, while Cheney maintained that Libby had done nothing inappropriate.

Wilson and Plame eventually filed a civil lawsuit against Rove, Libby, Cheney, and Armitage for their roles in disclosing her identity. The lawsuit was dismissed, and the U.S. Supreme Court declined to hear an appeal of the dismissal.

In 2018, President Donald Trump pardoned Libby, stating, “I don’t know Mr. Libby, but for years I have heard that he has been treated unfairly. Hopefully, this full pardon will help rectify a very sad portion of his life.” Cheney responded to the pardon, stating, “He is innocent, and he and his family have suffered for years because of his wrongful conviction. I am grateful today that President Trump righted this wrong.”

Plame saw the pardon as a dangerous precedent that could further endanger the safety of covert operatives and the operations of the CIA. In an op-ed, Plame wrote, “The pardon power of the president cannot be challenged constitutionally; it should be wielded with enormous diligence and prudence. In granting his pardon to Scooter Libby, Donald Trump seems to have avoided the careful process of review within the Justice Department that has been established to consider pardons,” continuing, “Our national security is at serious risk when there is daylight and distrust between the president and the CIA.”


High Stakes Testing

Since the enactment of the No Child Left Behind Act of 2001 (NCLB), some parents, teachers, and administrators have taken their own stand against something that they believe is harmful to public education and American children: high stakes testing. Under NCLB, every child in the U.S. must achieve proficiency in reading and math. While each state can determine its own level of proficiency, a child’s ability to advance to the next grade level, and even to graduate from high school, hinges on passing a standardized test. Across the U.S., children in minority communities have been more likely to be denied a diploma due to low test scores on mandated exams.

Assessment has many benefits. Advocates of large-scale assessment claim that it is an objective and fair measure of student achievement, and that results show how students, or groups of students, measure up against one another and against broader standards. Ideally, all children throughout the country would receive an equal education, and testing can help educators target where instructional improvements are necessary. Sonja Brookins Santelises, Vice President of K-12 Policy and Practice for the Education Trust, acknowledges that there is too much rote test preparation, but argues that we must work together to reduce the achievement gap among student populations. Michelle Rhee, founder of the nonprofit organization StudentsFirst and former Chancellor of D.C. Public Schools, also sees standardized testing as a means to reduce this gap. She states, “It’s not inconceivable for a student to be receiving all A’s and B’s on her report card but still be stuck far behind her peers in other districts and states across the country. And without standardized testing, that child’s parents, teachers and principal would have no idea.”

Opponents, however, firmly believe that high stakes testing is problematic and even ruinous to our educational system. They note that there is no research corroborating the effectiveness of standardized testing, a multi-billion dollar industry controlled by three large U.S. corporations. Teachers complain that they are forced to “teach to the test,” leaving little or no time for subjects that are not tested, such as art, social studies, and science. Parent and former teacher Edy Chamness founded a Facebook group in 2011 to rally parents in her community to protest school accountability and standardized testing requirements; she also kept her son out of school on testing days. Other parents felt similarly: Maeve Siano of Celina, Texas, believed that the preparation and stress associated with testing were more likely to harm her son than to help him. Celina Superintendent Donny O’Dell stated, “Our country was basically founded on rebellion, so to speak. So I don’t hold that against any of these parents, but we as educators have to do what we have to do…and we need some form of accountability.”


Christina Fallin: “Appropriate Culturation?”

In March 2014, twenty-seven-year-old Christina Fallin, daughter of Oklahoma Governor Mary Fallin, found herself at the center of controversy when she posted an image of herself wearing a red Plains headdress on Facebook and Instagram with the tag “Appropriate Culturation.” Fallin posed for the photo as a promotional piece for her band, Pink Pony. Critics accused Fallin of appropriating Native American cultures, sparking uproar on social media and leading to protests at the band’s shows.

In response, Fallin and Pink Pony removed the photo and released a statement on their Facebook page explaining their aesthetic appreciation for Native American culture. Fallin told the Indian Country Today Media Network, “I think Native American culture is the most beautiful thing I’ve ever seen, so I was naturally drawn to it.” Musician Wayne Coyne of The Flaming Lips became involved in the issue when he fired bandmate Kliph Scurlock for criticizing Fallin online. To show his support for Fallin, Coyne posted Instagram photos of several friends and a dog wearing headdresses.

Some argue that Fallin’s photo could be an example of artistic appropriation. Throughout history, artists have borrowed objects and images from everyday life, as well as from other cultures, in order to re-contextualize them in a new manner. On the other hand, some argue that non-Native Americans have no right to don a headdress at all, and that taking a sacred or meaningful object out of context is problematic even when touted as “art.” Summer Morgan, a member of the Kiowa tribe in Oklahoma, believes that Fallin may have had good intentions, but that there are better ways to express appreciation of Native American cultures. Morgan holds that headdresses are not fashion accessories: following Kiowa tradition, only men can own war bonnets, and each feather represents a war deed. Female relatives may be given the right to wear a male relative’s war bonnet, but only after they understand what is expected of them when they wear it, how to treat it properly, and when it is acceptable to wear it.


Freedom vs. Duty in Clinical Social Work

Mental health clinicians are taught to introspect about the degree to which their own background, culture, values, and beliefs may affect their reactions to their clients, and to strive to maintain objectivity in the process of assessment, diagnosis, and treatment. Clinical social workers are the largest professional group providing mental health services in the United States, working in urban and rural settings, both outpatient and inpatient. Social work is distinguished from clinical psychology, psychiatry, and the other professions that provide therapy by the emphasis it places on social justice, cultural competence, and respect for diversity. According to the National Association of Social Workers Code of Ethics, the profession requires its members to “act to prevent and eliminate…discrimination against any person, group, or class on the basis of race, ethnicity, national origin, color, sex, sexual orientation, age, marital status, political belief, religion, or mental or physical disability.”

An ethical dilemma may arise when the religious or moral beliefs of the social worker interfere with the duty of all health care professionals to provide optimal service to clients and to “do no harm.” This issue made national headlines in a related context when Kim Davis, the county clerk of Rowan County, Kentucky, was jailed after defying a federal court order to issue marriage licenses to gay couples. Her action was based on her contention that to do so would violate her religious beliefs. In his ruling, Judge David L. Bunning of the United States District Court stated, “If you give people the opportunity to choose which orders they follow, that’s what potentially causes problems.” In defense of Davis, Ryan Anderson of the Heritage Foundation wrote, “Ms. Davis felt she had to follow her conscience… That, after all, is what religious freedom and religious accommodations are all about: creating the space for citizens to fulfill their duties, as they understand them, to God—regardless of what the rest of us think.”

A similar conflict between religious faith and the requirements of one’s job or one’s profession may be seen in social work practice in the following scenario:

A clinical social worker has been treating a 25-year-old man for depression and anxiety. In the fourth session, the client reveals that he is gay, and that he has not “come out” to his family. He states that he has been involved in a committed, monogamous relationship with another man, and is contemplating marriage. He would like to inform his parents of this good news, but is fearful that they may angrily reject him. He is seeking counseling around this issue. The social worker belongs to a faith tradition that believes that homosexuality is a sin, and whose leaders have been prominent in opposing same-sex marriage. The social worker, who had up to this point believed that treatment was going well, is concerned that his own religiously based objections to homosexuality will interfere with his ability to provide unbiased mental health treatment services. The social worker contemplates informing the client that he will have to transfer him to another therapist.


Flying the Confederate Flag

On July 9, 2015, Governor Nikki Haley signed a bill requiring the Confederate flag to be removed from the South Carolina State House grounds. The flag and the pole on which it flew were both removed the following day. Leading up to the removal was a heated debate over whether the Confederate flag should be taken down. Similar discussions occurred across the United States in places where Confederate flags or other Confederate symbols were on display, ranging from government properties and university campuses to NASCAR venues and popular television series.

Prior to the flag’s official removal from the front of the South Carolina State House, police arrested activist Brittany Newsome for climbing the flagpole and removing the flag herself. Newsome explained her act of defiance, stating that she acted “because it was the right thing to do and it was time for somebody to step up. Do the right thing. We have to bury hate; it’s been too long.” South Carolina Representative Jenny Anderson Horne, a descendant of Confederate President Jefferson Davis, argued that the Confederate flag should no longer fly in front of the State House. She chastised her colleagues in an emotional speech, stating, “I cannot believe that we do not have the heart in this body to do something meaningful—such as take a symbol of hate off these grounds on Friday.”

On the other hand, Confederate sympathizers contend that the flag is a symbol of historical pride, not of hatred. They claim that efforts to remove the flag were a misplaced reaction to photos of Dylann Roof posing with a Confederate flag. Roof had recently been charged with the racially motivated killing of nine black people in a Charleston church. South Carolina State Senator Lee Bright noted that symbols have been misused throughout history; he said, for example, that he believed the Ku Klux Klan had abused the symbol of the cross, yet there has been no push to remove all crosses. Similarly, Kenneth Thrasher, lieutenant commander of the Sons of Confederate Veterans, urged decision makers not to act in haste because, “The flag didn’t kill anybody. It was a deranged young man who did.”


Retracting Research: The Case of Chandok v. Klessig

In 2003, a research team from the Boyce Thompson Institute (BTI) for Plant Research, a prominent laboratory in Ithaca, New York, published an article in the prestigious academic journal Cell. The paper was considered a breakthrough because it answered a major question in the field of plant cell biology. Its first author was postdoctoral researcher Meena Chandok, who worked under her supervisor Daniel Klessig, then president of BTI.

After Chandok left BTI for another job, other researchers in the laboratory were unable to reproduce the results published in Cell, even when following exactly the methods described in the article. Klessig, suspecting possible scientific misconduct, asked Chandok to return to the laboratory to redo her experiments and confirm the authenticity of her results, but she declined. An institutional investigation concluded that there “was no conclusive evidence that Dr. Chandok achieved the results reported,” but also that there was “no conclusive evidence” of misconduct or that Chandok had fabricated the results. Klessig and the other co-authors retracted the article without Chandok’s agreement. Chandok subsequently sued Klessig for defamation, claiming the retraction had caused significant damage to her career and reputation within the scientific community.

Over several years in court, the case drew attention to a number of issues in scientific research and publishing. John Travis, an editor at Science magazine, wrote that the outcome was consistent with “the National Institutes of Health’s grant policy that researchers should come forward with concerns about possible misconduct.” John Dahlberg, director of the Office of Research Integrity’s Division of Investigative Oversight, believed the case could encourage those who fear being sued for defamation to come forward. Science writer Eugenie Reich described Klessig as a “whistle-blower,” while philosopher Janet Stemwedel raised questions about the collaborative responsibility of Klessig and his coauthors for quality control of the research, asking, “If credit is shared, why isn’t blame?”

In 2011, the Court of Appeals for the Second Circuit in New York dismissed the case. It ruled that Klessig’s statements were legally protected because they concerned “matters as to which the speaker [had] a legal or moral obligation” (here, notifying the journal that his laboratory could not replicate the results it had published) and were made between “communicants who [shared] a common interest.” The court found no proof of malice toward Chandok and held that the investigation and the unanswered requests that Chandok replicate her work left the question of scientific misconduct open.


Teaching Blackface: A Lesson on Stereotypes

In 2014, Alan Barron, a white middle school history teacher who had taught for 36 years in Monroe, Michigan, was placed on administrative leave a few weeks before his retirement because his administration viewed one of his history lessons as racist. While teaching about racial segregation laws during the Jim Crow era, Barron played a video showing a white entertainer in blackface. During the nineteenth and early twentieth centuries, white actors commonly painted their faces with makeup to depict black individuals. Barron explained that the purpose of the video was to show how stereotypes of African-Americans were portrayed at one point in American history. During the lesson, an assistant principal who was observing the classroom demanded that Barron stop the video because she “concluded that Barron’s lesson about how entertainers used to be racist was itself racist.” Barron was subsequently suspended.

Many parents spoke out against Barron’s suspension. Adrienne Aaron, whose African-American daughter was in Barron’s eighth grade history class at the time, said that her daughter was not offended by the lesson and thought that the subject needed to be discussed. Aaron stated, “[My daughter] was more offended that they stopped the video…History is history. We need to educate our kids to see how far we’ve come in America. How is that racism?”

After two weeks on leave, the district allowed Barron to return to his classroom. The superintendent stated, “The teacher in question was placed on paid leave to give the district time to fully consider what occurred in this classroom. As a result of incorrect information, a highly respected and loved teacher, and one who has done much for his students and the community, has had to endure a public airing of what should have ended through a district discussion.” Barron was set to retire soon after being reinstated.


Negotiating Bankruptcy

John Gellene, a bankruptcy lawyer at the law firm Milbank Tweed, worked directly under Wall Street attorney Larry Lederman. In 1994, Lederman asked Gellene to represent mining equipment company Bucyrus-Erie (BE) in a reorganization bankruptcy that became increasingly complicated.

In an initial attempt to ward off bankruptcy, BE had, pursuant to the legal advice of Milbank Tweed and the financial advice of Goldman Sachs, accepted a $35 million infusion of cash from an investment fund called South Street. In exchange, BE gave South Street a lien on all of the company’s manufacturing equipment, putting it ahead of other BE creditors, including Jackson National Life (JNL). JNL was BE’s largest single creditor, but it was unsecured: BE had posted no collateral in return for JNL’s loan, so JNL stood in line in bankruptcy court behind all of BE’s creditors that had demanded collateral. South Street was controlled by Mikael Salovaara, a former Goldman Sachs banker who had previously provided financial advice to BE and who was himself advised by Lederman.

When Gellene filed a Chapter 11 bankruptcy petition on behalf of BE, he was required to ask the court to appoint him and Milbank Tweed as BE’s counsel for purposes of the proceedings. At that time, he filed documents under oath that were supposed to disclose any potential conflicts of interest that Milbank Tweed had in the proceedings. For reasons unknown, Gellene did not disclose to the bankruptcy judge (who would appoint counsel) that Milbank Tweed was also representing both South Street and Salovaara in various matters.

Legal scholars and attorneys reflecting on this case years later have speculated as to why Gellene did not disclose what might seem to be obvious connections that could be potential conflicts of interest for Milbank Tweed. Lawyer Steve Sather suggests that the lack of disclosure may have been inadvertent, or that Gellene did not see the connections as inherent conflicts, among other possible reasons.

Regardless, Gellene successfully guided BE through the reorganization process. The failure to disclose was not discovered until years later, by JNL, which then sued Milbank Tweed. Criminal charges were also filed against Gellene: three felony counts of making false statements under oath in connection with Milbank Tweed’s ability to serve as bankruptcy counsel. Gellene was convicted and sentenced to 15 months in prison.
