1. For both ethical and policy reasons, the law of products liability has long imposed accountability in the form of civil liability upon the sellers (e.g., manufacturers, wholesalers, retailers) of products that cause injury to consumers and others. The key drivers of this approach are the desire to encourage these sellers to design, manufacture, and sell products that are safe for use and to compensate parties injured when they are not. Some legal theories apply only when sellers are careless. Others impose liability without regard to fault on the seller’s part. Do you think that the standards for accountability should be different for sellers of an autonomous vehicle than for sellers of a traditional vehicle? Why or why not?
2. Criminal liability is rarely imposed upon sellers of products. These sellers are typically companies that have “no soul to damn, no body to kick,” and therefore can be punished only via monetary fines, which often does not seem worth the trouble. Nonetheless, occasionally such cases are brought and won. Again, do you think that the standards for criminal accountability should be different for sellers of an autonomous vehicle than for sellers of a traditional vehicle? Why or why not?
3. Anthropologist Webb Keane suggests that to be accountable, an AI “machine must, literally, be answerable, that is, able to give a response if we were to ask ‘why?’ [it made a given decision].” (p. 139) Keane also quotes computer scientist Stuart Russell, who “says the way to make AI safe is to have machines ‘check in with humans—rather like a butler—on any decision.’” (p. 109) Do either of these suggestions sound like viable approaches to maintaining AI accountability? Why or why not?
4. Shadbolt and Hampson write:
“There simply is a fundamental accountability difference between a human and a machine, arising from all the other differences. Alexa or Siri can have a face painted on it, be put in smart clothes, and be set up to recognize you as you walk up to the bar, buy you a drink and ask about your day at the office, yet this will not affect you in the same way as a human performing exactly the same set of acts. A full, thick description of the difference would include dozens of dimensions. Prime among them is that even where we care a lot about a decision an AI makes about us, we don’t care at all what private opinion the machine may have about us, not in the way we are affected by human opinions. Nor do we feel, reciprocally, that we should be diplomatic in how we treat the machine.” (p. 114)
Does the fact that humans react differently to AI tools that make certain decisions than they do to humans who make similar decisions (about how to maneuver a car or which prisoners to parole) justify differential accountability/liability judgments when injuries occur?
5. Looking at the issue of accountability from a different angle, Vallor suggests that “opaque AI decision systems are highly attractive tools for those in power; they offer a virtually bulletproof accountability shield.” (p. 119) In other words, much like the “dog ate my homework” excuse, a political actor can say: “I didn’t make that unpopular or disastrous decision, the algorithm did.” This is especially true if the “model was trained using deep learning and other opaque techniques; even the software engineers and data scientists who created it will not know exactly how or why it works in a given case.” (p. 127) Do you agree with Vallor’s point as a factual matter? As a policy concern? Explain.
6. In writing about autonomous weapons systems (AWS), guns and the like that can make their own decisions about when and upon whom to fire, Eggert writes: “Free from human limitations [AWS] promise the prospect of a world without abuses like [human soldiers have often committed]. They do not succumb to anger or fear or vengefulness. And they can process vast amounts of information at superhuman speed. But, also unlike humans, they have no conscience to wrestle with.” (p. 7) Eggert then asks: “How should we weigh the promise of AWS to reduce harm to innocent people against the value of accountability?” (p. 13) In terms of accountability, how do you feel about the use of AWS without humans “in the loop”? Explain.
7. Several observers (including Harari, Wynn-Williams, and Lawrence) have written extensively about the damage that Facebook’s engagement-maximizing algorithms inflicted in Myanmar in 2016-2017 by inciting anti-Rohingya violence that led to genocide. As Harari noted:
“In 2016-2017, Facebook’s algorithms were making active and fateful decisions by themselves….The algorithms could have chosen to recommend sermons on compassion or cooking classes, but they decided to spread hate-filled rumors.” (p. 197-198)
Who is accountable for the genocide? Facebook’s algorithm for spreading inciting information? The company? The engineers who developed the algorithm to maximize engagement without regard to potential dangers or costs? Those who turned the inciting information into violence? Is this type of situation an argument for always having humans in the loop? Is that even feasible? Explain.
8. As AI agents become more active, they are likely not only to produce many more good results but also to cause more damage. A general requirement for criminal liability is criminal intent (mens rea). Floridi and Sanders suggest that AI agents “may be causally accountable for a criminal act, but only a human agent can be morally responsible for it.” Do you agree with their distinction on this key issue of accountability? Why or why not?
9. When we think about AI accountability, do we need to be keeping users in mind as well? In what ways do we need to be considering their responsibility? What considerations do you personally try to have top of mind when you interact with AI?