
A.I. & Agency: Because You Liked…

AI Recommender Systems are shaping our viewing habits by curating what we see, raising deep questions about autonomy, agency, and control.

Bob was a surly prisoner. You might be surly yourself had you been incarcerated in the state penitentiary for 37 years for a crime you did not commit. Bob was frequently punished with stints in solitary confinement and denied television and even newspaper privileges. And certainly no internet. As a result, when he was finally exonerated and released in 2025, he was a bit naive in many ways. Now financially secure thanks to the payment he received from his state’s wrongful-incarceration compensation fund, he spends most days sitting in front of a screen with a cold beer, trying to catch up on movies and politics.

Given his naiveté, when Bob goes to Google, Netflix, YouTube, TikTok, or many other online sites or services, he is unaware that the offerings that appear on the screen are usually provided by Recommender Systems (RSs). RSs, as defined by Bonicalzi et al., are “algorithms based on artificial intelligence (AI)—mostly on machine learning techniques—that support user-tailored decision-making by providing suggestions out of a wider catalog, i.e., about news, videos, advertisements, or exercises, based on the users’ or like-minded users’ past choices or personal information.”
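To make that definition concrete, here is a minimal sketch of one common approach, user-based collaborative filtering: find users whose past ratings resemble yours, and recommend what they liked. The names, ratings, and similarity rule below are hypothetical illustrations, not any platform’s actual method; production systems use far larger models and many more signals.

```python
# Toy user-based collaborative filter: recommend items that users with
# similar rating histories liked. All data here is hypothetical.

ratings = {  # user -> {movie: rating on a 1-5 scale}
    "alice": {"Heat": 5, "Casablanca": 2, "Alien": 4},
    "carol": {"Heat": 5, "Casablanca": 1, "Blade Runner": 5},
    "dave":  {"Casablanca": 5, "Notting Hill": 4},
}

def similarity(a, b):
    """Count-weighted agreement on co-rated movies (deliberately crude)."""
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    total_gap = sum(abs(ratings[a][m] - ratings[b][m]) for m in shared)
    return len(shared) / (1.0 + total_gap)

def recommend(user, k=2):
    """Suggest unseen movies, weighted by how similar each neighbor is."""
    scores = {}
    for other in ratings:
        if other == user:
            continue
        sim = similarity(user, other)
        for movie, rating in ratings[other].items():
            if movie not in ratings[user]:
                scores[movie] = scores.get(movie, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))  # ['Blade Runner', 'Notting Hill']
```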

Bob doesn’t have a personal user history or a cohort of like-minded users, having spent nearly all the internet revolution incarcerated. Frankly, he is largely a blank slate. Nonetheless, he is offered a constant stream of recommendations on whatever online platform he is engaging with. So how are the RSs determining what Bob should watch next? What are their goals for his online experience? What are the RSs’ overall aims and intentions? Are those easily accessible and clear to Bob? How much freedom does he truly have to shape his online experience? And, given that the RSs’ feeds serve up an endless stream of options for Bob’s selection, how much agency does he really have in forming his views and opinions?
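Bob’s predicament is what designers call the cold-start problem: the system has no history on which to personalize. A common fallback, sketched below with hypothetical titles, counts, and a made-up boost rule, is to serve whatever is globally popular and then fold in each choice the new user makes, which is why Bob’s very first clicks can steer everything he sees afterward.

```python
# Cold-start sketch: with no history for a new user, fall back to global
# popularity, then nudge the ranking toward genres the user has sampled.
# Titles, play counts, and the 4x boost are illustrative assumptions.

catalog = {  # title -> (genre, global play count)
    "cat_video":    ("comedy",   12_000),
    "news_clip":    ("politics",  9_400),
    "action_movie": ("action",    7_100),
    "documentary":  ("politics",  2_300),
}

def feed(watched_genres, n=3):
    """Popularity baseline; already-sampled genres get a toy 4x boost."""
    def score(title):
        genre, plays = catalog[title]
        return plays * (4.0 if genre in watched_genres else 1.0)
    return sorted(catalog, key=score, reverse=True)[:n]

print(feed(set()))         # day one: pure popularity
print(feed({"politics"}))  # after one news click, politics floods the feed
```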

In this case study and the discussion questions that follow, the focus is not only on agency but also on another “A” word: autonomy. These two concepts are closely related, and both are impacted by RSs. One source[i] defines “autonomy” as “the inherent right and capacity to make choices impacting one’s life, free from external interference,” and “agency” as “the ability to act on those choices, set goals, and influence outcomes.” RSs impact both autonomy and agency, usually in similar fashion. People who are unknowingly manipulated by RSs suffer damage both to their ability (agency) and to their capacity (autonomy) to make fully informed choices regarding their beliefs and their life’s path.


[i] https://www.mentalhealthwellnessmhw.com/blog/agency-and-autonomy

Discussion Questions

1. Do RS designers owe a duty to act ethically when designing their algorithms? Why or why not?

2. According to Bonicalzi and colleagues, RSs have been associated with ethical concerns in the following areas, among others:

• Tracking and monitoring of users’ data
• Diffusion of inappropriate content
• Breach of privacy and data protection laws, extending to the selling of personal information to third parties
• Opacity in how recommendations are generated due to the complexity or even secrecy of the underlying mathematical models, with the connected problem of accountability
• Lack of fairness and biases in how data is sampled and used to shape recommendations

Do these make sense to you? Can you think of examples fitting these categories? Can you think of other areas of concern that RS designers should keep in mind? Have you had any personal experiences with RSs that concerned you?

3. Another area of special concern is personal autonomy. Professors Sahebi and Formosa believe that “autonomy is broadly a matter of developing autonomy competencies, having authentic ends and control over key aspects of your life, and not being manipulated, coerced, and controlled by others.” According to these authors, “[a]utonomy competences are those skills, capacities, and powers that agents need to be able to act autonomously, such as the ability to reason and critically reflect on their values, imagine different alternatives, develop a conception of the good, and regard themselves as self-directing agents worthy of respect.” Do these formulations sound reasonable to you? Why or why not? Can you think of a better definition of “autonomy”?

4. Luciano Floridi and other experts, working as part of the “AI4People” project to envision how to make an AI-driven world human-friendly, concluded that autonomy should be a basic principle of AI ethics, just as it is of bioethics, and that enhancing human agency should be a key focus of AI development. Do you agree? Why or why not?

5. Might the sort of beliefs Bob will choose to hold and the type of person he will be in a year depend on whether the RSs send him to political videos from Fox News rather than from MSNBC (or vice versa)? Or to movies like Nickel Boys and The Zone of Interest (#1 and #2 on IndieWire’s 2025 list of the best movies of the 2020s) or Skinamarink (Flicknerd’s worst horror movie of the decade)? Explain your reasoning.

6. Should RS designers’ primary (or perhaps exclusive) goal be to achieve accuracy? That is, should they simply recommend, as accurately as possible, what Bob or like-minded users would most like to watch? Rodriguez and Watkins, for example, believe that accuracy should not be the sole criterion and suggest that Bob should be protected from inappropriate or harmful content. What do you think the RS designer’s primary goal should be?
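One way to see the trade-off Rodriguez and Watkins raise is as a re-ranking problem: an accuracy-only system orders candidates purely by predicted enjoyment, while a value-aware system also penalizes estimated harm. The titles, scores, and weights below are hypothetical; the sketch only shows how a single design parameter encodes the designer’s answer to this question.

```python
# Sketch of a value-aware re-ranker. With harm_weight=0 the feed is
# "accuracy-only"; raising it trades predicted enjoyment against an
# estimated harm score. All numbers are hypothetical illustrations.

candidates = [
    # (title, predicted enjoyment 0-1, estimated harm 0-1)
    ("conspiracy_channel", 0.92, 0.80),
    ("outrage_rant",       0.88, 0.60),
    ("classic_film",       0.75, 0.05),
    ("history_doc",        0.70, 0.02),
]

def rank(items, harm_weight=0.0):
    """Order by enjoyment minus a weighted harm penalty."""
    return sorted(items, key=lambda it: it[1] - harm_weight * it[2],
                  reverse=True)

accuracy_only = [title for title, *_ in rank(candidates)]
value_aware   = [title for title, *_ in rank(candidates, harm_weight=0.5)]
print(accuracy_only)  # conspiracy_channel tops the accuracy-only feed
print(value_aware)    # classic_film and history_doc rise to the top
```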

7. Will Bob be able to exercise autonomy if an RS takes him to a source which is spouting conspiracy theories that, for example, maintain that people of Bob’s race are inherently inferior? Professors Sahebi and Formosa believe that people “may lack autonomy if their practical identity is the result of false beliefs.” What do you think?

8. Sahebi and Formosa observe:

“[I]t is worth noting that the interest of social media companies is not to ensure that its users are provided with diverse viewpoints, political ideologies, or news. Their interest lies in ensuring they can maintain the attention of users to generate revenues through advertising and other means by showing users what they want to see or will cause them outrage. This is how harmful echo chambers are formed.”

Do you agree? What implications does your answer have for autonomy?
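The mechanism Sahebi and Formosa describe can be made vivid with a toy feedback loop: if the feed always serves whichever category has drawn the most clicks, and the user mostly clicks whatever is shown, early randomness hardens into a single-category feed. The categories and the click model below are hypothetical, not a model of any real platform.

```python
# Toy feedback loop: an engagement-maximizing feed keeps serving the
# most-clicked category, so the user's exposure narrows over time.

import collections
import random

random.seed(0)
categories = ["politics_A", "politics_B", "sports", "science"]
clicks = collections.Counter({c: 1 for c in categories})  # uniform prior

history = []
for step in range(50):
    # Engagement-first policy: always show the most-clicked category.
    shown = clicks.most_common(1)[0][0]
    history.append(shown)
    # Toy user model: clicks what is shown 90% of the time, so the
    # loop reinforces whichever category happened to lead early on.
    if random.random() < 0.9:
        clicks[shown] += 1

print(collections.Counter(history))  # one category dominates the feed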

9. Tang and Winoto suggest the following scenario:

A household of four including parents Alex and Mary, 15-year-old Chloe, and 12-year-old John have subscribed to an online movie rental service, for example Netflix, for over 2 years. Occasionally, the family receives recommendations from its highly successful and profitable personalized recommendation service (known as Cinematch in Netflix) based on the family’s rental history. Alex and Mary enjoy war and action movies (giving high ratings to movies such as Schindler’s List and the Bourne series); therefore two movies, The Kite Runner and Mission Impossible 4: Ghost Protocol, are among the recommended items. However, both these movies should not be recommended without warnings to this account, as they are not appropriate for the two children John and Chloe in this household. From the system’s perspective, both movies will be favored by this user (a collective account): they are algorithmically appropriate but not ethically appropriate (both The Kite Runner and MI4 are listed in the Internet Movie Database (IMDB) as ‘PG-13’; The Kite Runner contains a child rape which is especially inappropriate for young children). So, should the recommender system (RS) make the suggestions or not?

How do you answer the question Tang and Winoto pose? Are you more, or less, concerned about children’s autonomy compared to that of adults? Explain.
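The guard Tang and Winoto imply is straightforward to sketch: before surfacing a title to a shared account, compare its age rating with the youngest member of the household, then either attach a warning or suppress the suggestion. The age thresholds and the warn-versus-suppress policy below are illustrative assumptions, not anyone’s actual implementation.

```python
# Sketch of a household-aware recommendation guard. The rating-to-age
# mapping and the household ages are illustrative assumptions.

RATING_MIN_AGE = {"G": 0, "PG": 8, "PG-13": 13, "R": 17}

household_ages = [45, 43, 15, 12]  # Alex, Mary, Chloe, John

def vet(title, rating, ages, warn=True):
    """Return (recommend?, message) for a shared family account."""
    youngest = min(ages)
    if youngest >= RATING_MIN_AGE[rating]:
        return True, f"Recommend {title}."
    if warn:
        return True, f"Recommend {title} with a {rating} content warning."
    return False, f"Suppress {title} for this account."

print(vet("The Kite Runner", "PG-13", household_ages))
print(vet("MI4: Ghost Protocol", "PG-13", household_ages, warn=False))
```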


Bibliography

Sofia Bonicalzi et al., “Artificial Intelligence and Autonomy: On the Ethical Dimension of Recommender Systems,” Topoi 42: 819-832 (2023).

Marietjie Botes, “Autonomy and the Social Dilemma of Online Manipulative Behavior,” AI and Ethics 3: 315-323 (2023).

Christopher Burr et al., “An Analysis of the Interaction Between Intelligent Software Agents and Human Users,” Minds and Machines 28: 735-774 (2018).

Guillaume Chaslot, “How Algorithms Can Learn to Discredit ‘the Media’” (2018), at https://guillaumechaslot.medium.com/how-algorithms-can-learn-to-discredit-the-media-d1360157c4fa.

Matthew Hutson, “Can AI’s Recommendations Be Less Insidious?” (Oct. 2, 2022), at https://spectrum.ieee.org/recommendation-engine-insidious.

Luciano Floridi et al., “AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations,” Minds and Machines 28: 689-707 (2018).

Luciano Floridi, The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities (2023).

Gerald Kembellec et al., eds., Recommender Systems (2014).

Michael Klenk & Jeff Hancock, “Autonomy and Online Manipulation,” Internet Policy Review 1: 1-11 (2019), at https://philpapers.org/rec/KLEAAO-3.

Silvia Milano, “Recommended!” in AI Morality (David Edmonds, ed., 2024).

Francesco Ricci et al., eds., Recommender Systems Handbook (3d ed. 2022).

Marko Rodriguez & Jennifer Watkins, “Faith in the Algorithm, Part 2: Computational Eudaemonics,” in Knowledge-Based and Intelligent Information and Engineering Systems (Juan D. Velasquez et al., eds., 2009).

Siavosh Sahebi & Paul Formosa, “Social Media and Its Negative Impacts on Autonomy,” Philosophy & Technology 35: 70 (2022).

Andreas Spahn, “And Lead Us (Not) into Persuasion…? Persuasive Technology and the Ethics of Communication,” Science & Engineering Ethics 18: 633-650 (2012).

Christopher Summerfield, These Strange New Minds (2025).

Daniel Susser et al., “Technology, Autonomy, and Manipulation,” Internet Policy Review 8(2) (2019).

Tiffany Tang & Pinata Winoto, “I Should Not Recommend It to You Even If You Will Like It: The Ethics of Recommender Systems,” New Review of Hypermedia and Multimedia 22(1-2): 111-138 (2016).
