
Moral Zombies: Why Algorithms Are Not Moral Agents

March 11, 2022

🔬 Research summary by Nick Barrow, a current MA student in the Philosophy of AI at the University of York with a particular interest in the ethics of human-robot interaction.

[Original paper by Carissa Véliz]


Overview: Although algorithms impact the world in morally relevant ways, we do not, intuitively, hold them accountable for the consequences they cause. In this paper, Carissa Véliz offers an explanation of why this is: we do not treat algorithms as moral agents because they are not sentient.


Introduction

When Google’s VisionAI was shown to be racist, it was not the algorithm itself that came under fire and was held liable; although the algorithm’s racist tendencies sparked moral outrage, this outrage was not directed toward the algorithm. It was, instead, directed at those who designed and implemented it. The upshot is that, seemingly, we do not think algorithms have the capacity to make their own moral judgements. Consequently, we do not hold them liable for their actions. We do not treat them as moral agents.

In this paper, Carissa Véliz argues this is because algorithms cannot have subjective experiences of pleasure and pain: they lack sentience.

To illustrate this, Véliz argues that moral agency requires an agent to be both autonomous and morally responsible. To satisfy these conditions, she further argues, an agent must have a particular moral understanding that can only be derived through experiential knowledge of pleasure and pain. Consequently, algorithms cannot be moral agents: because they cannot experience pleasure and pain, they are neither autonomous nor morally responsible. Thus, Véliz concludes that sentience is necessary for moral agency.

Algorithms as moral zombies

Véliz begins by likening algorithms to moral zombies: agents that act indistinguishably from moral agents but do not feel any moral emotion. Moral zombies can do good and evil. However, they would not celebrate saving a life, nor would they regret taking one. There would be nothing it is like to be a moral zombie, just as there is nothing it is like to be an algorithm.

Véliz sets out to show that if it is incoherent to label a moral zombie a moral agent, it is because moral zombies lack sentience. In §3, Véliz argues that conceptions of moral agency typically require both autonomy and moral responsibility. The rest of the paper is devoted to showing that algorithms can satisfy neither of these conditions, and that this is because they are not sentient.

Algorithms cannot be autonomous

Véliz argues that for an agent to be considered autonomous, it must have both the capacity to self-govern and the capacity to respond to reasons.

An agent that responds to reasons recognises what the right action is in any given situation and acts accordingly. Self-governance requires that the reasons an agent acts on reflect its own motivations and values.

An autonomous agent is, therefore, one that can choose its own values and act in accordance with the reasons that promote these values. 

For Véliz, to act according to reasons, an agent must have the relevant desires and motivations to do so. However, algorithms have no desires of their own; they merely do what they are instructed to do. Nor can algorithms acquire desires, because they cannot empathise: an algorithm cannot desire to help someone out of an understanding of their situation, as it has no experience of its own to draw on. For example, a moral zombie that has not felt, and cannot feel, pain would not be responding to reasons when it stops pressing on someone’s foot after being asked to. It would merely be following an instruction: algorithms cannot be persuaded by reasons to act.

Algorithms are also unable to self-govern. They cannot morally assess the objectives they have been assigned: they lack the capacity to do so, and even if they did not, they have no values of their own against which to assess them. Consequently, they cannot alter their behaviour in light of such an assessment. Véliz gives the example of a killer robot: it has not been programmed to think that what it does is moral; it simply lacks the capacity to question it. A moral zombie’s goals are therefore never its own, as it lacks the capacity to endorse or disapprove of them.

Algorithms cannot be morally responsible

A morally responsible agent, for Véliz, is one that is accountable in the sense that it is answerable to others. To be answerable, an agent must be able to recognise others’ interests and moral claims. Given such recognition, an agent that disrespects these interests is subject to blame and punishment.

Algorithms, however, do not consider the suffering their actions cause. Like the killer robot, they lack the capacity to evaluate, or even consider, the consequences of their actions. This is why, as with Google’s VisionAI, we do not subject algorithms themselves to moral condemnation. Moral agents are appropriate targets of praise and blame, but algorithms are unable to act otherwise than they do: they have no intentions.

Moral agency requires sentience

To conceive of what is the right thing to do (autonomy), we need to have a feel for what leads to pleasure, glee, and so on. And for our actions to be guided by our recognition of others’ moral claims (accountability), we require an understanding of others’ capacity to suffer. We do not need to have experienced every type of pain: a basic understanding allows us to extrapolate. For example, we can empathise with the pain of childbirth without having given birth ourselves.

Algorithms, however, do not feel what the right thing to do is: they have no wish to hurt or to benefit. And without feeling, we cannot value; without value, we cannot act for moral reasons. Adopting a Humean view, Véliz argues that sentiments are required for moral motivation, and algorithms lack such sentiments.

As autonomy and moral responsibility are required for moral agency, and algorithms are unable to satisfy either condition due to their lack of sentience, it follows that sentience is necessary for moral agency.

“Sentience serves as the foundation for an internal moral lab that guides us in action” (p.493).

Between the lines

The crux of Véliz’s argument, although reliant on a Humean conception of moral action, is also a compelling argument for it. Evaluating the moral agency of algorithms seems to suggest that internal desires are necessary for morally relevant action. Until recently, sentience was taken as a given: only when it is absent do we appreciate its significance.

Practically, however, sentience as a necessary condition for moral agency seems problematic. As VĂ©liz notes, defining moral agency is not a purely intellectual exercise. An agent’s liability, for example, is contingent on its moral agency. However, sentience is a private property: it cannot be externally ascertained. What if we are wrong? 

Véliz anticipates this, arguing that although we cannot ascertain that algorithms are not sentient, neither can we show that rocks are not sentient. The burden of proof therefore lies with whoever wishes to argue that they are. However, rocks do not impact the world in the way algorithms do: rocks require human involvement, whereas algorithms impact the world independently of their human designers. Moreover, their degrees of impact vary significantly. The worry remains: as moral agency is practically important, being unable to infallibly ascertain it is an issue.
