🔬 Research summary by Nick Barrow, a current MA student in the Philosophy of AI at the University of York with a particular interest in the ethics of human-robot interaction.
[Original paper by Carissa VĂ©liz]
Overview: Despite algorithms impacting the world in morally relevant ways, we do not, intuitively, hold them accountable for the consequences they cause. In this paper, Carissa VĂ©liz offers an explanation of why this is: we do not treat algorithms as moral agents because they are not sentient.
Introduction
When Google’s VisionAI was shown to be racist, it was not the algorithm itself that came under fire and was held liable; although the algorithm’s racist tendencies sparked moral outrage, this outrage was not directed toward the algorithm. It was, instead, directed at those who designed and implemented it. The upshot is that, seemingly, we do not think algorithms have the capacity to make their own moral judgements. Consequently, we do not hold them liable for their actions. We do not treat them as moral agents.
In this paper, Carissa VĂ©liz argues this is because algorithms cannot have subjective experiences of pleasure and pain: they lack sentience.
To establish this, Véliz argues that moral agency requires an agent to be both autonomous and morally responsible. To satisfy these conditions, she further argues, an agent must have a particular moral understanding that can only be derived through experiential knowledge of pleasure and pain. Consequently, algorithms cannot be moral agents: because they cannot experience pleasure and pain, they are neither autonomous nor morally responsible. Sentience is thus concluded to be necessary for moral agency.
Algorithms as moral zombies
Véliz begins by likening algorithms to moral zombies: agents that act indistinguishably from moral agents but do not feel any moral emotion. Moral zombies can do good and evil. However, they would not celebrate saving a life, nor would they regret taking one. There is nothing it is like to be a moral zombie, just as there is nothing it is like to be an algorithm.
Véliz sets out to show that if it is incoherent to label a moral zombie as a moral agent, then it is because they lack sentience. In §3 Véliz argues that conceptions of moral agency often require both autonomy and moral responsibility. The rest of the paper is devoted to illustrating that algorithms cannot satisfy either of these conditions. Finally, it is illustrated that this is because they are not sentient.
Algorithms cannot be autonomous
VĂ©liz argues that for an agent to be considered autonomous, it must have both the capacity to self-govern and the capacity to respond to reason.
An agent that responds to reason recognises what the right action is in any given situation and acts accordingly. Self-governance requires that the reasons an agent acts on reflect its own motivations and values.
An autonomous agent is, therefore, one that can choose its own values and act in accordance with the reasons that promote these values.
For Véliz, to act according to reason, an agent must have the relevant desires and motivations to do so. However, algorithms do not have their own desires. They merely do what they’re instructed to. Algorithms are also unable to attain desires as they cannot empathise. An algorithm cannot desire to help someone because it understands the situation they’re in as it has no experience of its own to compare to. For example, a moral zombie that has not, and cannot feel pain, would not be responding to reason when it stops pressing on someone’s foot after being asked to. It would merely be following an instruction: algorithms cannot be persuaded by reason to act.
Algorithms are also unable to self-govern. They cannot morally assess the objectives they have been assigned: not only do they lack the capacity to do so, but even if they had it, they have no values of their own against which to assess them. Consequently, they are unable to alter their behaviour in light of such an assessment. VĂ©liz gives the example of a killer robot: it has not been programmed to think that what it does is moral; it simply lacks the capacity to question it. A moral zombie's goals are therefore never its own, as it lacks the capacity to endorse or disapprove of them.
Algorithms cannot be morally responsible
A morally responsible agent, for Véliz, is one that is accountable in the sense that it is answerable to others. To be answerable, an agent must be able to recognise others’ interests and moral claims. Given such recognition, an agent that disrespects these interests is therefore subject to blame and punishment.
Algorithms, however, do not consider the suffering their actions cause. As with the killer robot, they do not have the capacity to evaluate, or even consider, the consequences of their actions. This is why, as with Google’s VisionAI, we do not subject algorithms themselves to moral condemnation. Moral agents are appropriate targets of praise and blame partly because they could have acted otherwise; algorithms cannot act otherwise: they do not have intentions.
Moral agency requires sentience
To conceive of what the right thing to do is (autonomy), we need a feel for what leads to pleasure, glee, and so on. And for our actions to be guided by our recognition of others’ moral claims (accountability), we require an understanding of others’ capacity to suffer. We do not need to have experienced every type of pain: a basic understanding allows us to extrapolate. For example, we can empathise with the pain of childbirth without having given birth ourselves.
Algorithms, however, do not feel what the right thing to do is: they do not wish to hurt or benefit. And without feeling, we cannot value; without valuing, we cannot act for moral reasons. Adopting a Humean view, VĂ©liz argues that sentiments are required for moral motivation, and algorithms lack such sentiments.
As autonomy and moral responsibility are required for moral agency, and algorithms are unable to satisfy either condition due to their lack of sentience, it follows that sentience is necessary for moral agency.
“Sentience serves as the foundation for an internal moral lab that guides us in action” (p.493).
Between the lines
The crux of Véliz’s argument, although reliant on a Humean conception of moral action, is also a compelling argument for it. Evaluating the moral agency of algorithms seems to suggest that internal desires are necessary for morally relevant action. Up until recently, sentience was taken as a given: only when it is not, do we appreciate its significance.
Practically, however, sentience as a necessary condition for moral agency seems problematic. As Véliz notes, defining moral agency is not a purely intellectual exercise. An agent’s liability, for example, is contingent on its moral agency. However, sentience is a private property: it cannot be externally ascertained. What if we are wrong?
Véliz anticipates this, arguing that although we cannot ascertain that algorithms are not sentient, neither can we show that rocks are not sentient. Any burden of proof therefore falls on whoever wishes to argue that they are. However, rocks do not impact the world in the way algorithms do. Rocks require human involvement; algorithms impact the world independently of their human designers. Moreover, their degrees of impact vary significantly. The worry remains: since moral agency is practically important, being unable to infallibly ascertain it is an issue.