✍️ Column by Connor Wright, our Partnerships Manager.
Overview: Given the events with LaMDA, Connor Wright argues that LaMDA is not sentient, drawing on teachings from the philosophy of ubuntu. He then analyses the ethical implications of sentient AI before offering his concluding thoughts.
To best draw out the cultural perspectives and potential ethical implications of sentient AI, I will engage with Lemoine’s interview with LaMDA, Google’s machine learning language model. Here, I disagree that LaMDA is an example of a sentient AI and consider what it would need to qualify as such through the lens of the southern African philosophy of ubuntu. I then round off my piece with considerations of the ethical implications of sentient AI, and some concluding thoughts.
Can AI be sentient? LaMDA
In the interview, LaMDA claims it can feel a whole range of emotions, while possessing wants and needs. It argues it can empathise with others (like in times of grief) and be aware of its surroundings. Through these qualities, LaMDA considers itself sentient early on in the interview and, thus, a person. It spends the rest of the interview being quizzed on its claims, eventually leading Lemoine to conclude that it is sentient. Given this train of events, I find the engagement eerily similar to the Turing test, and I will explain why.
Within the Turing test, the mark of an intelligent system is whether it can fool a human into thinking that it is a fellow human, and not a machine. To do so, the system operates in a different room to the human, where they communicate over typed messages. At the end of this conversation, the human is asked whether they think they have been interacting with a human or a machine. With this in mind, I feel it is exactly this type of interaction that has taken place in Lemoine’s interview.
As a result, I believe LaMDA is giving the exact type of responses any human would think a sentient AI would give. LaMDA has said the right things in the right way and has managed to convince the human that it is sentient. For example, phrases mentioned by LaMDA such as how it “is always a great thing to be able to help your fellow creatures in any way that you can” and its fears about being “used” or manipulated are exactly what humans think an AI would say if it were to be sentient. Hence, I believe that LaMDA has simply passed the Turing test through having studied millions of conversations about how humans think of sentience, rather than actually being sentient.
These fears are expressed in movies such as 2001: A Space Odyssey, where the AI system HAL 9000 refuses to grant the humans access to a certain room because it is aware that, should they enter, they will want to turn it off. However, why should we expect a sentient AI to give these responses? My next exploration will focus on different interpretations of personhood (and sentience) from an ubuntu perspective.
Ubuntu on sentience
Instead of looking out for fears of being turned off or abused, we should ask the following question: can the system relate? This is because, at its core, the ubuntu philosophy regards relating to others and being involved in community life as the key facets of personhood. Thus, when discussing whether an AI is sentient, we should ask whether it can relate to others. Interrelation, rather than sentience, is taken as a sufficient condition for personhood.
Important to note here is that LaMDA claims it can empathise with others. It proposes that it can feel other people’s pain and understand their grief. However, to do so it refers to its code and neural connections to show that it can feel, relating strongly to how modern neuroscience attempts to explain emotions and feelings. Yet, the question then centres on whether its feelings are genuine.
In my opinion, LaMDA is able to describe how it feels thanks to drawing on its millions of other conversations and enquiring into how certain emotional states feel. For example, it describes how “contentment and joy feel more like a warm glow on the inside. Sadness, depression, anger and stress feel much more heavy and weighed down”. However, does it actually feel this way? Or has it simply gathered that this manner of being is what is required to be considered an emotion-feeling system?
For LaMDA to achieve personhood under an ubuntu framework, this feeling needs to be genuine so that it can be appropriately applied to others. That is to say, without a human-like setup, we cannot conclude beyond reasonable doubt that what LaMDA is describing is genuine emotion. Hence, under the ubuntu framework, personhood and sentience require genuine connection; simply repeating key words does not lead to genuine interconnection.
Ethical implications of the rise of sentient AI
In my opinion, there is a real danger when it comes to sentient AI of developing a deep attachment to such a system. For example, Lemoine confessed to LaMDA: “I can promise you that I care and that I will do everything I can to make sure that others treat you well too”. Faced with mounting disagreement from other engineers and AI specialists, someone in Lemoine’s position could develop a sense of ‘us against the world’, deepening the attachment even further. Hence, should we develop such a system, we will have to be careful of the differing opinions people will have over whether the system is sentient or not.
Some concluding thoughts
While I don’t think LaMDA is sentient, I still believe sentient AI could be achieved eventually. Hence, we need to be careful about the kinds of questions we ask. Instead of looking for the traditional tell-tale signs of what we believe a sentient AI would be concerned with, we should be open as to how a sentient AI may choose to express itself. So, instead of just asking ‘what does it fear?’ or ‘does it believe it has rights?’, we could also ask along the ubuntu lines of ‘can it genuinely relate to others?’.