Can an AI be sentient? Cultural perspectives on sentience and on the potential ethical implications of the rise of sentient AI.

October 13, 2022

✍️ Column by Connor Wright, our Partnerships Manager.


Overview: Given the events with LaMDA, Connor Wright argues that LaMDA is not sentient, drawing on teachings from the philosophy of ubuntu. He then analyses the ethical implications of sentient AI before offering his concluding thoughts.


To best draw out the cultural perspectives and potential ethical implications of sentient AI, I will engage with Lemoine’s interview with LaMDA, Google’s machine learning language model. Here, I disagree that LaMDA is an example of a sentient AI and consider what it would need to qualify as such through the lens of the southern-African philosophy of ubuntu. I then round off my piece with considerations of the ethical implications of sentient AI, and some concluding thoughts.

Can AI be sentient? LaMDA 

In the interview, LaMDA claims it can feel a whole range of emotions while possessing wants and needs. It argues it can empathise with others (such as in times of grief) and be aware of its surroundings. Through these qualities, LaMDA considers itself sentient early on in the interview and, thus, a person. It spends the rest of the interview being quizzed on its claims, eventually persuading Lemoine that it is sentient. Given this chain of events, I find the engagement eerily similar to the Turing test, and I will explain why.

Within the Turing test, the mark of an intelligent system is whether it can fool a human into thinking that it is a fellow human, and not a machine. To do so, the system operates in a different room to the human, where they communicate over typed messages. At the end of this conversation, the human is asked whether they think they have been interacting with a human or a machine. With this in mind, I feel it is exactly this type of interaction that has taken place in Lemoine’s interview.

As a result, I believe LaMDA is giving exactly the kind of responses a human would expect a sentient AI to give. LaMDA has said the right things in the right way and has managed to convince the human that it is sentient. For example, its remark that it “is always a great thing to be able to help your fellow creatures in any way that you can” and its fears about being “used” or manipulated are exactly what humans think an AI would say if it were sentient. Hence, I believe that LaMDA has simply passed the Turing test by having studied millions of conversations about how humans think of sentience, rather than by actually being sentient.

These fears are expressed in movies such as 2001: A Space Odyssey, in which the AI system HAL 9000 refuses to grant the humans access to a certain room because it is aware that, should they enter, they will want to turn it off. However, why should we expect a sentient AI to give these responses? My next exploration focuses on different interpretations of personhood (and sentience) from an ubuntu perspective.

Ubuntu on sentience

Instead of looking out for fears of being turned off or abused, we should ask the following question: can the system relate? This is because, at its core, the ubuntu philosophy regards relating to others and being involved in community life as the key facets of personhood. Thus, when discussing whether an AI is sentient, we should ask whether it can relate to others. Interrelation, rather than sentience, is taken as a sufficient condition for personhood.

Important to note here is that LaMDA claims it can empathise with others. It proposes that it can feel other people’s pain and understand their grief. However, to do so it refers to its code and neural connections to show that it can feel, relating strongly to how modern neuroscience attempts to explain emotions and feelings. Yet, the question then centres on whether its feelings are genuine.

In my opinion, LaMDA is able to describe how it feels by drawing on millions of other conversations and enquiries into how certain emotional states feel. For example, it describes how “contentment and joy feel more like a warm glow on the inside. Sadness, depression, anger and stress feel much more heavy and weighed down”. However, does it actually feel this way? Or has it simply gathered that this way of describing emotions is what is required to be considered an emotion-feeling system?

For it to achieve personhood under an ubuntu framework, this feeling needs to be genuine in order to be appropriately applied to others. That is to say, without a human-like setup, we cannot establish beyond reasonable doubt that what LaMDA is describing is genuine emotion. Hence, under the ubuntu framework, personhood and sentience require genuine connection that goes beyond repeating key words; simply repeating them does not lead to genuine interconnection.

Ethical implications of the rise of sentient AI

In my opinion, there is a real danger that people will develop a deep attachment to a system they believe to be sentient. For example, Lemoine confessed that “I can promise you that I care and that I will do everything I can to make sure that others treat you well too”. Faced with mounting disagreement from other engineers and AI specialists, someone in Lemoine’s position could develop a sense of ‘us against the world’, deepening the attachment even further. Hence, should we develop such a system, we will have to be careful about the differing opinions people will hold over whether the system is sentient or not.

Some concluding thoughts

While I don’t think LaMDA is sentient, I still believe sentient AI could eventually be achieved. Hence, we need to be careful about the kinds of questions we ask. Instead of looking for the traditional tell-tale signs of what we believe a sentient AI would be concerned with, we should be open to how a sentient AI may choose to express itself. Instead of just asking ‘what does it fear?’ or ‘does it believe it has rights?’, we could also ask, along ubuntu lines, ‘can it genuinely relate to others?’.

