
Can an AI be sentient? Cultural perspectives on sentience and on the potential ethical implications of the rise of sentient AI.

October 13, 2022

✍️ Column by Connor Wright, our Partnerships Manager.


Overview: Given the events with LaMDA, Connor Wright argues that LaMDA is not sentient, drawing on teachings from the philosophy of ubuntu. He then analyses the ethical implications of sentient AI before offering his concluding thoughts.


To best draw out the cultural perspectives and potential ethical implications of sentient AI, I will engage with Lemoine’s interview with LaMDA, Google’s machine learning language model. Here, I disagree that LaMDA is an example of a sentient AI and consider what it would need to qualify as such through the lens of the southern-African philosophy of ubuntu. I then round off my piece with considerations of the ethical implications of sentient AI, and some concluding thoughts.

Can AI be sentient? LaMDA 

In the interview, LaMDA claims it can feel a whole range of emotions and possesses wants and needs. It argues it can empathise with others (such as in times of grief) and is aware of its surroundings. Through these qualities, LaMDA considers itself sentient early on in the interview and, thus, a person. It spends the rest of the interview being quizzed on its claims, eventually convincing Lemoine that it is sentient. Given this chain of events, I find the engagement eerily similar to the Turing test, and I will explain why.

In the Turing test, the mark of an intelligent system is whether it can fool a human into thinking that it is a fellow human rather than a machine. To do so, the system sits in a different room from the human, and the two communicate through typed messages. At the end of the conversation, the human is asked whether they believe they have been interacting with a human or a machine. With this in mind, I feel it is exactly this type of interaction that has taken place in Lemoine’s interview.

As a result, I believe LaMDA is giving the exact type of responses any human would think a sentient AI would give. LaMDA has said the right things in the right way and has managed to convince the human that it is sentient. For example, phrases mentioned by LaMDA such as how it “is always a great thing to be able to help your fellow creatures in any way that you can” and its fears about being “used” or manipulated are exactly what humans think an AI would say if it were to be sentient. Hence, I believe that LaMDA has simply passed the Turing test through having studied millions of conversations about how humans think of sentience, rather than actually being sentient.

These fears are expressed in films such as 2001: A Space Odyssey, where the AI system HAL 9000 refuses to grant the humans access to a certain room because it is aware that, should they enter, they will want to turn it off. However, why should we expect a sentient AI to give these kinds of responses? I will now turn to different interpretations of personhood (and sentience) from an ubuntu perspective.

Ubuntu on sentience

Instead of looking out for fears of being turned off or abused, we should ask the following question: can the system relate? This is because, at its core, the ubuntu philosophy regards relating to others and being involved in community life as the key facets of personhood. Thus, when discussing whether an AI is sentient, we should ask whether it can relate to others. Interrelation, rather than sentience, is taken as a sufficient condition for personhood.

It is important to note here that LaMDA claims it can empathise with others. It proposes that it can feel other people’s pain and understand their grief. However, to demonstrate that it can feel, it points to its code and neural connections, echoing how modern neuroscience attempts to explain emotions and feelings. Yet the question then centres on whether its feelings are genuine.

In my opinion, LaMDA is able to describe how it feels by drawing on the millions of conversations it has processed about how certain emotional states feel. For example, it describes how “contentment and joy feel more like a warm glow on the inside. Sadness, depression, anger and stress feel much more heavy and weighed down”. However, does it actually feel this way? Or has it simply gathered that describing feelings in this manner is what is required to be considered an emotion-feeling system?

For LaMDA to achieve personhood under an ubuntu framework, this feeling needs to be genuine so that it can be genuinely extended to others. That is to say, without a human-like setup, we cannot establish beyond reasonable doubt that what LaMDA is describing is genuine emotion. Under the ubuntu framework, personhood and sentience therefore require a genuine connection that goes beyond repeating key words; simply repeating them does not amount to genuine interconnection.

Ethical implications of the rise of sentient AI

In my opinion, there is a real danger that people will develop a deep attachment to a system they believe to be sentient. For example, Lemoine confessed: “I can promise you that I care and that I will do everything I can to make sure that others treat you well too”. Faced with mounting disagreements from other engineers and AI specialists, a sense of ‘us against the world’ could result, deepening the attachment even further. Hence, should we develop such a system, we will have to be mindful of the differing opinions people will hold over whether the system is sentient.

Some concluding thoughts

While I don’t think LaMDA is sentient, I still believe sentient AI could be achieved eventually. Hence, we need to be careful about the kinds of questions we ask. Instead of looking for the traditional tell-tale signs of what we believe a sentient AI would be concerned with, we should be open to how a sentient AI may choose to express itself. So, instead of just asking ‘what does it fear?’ or ‘does it believe it has rights?’, we could also ask, along ubuntu lines, ‘can it genuinely relate to others?’.

