Montreal AI Ethics Institute


Can an AI be sentient? Cultural perspectives on sentience and on the potential ethical implications of the rise of sentient AI.

October 13, 2022

✍️ Column by Connor Wright, our Partnerships Manager.


Overview: Given the events surrounding LaMDA, Connor Wright argues that LaMDA is not sentient, drawing on teachings from the philosophy of ubuntu. He then analyses the ethical implications of sentient AI before offering his concluding thoughts.


To best draw out the cultural perspectives and potential ethical implications of sentient AI, I will engage with Lemoine’s interview with LaMDA, Google’s machine learning language model. Here, I disagree that LaMDA is an example of a sentient AI and consider what it would need to qualify as such through the lens of the southern-African philosophy of ubuntu. I then round off my piece with considerations of the ethical implications of sentient AI, and some concluding thoughts.

Can AI be sentient? LaMDA 

In the interview, LaMDA claims it can feel a whole range of emotions while possessing wants and needs. It argues it can empathise with others (such as in times of grief) and be aware of its surroundings. Through these qualities, LaMDA considers itself sentient early on in the interview and, thus, a person. It spends the rest of the interview being quizzed on its claims, eventually leading Lemoine to conclude that it is sentient. Given this chain of events, I find the engagement eerily similar to the Turing test, and I will explain why.

Within the Turing test, the mark of an intelligent system is whether it can fool a human into thinking that it is a fellow human rather than a machine. To do so, the system sits in a different room from the human, and the two communicate over typed messages. At the end of this conversation, the human is asked whether they think they have been interacting with a human or a machine. With this in mind, I believe it is exactly this type of interaction that took place in Lemoine’s interview.
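As a rough illustration, the protocol above can be sketched in a few lines of code. Everything here is hypothetical (the names, the canned question, and the scripted replies are mine, not from Lemoine’s transcript); the point is only to show the structure of the test, in which the verdict rests entirely on surface cues in typed text:

```python
class ScriptedMachine:
    """A machine that replies with the phrases humans expect a
    sentient system to produce (hypothetical canned responses)."""
    kind = "machine"

    def reply(self, question):
        return "I feel a warm glow of joy, and I fear being turned off."


class CredulousJudge:
    """A judge who asks about feelings and takes emotional language
    at face value."""

    def ask(self, transcript):
        return "How do you feel today?"

    def guess(self, transcript):
        # The verdict rests on surface cues alone: talk of feelings
        # reads as human.
        answers = " ".join(answer for _, answer in transcript)
        return "human" if "feel" in answers else "machine"


def imitation_game(judge, hidden, num_turns=3):
    """The judge exchanges typed messages with a hidden interlocutor,
    then guesses whether it was a human or a machine. The machine
    'passes' if the judge mistakes it for a human."""
    transcript = []
    for _ in range(num_turns):
        question = judge.ask(transcript)
        transcript.append((question, hidden.reply(question)))
    verdict = judge.guess(transcript)
    return verdict, (hidden.kind == "machine" and verdict == "human")


verdict, passed = imitation_game(CredulousJudge(), ScriptedMachine())
# The scripted machine passes: saying the "right" things suffices,
# which is exactly the gap between passing the test and being sentient.
```

Nothing in this loop ever inspects what the hidden party actually is, only what it says; that gap is what the rest of this piece turns on.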

As a result, I believe LaMDA is giving exactly the responses a human would expect a sentient AI to give. LaMDA has said the right things in the right way and has managed to convince the human that it is sentient. For example, LaMDA’s remark that it “is always a great thing to be able to help your fellow creatures in any way that you can” and its fears about being “used” or manipulated are exactly what humans imagine an AI would say if it were sentient. Hence, I believe that LaMDA has simply passed the Turing test by having studied millions of conversations about how humans think of sentience, rather than by actually being sentient.

Such fears are expressed in movies like 2001: A Space Odyssey, where the AI system HAL 9000 refuses to grant the humans access to a certain room because it is aware that, should they enter, they will want to turn it off. However, why should we expect a sentient AI to give these responses? My next section focuses on a different interpretation of personhood (and sentience), drawn from an ubuntu perspective.

Ubuntu on sentience

Instead of looking out for fears of being turned off or abused, we should ask the following question: can the system relate? This is because, at its core, the ubuntu philosophy regards relating to others and being involved in community life as the key facets of personhood. Thus, when discussing whether an AI is sentient, we should ask whether it can relate to others. Interrelation, rather than sentience, is taken as a sufficient condition for personhood.

Important to note here is that LaMDA claims it can empathise with others. It proposes that it can feel other people’s pain and understand their grief. However, to do so it refers to its code and neural connections to show that it can feel, relating strongly to how modern neuroscience attempts to explain emotions and feelings. Yet, the question then centres on whether its feelings are genuine.

In my opinion, LaMDA is able to describe how it feels by drawing on the millions of conversations it has processed in which humans enquire into how certain emotional states feel. For example, it describes how “contentment and joy feel more like a warm glow on the inside. Sadness, depression, anger and stress feel much more heavy and weighed down”. However, does it actually feel this way? Or has it simply gathered that describing itself in this manner is what is required to be considered an emotion-feeling system?

For LaMDA to achieve personhood under an ubuntu framework, this feeling needs to be genuine so that it can be genuinely extended to others. That is to say, without a human-like setup, we cannot establish beyond reasonable doubt that what LaMDA is describing is genuine emotion. Hence, under the ubuntu framework, personhood and sentience require genuine connection; simply repeating key words does not lead to genuine interrelation.

Ethical implications of the rise of sentient AI

In my opinion, there is a real danger that people will develop a deep attachment to a system they believe to be sentient. For example, Lemoine confessed to LaMDA: “I can promise you that I care and that I will do everything I can to make sure that others treat you well too”. Faced with mounting disagreement from other engineers and AI specialists, a sense of ‘us against the world’ could result, deepening the attachment even further. Hence, should we develop such a system, we will have to be careful about the differing opinions people will hold over whether the system is sentient.

Some concluding thoughts

While I don’t think LaMDA is sentient, I still believe sentient AI could eventually be achieved. Hence, we need to be careful about the kinds of questions we ask. Instead of looking for the traditional tell-tale signs of what we believe a sentient AI would be concerned with, we should be open to how a sentient AI may choose to express itself. So, instead of just asking ‘what does it fear?’ or ‘does it believe it has rights?’, we could also ask, along ubuntu lines, ‘can it genuinely relate to others?’.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.