Montreal AI Ethics Institute
Ethical concerns with replacing human relations with humanoid robots: an Ubuntu perspective

June 28, 2022 by MAIEI

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Cindy Friedman]


Overview: Would you be comfortable replacing some of your human connections with a robot one? Focusing on humanoid robots, this paper harnesses the Ubuntu perspective to argue that your answer should be ‘no’.


Introduction

Would you be comfortable replacing some of your human connections with a robot one? Focusing on humanoid robots, the paper harnesses the Ubuntu perspective to argue that your answer should be ‘no’. Humanoid robots are defined as those that act and appear human. For this to occur, the robot must also exhibit social qualities. Hence, such robots respond socially, allowing humans to relate to and empathise with them. To explore this argument, I will take you through the Ubuntu stance before turning to robot interactions, mentioning their benefits before exploring their perils. I’ll conclude that replacing human interactions with humanoid relationships cannot be contemplated until we can treat robots on an equal standing with other humans.

Key Insights

Ubuntu philosophy

As a fundamental baseline, Ubuntu holds that human interaction benefits us. The philosophy postulates that we become “more human” (p. 2) through interdependent interactions and relationships with other human beings. However, what does becoming more human mean?

What it means to be human according to Ubuntu premises

Being fully human is not just being biologically human but also being moral through actions of interdependence and interconnection. Beyond that, embracing our humanity includes exhibiting, through interdependent relationships, moral traits that no other species can. Hence, Ubuntu acts as a map offering guidance on how we can be better moral versions of ourselves, which Shutte takes as a moral obligation.

In sum, should we replace these human interactions, Ubuntu provides a firm grounding as to why this is concerning. Replacing human relations with robot relations means we cannot become fully human. So, why is this a possibility at all?

Humanoid interactions

Humans still feel their emotions are, or at least can be, reciprocated by humanoid robots even when they’re not. We feel as if we’re engaging in a loving relationship because the robot displays all the social cues we have evolved to latch onto and that lead us to feel that way. This willingness to interact can be a potential force for good, especially concerning the acquisition of social skills. For example, robots designed for sex could help people overcome sexual trauma by operating in a controlled environment. Yet, from an Ubuntu perspective, the perils of such willingness to engage far outweigh the benefits.

The dangers of replacing human interactions with humanoid relationships

Relationships with robots replacing human relationships could affect how we relate to others, blurring the line between what’s human and what’s simply anthropomorphised. If we replace human interactions for good, returning to the infinitely more complicated and challenging conversation with humans becomes increasingly less likely. For example, the author mentions how autistic children at times find it easier to communicate with a robot given its more predictable and straightforward nature. Hence, these dangers can be summed up in two premises and a conclusion:

Premise 1: we become more human through having human relations with other humans.

A human being is a human through its relation to other human beings. As eloquently put by the author, “one can only experience genuinely human equality, reciprocity, and solidarity with another human being.” (p. 7).

Premise 2: we cannot have humane relations with robots.

Humanoid robots are not human. In reality, a relationship with a robot is unidirectional; the robot cannot currently reciprocate genuine feelings of love or respect. It can simulate what we take to be sentiments of love and respect, but this cannot equate to genuine sentiment.

Conclusion: We should not replace humane relations with robots.

Between the lines

A poignant thought I took away from this piece concerns equality: a core facet of the Ubuntu way of life, and one it seems we cannot practice in robot relations, given that a robot is not on a level playing field with a human. In other words, how can we be of equal standing if we create robots for specific uses that we decide and on which we subsequently evaluate their performance? Thus, before we could replace human relationships with humanoid robots, we would need to be able to treat them on an equal footing. We are most certainly not there yet.

Category: Research Summaries

  • © MONTREAL AI ETHICS INSTITUTE. All rights reserved 2021.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.
  • Learn more about our open access policy here.