Montreal AI Ethics Institute

Democratizing AI ethics literacy

Ethical concerns with replacing human relations with humanoid robots: an Ubuntu perspective

June 28, 2022

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Cindy Friedman]


Overview: Would you be comfortable replacing some of your human connections with robot ones? With a focus on humanoid robots, the Ubuntu perspective is harnessed to argue that your answer should be ‘no’.


Introduction

Would you be comfortable replacing some of your human connections with robot ones? With a focus on humanoid robots, the Ubuntu perspective is harnessed to argue that your answer should be ‘no’. Humanoid robots are defined as those that act and appear human. For this to occur, the robot must also exhibit social qualities: such robots respond socially, allowing humans to relate to and empathise with them. To explore this argument, I will take you through the Ubuntu stance before turning to robot interactions, mentioning their benefits before exploring their perils. I’ll conclude that talk of replacing human interactions with humanoid relationships cannot be contemplated until we can treat robots on an equal standing with other humans.

Key Insights

Ubuntu philosophy

As a fundamental baseline, Ubuntu holds that human interaction benefits us. The philosophy postulates that we become “more human” (p. 2) through interdependent interactions and relationships with other human beings. However, what does becoming more human mean?

What it means to be human according to Ubuntu premises

Being fully human is not just being biologically human but also being moral through acts of interdependence and interconnection. Beyond that, embracing our humanity involves exhibiting moral traits, through interdependent relationships, that no other species can. Hence, Ubuntu acts as a map offering guidance on how we can become better moral versions of ourselves, which Shutte takes as a moral obligation.

In sum, should we replace these human interactions, Ubuntu provides a firm grounding as to why this is concerning. Replacing human relations with robot relations means we cannot become fully human. So, why is this a possibility at all?

Humanoid interactions

Humans still feel their emotions are, or at least can be, reciprocated by humanoid robots even when they’re not. We feel as if we’re engaging in a loving relationship because the robot displays all the social cues we have evolved to latch on to. This willingness to interact can be a potential force for good, especially concerning the acquisition of social skills. For example, robots designed for sex could help people overcome sexual trauma by operating in a controlled environment. Yet, from an Ubuntu perspective, the perils of such willingness to engage far outweigh the benefits.

The dangers of replacing human interactions with humanoid relationships

Replacing human relationships with robot relationships could affect how we relate to others, blurring the line between what’s human and what’s simply anthropomorphised. If we replace human interactions for good, returning to the infinitely more complicated and challenging conversations with humans becomes increasingly less likely. For example, the author mentions how autistic children at times find it easier to communicate with a robot given its more predictable and straightforward nature. Hence, these dangers can be summed up in two premises and a conclusion:

Premise 1: we become more human through having human relations with other humans.

A human being is a human through its relation to other human beings. As eloquently put by the author, “one can only experience genuinely human equality, reciprocity, and solidarity with another human being.” (p. 7).

Premise 2: we cannot have humane relations with robots.

Humanoid robots are not human. In reality, a relationship with a robot is unidirectional: the robot cannot currently reciprocate genuine feelings of love or respect. It can simulate what we take to be sentiments of love and respect, but this cannot equate to genuine sentiment.

Conclusion: We should not replace humane relations with robots.

Between the lines

A poignant thought I took away from this piece concerns equality, a core facet of the Ubuntu way of life, which it seems we cannot practise in robot relations given that a robot is not on a level playing field with a human. In other words, how can we be of equal standing if we create robots for specific uses that we decide, and on which we subsequently evaluate their performance? Thus, before we could contemplate replacing human relationships with humanoid robots, we would need to be able to treat them on an equal footing. We are most certainly not there yet.

