Montreal AI Ethics Institute

Ethical concerns with replacing human relations with humanoid robots: an Ubuntu perspective

June 28, 2022

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Cindy Friedman]


Overview: Would you be comfortable replacing some of your human connections with a robot one? With a focus on humanoid robots, the Ubuntu perspective is harnessed to argue that your answer should be ‘no’.


Introduction

Humanoid robots are defined as those that act and appear human. To do so, a robot must also exhibit social qualities: it responds socially, allowing humans to relate to and empathise with it. To explore this argument, I will take you through the Ubuntu stance before turning to robot interactions, mentioning their benefits and then exploring their perils. I'll conclude that replacing human interactions with humanoid relationships cannot be contemplated until we can treat robots on an equal footing with other humans.

Key Insights

Ubuntu philosophy

As a fundamental baseline, Ubuntu holds that human interaction benefits us. The philosophy postulates that we become “more human” (p. 2) through interdependent interactions and relationships with other human beings. However, what does becoming more human mean?

What it means to be human according to Ubuntu premises

Being fully human is not just being biologically human but also being moral through actions of interdependence and interconnection. Beyond that, embracing our humanity includes exhibiting, through interdependent relationships, moral traits that no other species can. Hence, Ubuntu acts as a map, offering guidance on how we can become better moral versions of ourselves, which Shutte takes to be a moral obligation.

In sum, should we replace these human interactions, Ubuntu provides firm grounds for concern: replacing human relations with robot relations means we cannot become fully human. So why is this a possibility at all?

Humanoid interactions

Humans still feel their emotions are, or at least can be, reciprocated by humanoid robots even when they are not. We feel as if we’re engaging in a loving relationship because the robot displays all the social cues we have evolved to latch onto. This willingness to interact can be a potential force for good, especially for the acquisition of social skills. For example, robots designed for sex could help people overcome sexual trauma by operating in a controlled environment. Yet, from an Ubuntu perspective, the perils of such willingness to engage far outweigh the benefits.

The dangers of replacing human interactions with humanoid relationships

Relationships with robots replacing human relationships could affect how we relate to others, blurring the line between what is human and what is simply anthropomorphised. If we replace human interactions permanently, returning to the infinitely more complicated and challenging business of conversing with humans becomes increasingly unlikely. For example, the author mentions how autistic children at times find it easier to communicate with a robot, given its more predictable and straightforward nature. These dangers can be summed up in two premises and a conclusion:

Premise 1: we become more human through having human relations with other humans.

A human being is a human through its relation to other human beings. As eloquently put by the author, “one can only experience genuinely human equality, reciprocity, and solidarity with another human being.” (p. 7).

Premise 2: we cannot have humane relations with robots.

Humanoid robots are not human. A relationship with a robot is unidirectional: the robot cannot currently reciprocate genuine feelings of love or respect. It can simulate what we take to be sentiments of love and respect, but simulation does not equate to genuine sentiment.

Conclusion: We should not replace humane relations with robots.

Between the lines

A poignant thought I took away from this piece concerns equality, a core facet of the Ubuntu way of life, which it seems we cannot practice in robot relations given that a robot is not on a level playing field with a human. In other words, how can we be of equal standing when we create robots for specific uses that we decide, and on which we subsequently evaluate their performance? Thus, before we could replace human relationships with humanoid robots, we would need to be able to treat them as equals. We are most certainly not there yet.


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.