Montreal AI Ethics Institute

Democratizing AI ethics literacy


Anthropomorphism and the Social Robot

January 12, 2022

šŸ”¬ Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Brian R. Duffy]


Overview: Have you ever found a technology too human for your liking? Utilising anthropomorphism to involve AI in our lives is a popular strategy. Yet, its success depends on striking the right balance between too much and too little.


Introduction

Anthropomorphism is a sure-fire way to integrate AI (social robots) into our lives. Construed in this way, the familiarity with AI and the human interaction it brings about are among anthropomorphism’s benefits, allowing it to act as a medium between humans and AI. However, this can go too far, with too much anthropomorphisation proving counterproductive. Hence, integrating AI into our lives becomes a game of balance.

Key Insights

Integrating AI into our lives

Anthropomorphism is the most obvious way to integrate robots into our lives. It harnesses the familiarity of human movement, allowing us to better rationalise and relate to the AI’s behaviour. In this way, we are more likely to interact with the technology, deepening its involvement in human lives. Hence, rather than being an endeavour to design humanoid robots for their own sake, anthropomorphism aims to integrate AI into our society successfully.

As a result, social acceptance of the technology is also necessary. Familiarity with an AI through interaction is part of it, but so too is the AI’s emotional functionality. Appropriately portraying that the AI has emotions is a tricky path to tread, but it can prove pivotal in solidifying the robot’s social acceptance. As the case of Phineas Gage shows, a lack of appropriate emotional response when interacting with humans makes it impossible to fit in. An emotionless robot also plays into the traditional fear of killer robots.

The design of social robots must consider how to avoid seeming fearsome to humans. For a ā€œharmonious coexistenceā€ (p. 186) to be established, the aforementioned familiarity with the technology can lend a hand. In this way, anthropomorphism can act as a medium for human-AI interaction.

Anthropomorphism as a medium

Given the use of ā€˜social’ in ā€˜social robots’, these machines are viewed as ā€œthe interface between man and technologyā€ (p. 178), a first step toward seeing them not purely as tools but as fellow members of society. Here, anthropomorphism is the ā€œlanguageā€ (p. 181) that facilitates interaction between AI and humans.

An interesting question is whether this medium allows humans to believe they are interacting with an ā€˜intelligent’ entity. On the strong AI view, the technology can be intelligent once it replicates the biological mechanisms of the human brain. On the other hand, the weak AI view sees ā€˜artificial intelligence’ as an oxymoron, holding that AI can only ever display signs of intelligence (perhaps well enough to pass the historic Turing Test). No matter which view you subscribe to, anthropomorphism is not a way to seek out human recognition of intelligence but rather to encourage human interaction.

However, with this aim in mind, an AI can be too anthropomorphic.

An AI can be too anthropomorphic

Duffy notes how ā€œSuccessful design in both software and robots in HCI needs to involve a balance of illusionā€ (p. 178). Social robots thrive on creating the ā€œillusion of life and intelligenceā€ (p. 178), and this illusion can sometimes be taken too far. Anthropomorphic AI can reach a point where its similarity to a human is unsettling and counterproductive.

As exemplified by Mori’s Uncanny Valley, there comes a point where further anthropomorphism leads to a severe drop in human interaction. In this way, an AI doesn’t need a full range of human-like features if they don’t help it achieve its desired function.

Between the lines

I agree entirely that anthropomorphism can reach a sub-optimal point. It can become an obsession for designers in the social robot arena, often obscuring the technique’s main driving force: soliciting human interaction. Hence, I love Duffy’s analogy that when you employ anthropomorphism, you are ā€œnot trying to replicate a bird to fly but rather recognising those qualities that lead to the invention of the planeā€ (p. 183). Here, anthropomorphism is all about learning and employing the qualities that make for good interaction, rather than simply recreating a human.


Ā© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.