Montreal AI Ethics Institute


Anthropomorphism and the Social Robot

January 12, 2022

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Brian R. Duffy]


Overview: Have you ever found a technology too human for your liking? Using anthropomorphism to integrate AI into our lives is a popular strategy, yet its success depends on striking the right balance between too much and too little.


Introduction

Anthropomorphism is a sure-fire way to integrate AI (social robots) into our lives. Construed this way, one of its benefits is the familiarity with AI, and the human interaction, that it brings about: anthropomorphism can act as the medium between humans and AI. However, this can go too far, with excessive anthropomorphisation proving counterproductive. Exploring how AI integrates into our lives thus becomes a game of balance.

Key Insights

Integrating AI into our lives

Anthropomorphism is the most obvious way to integrate robots into our lives. It harnesses the familiarity of human movement, allowing us to better rationalise and relate to the AI’s behaviour. We are therefore more likely to interact with the technology, deepening its involvement in human lives. Hence, rather than being an endeavour to design humanoid robots for their own sake, anthropomorphism aims to integrate AI into our society successfully.

As a result, social acceptance of the technology is also necessary. Familiarity with an AI through interaction is part of it, but so too is the AI’s emotional functionality. Appropriately portraying that the AI has emotions is a tricky path to tread, but it can prove pivotal in solidifying the robot’s social acceptance. As the case of Phineas Gage shows, a lack of appropriate emotional response when interacting with humans makes it impossible to fit in. An emotionless robot also plays into the traditional fear of killer AI.

The design of social robots thus needs to consider how the robots can avoid seeming fearsome to humans. For a “harmonious coexistence” (p. 186) to be established, the aforementioned familiarity with the technology can lend a hand. In this way, anthropomorphism can act as a medium for human–AI interaction.

Anthropomorphism as a medium

Given the use of ‘social’ in ‘social robots’, such robots are viewed as “the interface between man and technology” (p. 178), a first step towards seeing machines not purely as tools but as fellow members of society. Here, anthropomorphism is the “language” (p. 181) that facilitates interaction between AI and humans.

An interesting question is whether this medium allows humans to believe they are interacting with an ‘intelligent’ entity. On the strong AI view, the technology can be intelligent once it replicates the biological mechanisms of the human brain. On the weak AI view, by contrast, ‘artificial intelligence’ is an oxymoron: AI can only ever display signs of intelligence (perhaps convincingly enough to pass the historic Turing Test). Whichever view you subscribe to, anthropomorphism is not a way to seek out human approval of intelligence but rather to encourage human interaction.

However, with this aim in mind, an AI can be too anthropomorphic.

An AI can be too anthropomorphic

Duffy notes how “successful design in both software and robots in HCI needs to involve a balance of illusion” (p. 178). Social robots thrive on creating the “illusion of life and intelligence” (p. 178), and this illusion can sometimes be taken too far. Anthropomorphic AI can reach a point where its similarity to a human is unsettling and counterproductive.

As exemplified by Mori’s Uncanny Valley, there comes a point where further anthropomorphism leads to a severe drop in human interaction. Accordingly, an AI doesn’t need a full range of human-like features if they don’t help it produce its desired output.

Between the lines

I am in total agreement that anthropomorphism can reach a sub-optimal point. It can become an obsession for designers in the social robot arena, often obscuring the technique’s main driving force: soliciting human interaction. Hence, I love Duffy’s analogy that when you employ anthropomorphism, you are “not trying to replicate a bird to fly but rather recognising those qualities that lead to the invention of the plane” (p. 183). Anthropomorphism is all about learning and employing the qualities that make for good interaction, rather than just recreating a human.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.