Montreal AI Ethics Institute

Democratizing AI ethics literacy
Anthropomorphic interactions with a robot and robot-like agent

October 27, 2021

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Sara Kiesler, Aaron Powers, Susan R. Fussell, and Cristen Torrey]


Overview: Would you be more comfortable disclosing personal health information to a physical robot or to a chatbot? In this study, anthropomorphism proves a powerful way to encourage interaction with humans, but a poor strategy for acquiring their personal information.


Introduction

Would you be more comfortable disclosing personal health information to a physical robot or to a chatbot? This study explores whether a humanlike, physically embodied robot elicits stronger anthropomorphic interactions than a software agent such as a chatbot. Measuring the effects of both physical embodiment and physical distance, the anthropomorphised robot wins the interaction race hands down. However, when it comes to acquiring medical information, the anthropomorphic strategy leaves much to be desired.

Key Insights

Setting the scene

The main actors of the study were a physically embodied robot, the same robot projected onto a screen, a software agent (akin to a chatbot) on a computer next to the participant, and the same software agent projected onto a big screen farther away. From these, four scenarios were set out (p. 172):

  1. The participant interacts with a physically present and embodied robot.
  2. The participant communicates with the same robot, but it is projected on a big screen.
  3. The participant engages with a software agent on a nearby laptop.
  4. The participant converses with the software agent projected on the farther-away big screen.
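The four scenarios above form a 2x2 design: agent embodiment (physical robot vs. software agent) crossed with proximity (present nearby vs. projected on a distant screen). As a minimal illustrative sketch (the labels are ours, not the paper's), the conditions can be enumerated like this:

```python
from itertools import product

# Two factors, two levels each, as described in the study (p. 172).
embodiment = ["physical robot", "software agent"]
proximity = ["nearby", "projected on distant screen"]

# Crossing the factors yields the four experimental conditions.
conditions = [
    {"embodiment": e, "proximity": p}
    for e, p in product(embodiment, proximity)
]

for i, c in enumerate(conditions, start=1):
    print(f"Condition {i}: {c['embodiment']} ({c['proximity']})")
```

This framing makes the two hypotheses easy to read off: hypothesis 1 compares along the embodiment factor, while hypothesis 2 compares along the proximity factor.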

Two hypotheses were proposed:

  1. Participants will interact with, and thus anthropomorphise, the physically embodied robot more than the software agent. However, they will not disclose as much personal information to the embodied robot.
  2. Participants will interact with, and thus anthropomorphise, a software agent on a nearby computer more than a robot projected onto a big screen.

The instructions given to participants stated that the goal was to "have a discussion with this robot about basic health habits" (p. 173). Once the study was carried out, the first conclusion drawn concerned the importance of embodiment.

Robot embodiment is key

The participants interacted with the embodied robot far more than with the software agent. Not only that, but the embodied robot ranked top on all the robot trait ratings, such as trustworthiness and competency (see the table on p. 178).

In addition, the software agent was not seen as a “real” robot. The participants, of course, had their own preconceptions about how the robot was to look, with some being left disappointed when faced with a software agent.

The embodied agent vs. the software agent

Alongside this superior level of interaction, the first hypothesis was confirmed: participants disclosed less to the physical robot than to the software agent. The software agent was instead viewed as something like an administrative process that simply required personal information, which participants were more comfortable providing. While the software agent may have suffered from the lack of human interaction, this proved beneficial for acquiring the desired medical information.

The distance factor

Engagement did not vary with the physical distance between the participant and either the robot or the software agent: the difference in engagement time between the projected and non-projected versions of each agent was negligible. Hence, the study's second hypothesis was disproved.

Between the lines

While the physical robot was more anthropomorphised, it was still not seen as a fully human interlocutor. Participants mentioned how the robot, at times, wasn’t flexible and interruptible enough for an entirely natural conversation to flow. Furthermore, the higher level of anthropomorphisation did not immediately lead to a sufficient level of trust to disclose personal health information. Hence, while anthropomorphisation does generate increased human interaction, it does not naturally follow that we trust the technology.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.