Montreal AI Ethics Institute

Democratizing AI ethics literacy


The Robot Made Me Do It: Human–Robot Interaction and Risk-Taking Behavior (Research Summary)

January 12, 2021

Summary contributed by our researcher Victoria Heath (@victoria_heath7), who’s also a Communications Manager at Creative Commons.

*Link to the original paper and authors at the bottom.


Overview: Can robots impact human risk-taking behavior? In this study, the authors use the balloon analogue risk task (BART) to measure risk-taking behavior among participants when they were 1) alone, 2) in the presence of a silent robot, and 3) in the presence of a robot that encouraged risky behavior. The results show that risk-taking behavior did increase among participants when encouraged by the robot.


Can robots impact human risk-taking behavior? If so, how? These are important questions to examine and understand because, as the authors of this study write, risk-taking behavior has “clear ethical, policy, and theoretical implications.” Previous studies on behavioral risk-taking among human peers show that “in the presence of peers” participants “focused more on the benefits compared to the risks, and, importantly, exhibited riskier behavior.” Would similar behavior be replicated among robot peers? Although previous studies examining the influence of robots on human decision-making have been conducted, there are still no clear answers.

In this study, the authors use the balloon analogue risk task (BART) to measure risk-taking behavior among participants (180 undergraduate psychology students; 154 women and 26 men) when they were 1) alone (control condition), 2) in the presence of a silent robot (robot control condition), and 3) in the presence of a robot that encouraged risky behavior by providing instructions and statements (experimental condition). The authors also measure participants’ attitudes toward robots (via the Godspeed questionnaire) and their self-reported risk-taking. The robot used for the experiment is SoftBank Robotics’ Pepper, a “medium-sized humanoid robot.”
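For readers unfamiliar with BART, the task works roughly as follows: each pump of a virtual balloon adds a small amount of money to a temporary bank but increases the chance the balloon bursts; bursting forfeits that round’s money, while cashing out banks it. Risk-taking is typically scored as the average number of pumps on balloons that did not burst. The sketch below illustrates this mechanic; the payoff per pump and the explosion schedule are common BART parameterizations assumed for illustration, not necessarily the ones used in this paper:

```python
import random

def bart_round(pumps_planned, max_pumps=128, cents_per_pump=1, rng=random):
    """Simulate one BART balloon: each pump earns money but risks an
    explosion; an explosion forfeits the round's earnings.
    Returns (cents_earned, balloon_burst)."""
    earned = 0
    for pump in range(pumps_planned):
        # A common schedule: with max_pumps possible pumps, pump k
        # bursts the balloon with probability 1 / (max_pumps - k),
        # so risk rises as the balloon fills.
        if rng.random() < 1 / (max_pumps - pump):
            return 0, True          # balloon burst, money lost
        earned += cents_per_pump
    return earned, False            # cashed out safely

def adjusted_average_pumps(pump_counts, burst_flags):
    """Standard BART risk score: mean pumps on balloons that did NOT burst."""
    safe = [p for p, burst in zip(pump_counts, burst_flags) if not burst]
    return sum(safe) / len(safe) if safe else 0.0
```

A participant who plans more pumps per balloon earns more on average but bursts more balloons, which is exactly the pattern the paper reports for the robot-encouraged group: more pumps, more explosions, and more money earned.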

The results show that risk-taking behavior increases among participants when encouraged by the robot (experimental condition). The authors write, “They pumped the balloon significantly more often, experienced a higher number of explosions, and earned significantly more money.” Interestingly, participants in the robot control condition did not show higher risk-taking behavior than those in the control condition: the mere presence of a robot didn’t influence their behavior. This contrasts with findings from human peer studies, in which “evaluation apprehension” often causes people to increase risk-taking because they fear being negatively evaluated by others. It would be interesting to see whether this finding is replicated in a study that allows participants in the robot control condition to interact with the robot before beginning the experiment.

The authors also find that participants in the experimental condition, unlike those in the other groups, did not scale back their risk-taking after experiencing explosions. As the authors put it, “receiving direct encouragement from a risk-promoting robot seemed to override participants’ direct experiences and feedback.” This could be linked to the fact that participants in this group had a generally positive impression of the robot and felt “safe” by the end of the experiment.

While the authors acknowledge the limitations of their study (e.g., a sample consisting mostly of women of the same age, a focus on financial risk), the findings raise several questions and issues that should be further investigated. For example, can robots also reduce risk-taking behavior? Would it be ethical to use a robot to help someone stop smoking or drinking? Understanding our interactions with robots (and other AI agents) and their influence on our decision-making and behavior is essential as these technologies continue to become a part of our daily lives. Arguably, many of us still struggle to understand, and resist, the negative influences of our peers. Resisting the negative influence of a machine? That may be even more difficult.


Original paper by Yaniv Hanoch, Francesco Arvizzigno, Daniel Hernandez García, Sue Denham, Tony Belpaeme, and Michaela Gummerum: https://www.liebertpub.com/doi/10.1089/cyber.2020.0148

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
