Montreal AI Ethics Institute
Democratizing AI ethics literacy


The Robot Made Me Do It: Human–Robot Interaction and Risk-Taking Behavior (Research Summary)

January 12, 2021

Summary contributed by our researcher Victoria Heath (@victoria_heath7), who’s also a Communications Manager at Creative Commons.

*Link to original paper + authors at the bottom.


Overview: Can robots impact human risk-taking behavior? In this study, the authors use the balloon analogue risk task (BART) to measure risk-taking behavior among participants when they were 1) alone, 2) in the presence of a silent robot, and 3) in the presence of a robot that encouraged risky behavior. The results show that risk-taking behavior did increase among participants when they were encouraged by the robot.


Can robots impact human risk-taking behavior? If so, how? These are important questions to examine because, as the authors of this study write, they have “clear ethical, policy, and theoretical implications.” Previous studies of behavioral risk-taking among human peers show that “in the presence of peers” participants “focused more on the benefits compared to the risks, and, importantly, exhibited riskier behavior.” Would similar behavior be replicated with robot peers? Although previous studies have examined the influence of robots on human decision-making, there are still no clear answers.

In this study, the authors use the balloon analogue risk task (BART) to measure risk-taking behavior among participants (180 undergraduate psychology students; 154 women and 26 men) when they were 1) alone (control condition), 2) in the presence of a silent robot (robot control condition), and 3) in the presence of a robot that encouraged risky behavior by providing instructions and statements (experimental condition). The authors also administered the Godspeed questionnaire (which measures attitudes toward robots) and collected participants’ self-reported risk-taking. The robot used in the experiment was SoftBank Robotics’ Pepper, a “medium-sized humanoid robot.”
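For readers unfamiliar with the task, here is a minimal sketch of the BART mechanics in Python: each pump adds money to a temporary bank, and an explosion wipes out that trial’s earnings. The payoff and balloon-capacity values below are illustrative assumptions, not the parameters used in the study.

    import random

    # Minimal sketch of a single BART trial (illustrative values, not the
    # study's actual parameters).
    PAYOFF_PER_PUMP = 0.05   # money banked per successful pump (assumption)
    MAX_CAPACITY = 128       # explosion point drawn uniformly up to this

    def run_trial(num_pumps: int, rng: random.Random) -> float:
        """Pump the balloon num_pumps times; return the money earned.

        If the balloon explodes before the participant stops pumping,
        the trial's temporary bank is lost and the trial pays nothing.
        """
        explosion_point = rng.randint(1, MAX_CAPACITY)
        if num_pumps >= explosion_point:
            return 0.0  # balloon exploded
        return num_pumps * PAYOFF_PER_PUMP

    rng = random.Random(0)
    # Compare a cautious strategy with a riskier one over 100 trials each.
    for pumps in (8, 64):
        earnings = [run_trial(pumps, rng) for _ in range(100)]
        print(f"{pumps} pumps/trial: mean earnings {sum(earnings) / len(earnings):.2f}")

Under this payoff structure, pumping more raises expected earnings until the explosion risk dominates, which is consistent with the encouraged group both experiencing more explosions and earning more money.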

The results show that risk-taking behavior increases among participants when encouraged by the robot (experimental condition). The authors write, “They pumped the balloon significantly more often, experienced a higher number of explosions, and earned significantly more money.” Interestingly, participants in the robot control condition did not show higher risk-taking behavior than those in the control condition; the mere presence of a robot did not influence their behavior. This contrasts with findings from human peer studies, in which “evaluation apprehension” often causes people to increase risk-taking because they fear being negatively evaluated by others. It would be interesting to see whether this finding is replicated in a study that allows participants in the robot control condition to interact with the robot before beginning the experiment.

The authors also find that although participants in the experimental condition experienced explosions, they did not scale back their risk-taking the way participants in the other groups did. As the authors write, “receiving direct encouragement from a risk-promoting robot seemed to override participants’ direct experiences and feedback.” This could be linked to the fact that participants in this group had a generally positive impression of the robot and felt “safe” by the end of the experiment.

While the authors acknowledge the limitations of their study (e.g., a sample consisting mostly of women of similar age, and a focus on financial risk), the findings raise several questions that should be investigated further. For example, can robots also reduce risk-taking behavior? Would it be ethical to use a robot to help someone stop smoking or drinking? Understanding our interactions with robots (and other AI agents) and their influence on our decision-making and behavior is essential as these technologies become part of our daily lives. Arguably, many of us still struggle to understand, and resist, the negative influences of our peers. Resisting the negative influence of a machine? That may be even more difficult.


Original paper by Yaniv Hanoch, Francesco Arvizzigno, Daniel Hernandez García, Sue Denham, Tony Belpaeme, and Michaela Gummerum: https://www.liebertpub.com/doi/10.1089/cyber.2020.0148

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

