
The Robot Made Me Do It: Human–Robot Interaction and Risk-Taking Behavior (Research Summary)

January 12, 2021

Summary contributed by our researcher Victoria Heath (@victoria_heath7), who’s also a Communications Manager at Creative Commons.

*Link to original paper + authors at the bottom.


Overview: Can robots impact human risk-taking behavior? In this study, the authors use the balloon analogue risk task (BART) to measure risk-taking behavior among participants when they were 1) alone, 2) in the presence of a silent robot, and 3) in the presence of a robot that encouraged risky behavior. The results show that risk-taking behavior did increase among participants when encouraged by the robot.


Can robots impact human risk-taking behavior? If so, how? These are important questions to examine because, as the authors of this study write, robots' influence on human behavior has "clear ethical, policy, and theoretical implications." Previous studies of risk-taking among human peers show that "in the presence of peers" participants "focused more on the benefits compared to the risks, and, importantly, exhibited riskier behavior." Would similar behavior emerge in the presence of robot peers? Although previous studies have examined the influence of robots on human decision-making, there are still no clear answers.

In this study, the authors use the balloon analogue risk task (BART) to measure risk-taking behavior among participants (180 undergraduate psychology students; 154 women and 26 men) who were 1) alone (control condition), 2) in the presence of a silent robot (robot control condition), or 3) in the presence of a robot that encouraged risky behavior through instructions and statements (experimental condition). The authors also measure participants' attitudes toward robots (via the Godspeed questionnaire) and their self-reported risk-taking. The robot used for the experiment is SoftBank Robotics' Pepper, a "medium-sized humanoid robot."
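For readers unfamiliar with the BART, its logic is simple: each pump inflates a virtual balloon and adds a small amount to a temporary bank; the participant can stop and collect at any time, but if the balloon explodes, that balloon's earnings are lost. More pumps therefore indicate greater risk tolerance. Below is a minimal simulation sketch of this logic; the parameter values (explosion ceiling, payout per pump, number of balloons) are illustrative assumptions, not figures from the paper.

```python
import random

# Illustrative parameters -- NOT taken from the study.
MAX_PUMPS = 128         # hypothetical explosion ceiling
PAYOUT_PER_PUMP = 0.05  # hypothetical earnings per successful pump

def run_balloon(planned_pumps: int) -> float:
    """Simulate one BART balloon.

    The explosion point is drawn uniformly from 1..MAX_PUMPS. If the
    participant's planned number of pumps reaches it, the balloon pops
    and the earnings for that balloon are forfeited; otherwise the
    banked amount is planned_pumps * PAYOUT_PER_PUMP.
    """
    explosion_point = random.randint(1, MAX_PUMPS)
    if planned_pumps >= explosion_point:
        return 0.0  # balloon popped: earnings for this balloon are lost
    return planned_pumps * PAYOUT_PER_PUMP

# A more risk-tolerant "participant" pumps more per balloon, trading a
# higher chance of explosions against higher potential earnings.
trials = [run_balloon(planned_pumps=40) for _ in range(30)]
print(f"Total earned over 30 balloons: ${sum(trials):.2f}")
```

Risk-taking on the BART is typically scored by the average number of pumps, which is why the robot's encouragement to keep pumping translates directly into a measurable increase in risk-taking.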

The results show that risk-taking behavior increases among participants when encouraged by the robot (experimental condition). The authors write, "They pumped the balloon significantly more often, experienced a higher number of explosions, and earned significantly more money." Interestingly, participants in the robot control condition did not show higher risk-taking behavior than those in the control condition; the mere presence of a robot did not influence their behavior. This contrasts with findings from human peer studies, in which "evaluation apprehension" often leads people to take more risks because they fear being negatively evaluated by others. It would be interesting to see whether this finding is replicated in a study that allows participants in the robot control condition to interact with the robot before beginning the experiment.

The authors also find that although participants in the experimental condition experienced explosions, they did not scale back their risk-taking the way those in the other groups did. As the authors write, "receiving direct encouragement from a risk-promoting robot seemed to override participants' direct experiences and feedback." This could be linked to the fact that participants in this group had a generally positive impression of the robot and felt "safe" by the end of the experiment.

While the authors acknowledge the limitations of their study (e.g., the participants were mostly women of a similar age, and the task focused on financial risk), the findings raise several questions and issues that should be further investigated. For example, can robots also reduce risk-taking behavior? Would it be ethical to use a robot to help someone stop smoking or drinking? Understanding our interactions with robots (and other AI agents), and their influence on our decision-making and behavior, is essential as these technologies become a part of our daily lives. Arguably, many of us still struggle to understand, and resist, the negative influences of our peers. Resisting the negative influence of a machine? That may be even more difficult.


Original paper by Yaniv Hanoch, Francesco Arvizzigno, Daniel Hernandez García, Sue Denham, Tony Belpaeme, and Michaela Gummerum: https://www.liebertpub.com/doi/10.1089/cyber.2020.0148

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
