Summary contributed by our researcher Victoria Heath (@victoria_heath7), who’s also a Communications Manager at Creative Commons.
*Link to original paper + authors at the bottom.
Overview: Can robots impact human risk-taking behavior? In this study, the authors use the balloon analogue risk task (BART) to measure risk-taking behavior among participants when they were 1) alone, 2) in the presence of a silent robot, and 3) in the presence of a robot that encouraged risky behavior. The results show that risk-taking behavior did increase among participants when they were encouraged by the robot.
Can robots impact human risk-taking behavior? If so, how? These are important questions to examine because, as the authors of this study write, human risk-taking behavior has “clear ethical, policy, and theoretical implications.” Previous studies of risk-taking among human peers show that “in the presence of peers” participants “focused more on the benefits compared to the risks, and, importantly, exhibited riskier behavior.” Would similar behavior occur with robot peers? Previous studies have examined the influence of robots on human decision-making, but there are still no clear answers.
In this study, the authors use the balloon analogue risk task (BART) to measure risk-taking behavior among participants (180 undergraduate psychology students; 154 women and 26 men) when they were 1) alone (control condition), 2) in the presence of a silent robot (robot control condition), and 3) in the presence of a robot that encouraged risky behavior by providing instructions and statements (experimental condition). The authors also measure participants’ attitudes toward the robot (via the Godspeed questionnaire) and their self-reported risk-taking. The robot used in the experiment is SoftBank Robotics’ Pepper, a “medium-sized humanoid robot.”
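To make the risk measure concrete, here is a minimal Python sketch of a single BART balloon trial. The reward per pump, the maximum balloon capacity, and the uniformly distributed explosion point are illustrative assumptions drawn from the standard BART design, not parameters reported in this paper.

```python
import random

# Illustrative BART parameters (assumptions, not the paper's values).
REWARD_PER_PUMP = 0.05   # hypothetical earnings added per pump
MAX_PUMPS = 128          # hypothetical maximum balloon capacity

def run_balloon_trial(decide_to_pump) -> float:
    """Run one balloon; return money banked (0.0 if it explodes).

    decide_to_pump(pumps_so_far) -> bool stands in for the
    participant's (riskier or safer) pumping policy.
    """
    explosion_point = random.randint(1, MAX_PUMPS)  # hidden from participant
    pumps = 0
    while decide_to_pump(pumps):
        pumps += 1
        if pumps >= explosion_point:
            return 0.0  # balloon explodes; this trial's earnings are lost
    return pumps * REWARD_PER_PUMP  # participant cashes out

# Example: a policy that always cashes out after 10 pumps. More pumps
# mean higher potential earnings but a higher chance of explosion,
# which is why the mean number of pumps serves as the risk measure.
cautious = lambda pumps: pumps < 10
earnings = sum(run_balloon_trial(cautious) for _ in range(30))
print(f"Total earned over 30 balloons: ${earnings:.2f}")
```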
The results show that risk-taking behavior increased among participants when it was encouraged by the robot (experimental condition). The authors write, “They pumped the balloon significantly more often, experienced a higher number of explosions, and earned significantly more money.” Interestingly, participants in the robot control condition did not show higher risk-taking behavior than those in the control condition; the mere presence of a robot did not influence their behavior. This contrasts with findings from human peer studies, in which “evaluation apprehension” often causes people to take more risks because they fear being negatively evaluated by others. It would be interesting to see whether this finding is replicated in a study that allows participants in the robot control condition to interact with the robot before beginning the experiment.
The authors also find that, unlike those in the other groups, participants in the experimental condition did not scale back their risk-taking after experiencing explosions. As the authors write, “receiving direct encouragement from a risk-promoting robot seemed to override participants’ direct experiences and feedback.” This could be linked to the fact that participants in this group had a generally positive impression of the robot and felt “safe” by the end of the experiment.
While the authors acknowledge the limitations of their study (e.g., a sample composed mostly of women of similar age, and a focus on financial risk), the findings raise several questions and issues that should be investigated further. For example, can robots also reduce risk-taking behavior? Would it be ethical to use a robot to help someone stop smoking or drinking? Understanding our interactions with robots (or other AI agents) and their influence on our decision-making and behavior is essential as these technologies continue to become a part of our daily lives. Arguably, many of us still struggle to understand, and resist, the negative influences of our peers. Resisting the negative influence of a machine? That may be even more difficult.
Original paper by Yaniv Hanoch, Francesco Arvizzigno, Daniel Hernandez García, Sue Denham, Tony Belpaeme, and Michaela Gummerum: https://www.liebertpub.com/doi/10.1089/cyber.2020.0148