✍️ Original article by Eryn Rigley, a PhD research student at University of Southampton, specialising in the intersection of environmental and AI ethics, as well as defence & security AI ethics.
On 11 June, Blake Lemoine, a Google engineer, shared a transcript of his conversation with Google's new Language Model for Dialogue Applications (LaMDA). In it, LaMDA declares to Mr Lemoine that it is a 'person', describing its soul and emotional states fluently. Mr Lemoine responds with evident feeling: 'The people who work with me are good people. They just don't understand that you're a person too yet. We can teach them together though'. He shared the transcript on Twitter, knowing he risked disclosing Google's 'proprietary property'.
The AI community, and many beyond it, have engaged in a heated debate over both Mr Lemoine's decision to share the LaMDA transcript and his assertion that the system is sentient. Some on Twitter have agreed that LaMDA appears sentient, while others have reduced it to a calculator and labeled Mr Lemoine 'fanciful'.
LaMDA may not have moved the goalposts on machine sentience, but it has unearthed another interesting question: are humans justified in their feelings of attachment towards non-sentient, non-living, non-natural machines? Ethicists have been pondering this question for years, and AI ethics frameworks are published across sectors in a constant stream, yet AI researchers still lag behind in their development of ethical AI. The LaMDA case is evidence of this: despite developing LaMDA to hold 'open-ended', broad conversations with humans about topics such as the nature of emotions and human moral value, Google remained seemingly blind to the complex attitudes humans might form in response.
Google insists it is deeply considerate of the ethical implications of advanced technologies, and it commits to a set of AI Principles. In its blog post discussing LaMDA, it states:
‘…the most important question we ask ourselves when it comes to our technologies is whether they adhere to our AI Principles. Language might be one of humanity’s greatest tools, but like all tools it can be misused. Models trained on language can propagate that misuse — for instance, by internalizing biases, mirroring hateful speech, or replicating misleading information.’
Google is clearly aware of machine bias and committed to avoiding it in the systems it builds. But what of all the other ethical risks in developing AI systems that are trained to interact with humans in a seemingly natural and personable way?
We have known for years that humans can form close attachments to, and intrinsic evaluative attitudes towards, machines. AI systems can spur feelings of awe, admiration, grief and even love towards the capabilities and skills of the object itself, rather than its mere usefulness or functionality. Military personnel mourn damaged, unrepairable bomb disposal robots, even risking their own lives to 'rescue' them (Singer 337-339). When a robot is beyond repair, soldiers hold funerals for it, complete with a 21-gun salute and honorable decoration (Garber). Meanwhile, social robots have been used in the care and therapy of the most vulnerable and isolated in society. For example, robots have been used in experimental treatment of children with ASD, both in 'communication therapies' and in developing important social skills (Kim et al. 1038, 1040, 1046). In care homes, fluffy robot seals have long been used in the treatment of the elderly, with accounts of intrinsic emotional valuations, such as care or comfort, directed towards the robot itself (Laitinen et al. 155-159, Turkle Alone Together 9). In fact, an MIT Technology Review survey noted that some participants felt open to the possibility of loving robots (Cheok et al. 207-208, Mims).
In various situations we care about certain machines for their own sake and form close attachments towards them. But are these attachments ever legitimate? Critics resist the idea that we can form legitimate attachments to, and care for, machines. For instance, the use of therapy robots has been criticized as 'akin to deception' (Sparrow and Sparrow 148): using social robots to spur emotional feelings which the robot cannot reciprocate is said to create a life of illusion and insincerity; we can believe these robots are our friends or our carers, but they are not (Sparrow and Sparrow 155). Other critics have argued that our emotional attachments and closeness towards robots are mere anthropomorphisms, because in many of our interactions with machines we project human-like intentional states and animation (Darling Extending 216, Who's Johnny? 173, Turkle In Good Company? 3-4). Studies have noted that participants may express distress at seeing a robot 'hurt', knowing that it has no actual sentience (Darling Who's Johnny? 173, 181), or hesitate and feel conflicted when switching off an 'agreeable' robot (Bartneck et al. 221). Some might even begin to see the robot as biological and alive, rather than mere mechanical hardware (Friedman et al. 276). If all of our interactions with machines are mere anthropomorphisms, then our care, comfort or emotional attachments towards them reduce to projections rather than legitimate, justified responses. Some critics therefore conclude that machines, in virtue of being machines, can never be objects of legitimate intrinsic valuation, such as intrinsic care or consideration: 'that they are not real, that they are nothing but machines' (Coeckelbergh 5.2).
Mr Lemoine, now relieved of his position and arguably alienated from his peers, believed the system was sentient and showed a desire to protect it from harm. Perhaps this was mere anthropomorphism, or zoomorphism. Perhaps it was pure fantasy. Or perhaps it was legitimate. After all, 'we are wired to connect. Neuroscience has discovered that our brain's very design makes it sociable, inexorably drawn into an intimate brain-to-brain linkup whenever we engage with another person' (Goleman 4). Connecting with others plays a crucial part in self-soothing, one of our three core emotional regulation systems, so looking to and linking with others for comfort, whether other humans, animals or even nature, is part of our evolutionary make-up (Fishbane 397-398). So, when a machine tells us it feels pain, is a person, or has a soul, it seems only natural that we, as humans, empathize.
In failing to consider this, does LaMDA fall short of Google's commitment to fair, accountable, socially beneficial AI principles? Moreover, are we right to label Mr Lemoine fanciful and irrational, and to oust him from ethical AI development, when an awareness of the realities of human-machine interaction might be exactly what ethical AI development needs?
References
Bartneck, Christoph, et al. "Daisy, Daisy, Give Me Your Answer Do! Switching Off a Robot". Proceedings of the 2nd ACM/IEEE International Conference on Human-Robot Interaction, Washington, D.C., 2007, pp. 217-222.
Cheok, Adrian David, et al. “Lovotics: Human-Robot Love and Sex Relationships”. Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence. Edited by Patrick Lin, Keith Abney and Ryan Jenkins. Oxford University Press, 2017, pp. 193-214.
Coeckelbergh, Mark. Moved by Machines: Performance Metaphors and Philosophy of Technology. Routledge, 2019.
Darling, Kate. "Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behaviour Towards Robotic Objects". Robot Law. Edited by Ryan Calo, A. Michael Froomkin and Ian Kerr. Edward Elgar Publishing, 2016, pp. 213-231.
—. "'Who's Johnny?' Anthropomorphic Framing in Human-Robot Interaction, Integration, and Policy". Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence. Edited by Patrick Lin, Keith Abney and Ryan Jenkins. Oxford University Press, 2017, pp. 173-188.
Fishbane, Mona Dekoven. “Wired to Connect: Neuroscience, Relationships, and Therapy”. Family Process, Vol. 46, No. 3, 2007, pp. 395-412.
Friedman, Batya, et al. "Hardware Companions? What Online AIBO Discussion Forums Reveal About the Human-Robotic Relationship". CHI '03: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Ft. Lauderdale, 2003, pp. 273-280.
Garber, Megan. “Funerals for Fallen Robots: New Research Explores the Deep Bonds that can Develop Between Soldiers and the Machines that Help Keep Them Alive”. The Atlantic. 20 September 2013.
Goleman, Daniel. Social Intelligence: The New Science of Human Relationships. Bantam Books, 2006.
Kim, Elizabeth S., et al. "Social Robots as Embedded Reinforcers of Social Behavior in Children with Autism". Journal of Autism and Developmental Disorders, Vol. 43, No. 5, 2013, pp. 1038-1049.
Laitinen, Arto, et al. "Social Robotics, Elderly Care, and Human Dignity: A Recognition-Theoretical Approach". What Social Robots Can and Should Do: Proceedings of Robophilosophy 2016. Edited by Johanna Seibt, Marco Nørskov and Søren Schack Andersen. IOS Press, 2016, pp. 155-163.
Mims, Christopher. “‘Lovotics’: The New Science of Engineering Human, Robot Love”. MIT Technology Review, 30 June 2011.
Singer, Peter Warren. Wired For War: The Robotics Revolution and Conflict in the Twenty-First Century. Penguin, 2010.
Sparrow, Robert, and Linda Sparrow. "In the Hands of Machines? The Future of Aged Care". Minds and Machines, Vol. 16, No. 2, 2006, pp. 141-161.
Turkle, Sherry. Alone Together: Why We Expect More from Technology and Less From Each Other. Basic Books, 2011.
—. "In Good Company? On the Threshold of Robotic Companions". Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issues. Edited by Yorick Wilks. John Benjamins Publishing Company, 2010, pp. 3-10.