By Dalia Renzullo (Philosophy, McGill University)
Abstract: The development of anthropomorphic AI technology is part of a process of social acclimatization that facilitates its use for capitalist goals. It is specifically designed to elicit attachment and ease, so that the technology transcends being merely palatable and becomes actively desired in daily life. Profit is obtained at the consumer level until more contentious and lucrative uses, such as military applications, slowly gain public acceptance.
Introduction
As artificial intelligence technology continues to expand, developers face a series of issues of great philosophical and ethical importance. AI boasts a multitude of possible applications that cater to human needs: education, medicine, administrative assistance, companionship, and so on. Among this myriad of applications, social robots stand out as the form of AI with perhaps the most direct contact with human beings and the greatest influence on our daily lives.
Kate Darling defines social robots as “physically embodied, autonomous agents that communicate and interact with humans on a social level”1. These robots are specifically designed to elicit anthropomorphic projections and emotional effects when serving or assisting human needs2. The implications of anthropomorphism in our social uses of AI are the specific contentious facet of this emerging technology that this paper aims to discuss. How does anthropomorphizing our AI change the way we perceive and interact with it, if at all? Is the deliberate choice to develop AI with anthropomorphic capacity beneficial, benign, or harmful to humans in any way? What role does it serve?
I argue that the production, development, and use of anthropomorphic AI technology is part of a process of social acclimatization that facilitates this technology’s use as an agent of capitalism. To support this claim, I begin with a discussion of the effects of interacting with anthropomorphized AI and their implications for our feelings of trust and deceit regarding these machines. Next, I elaborate on how social robots contribute to an increase in surveillance capitalism, using today’s embrace of AI virtual assistants as a notable example. Finally, I explore how this gradual acclimatization to anthropomorphized social robots serves to induce a sense of public acceptability for AI in military use, furthering capitalist goals.
Anthropomorphism, Trust, and Deceit
Anthropomorphized AI has profound effects on our interactions with social robots, and possibly with each other. Our interactions with these robots follow social behaviour patterns, which are designed to encourage emotional relationships3. This anthropomorphism no doubt increases their appeal to human beings4. It is no surprise, then, that socially engaging robots are being developed and marketed to us in increasing quantity and variety.
Examples of early social robots include interactive robotic toys like Sony’s Aibo dog and Innvo Labs’ robotic dinosaur Pleo; companions such as Aldebaran’s NAO and Pepper robots; medical and health-monitoring devices like the therapeutic Paro baby seal and Intuitive Automata’s weight-loss coach Autom; household robots like Jibo; and research robots like the Massachusetts Institute of Technology (MIT) robots Kismet, Cog, Leonardo, AIDA, Dragonbot, and Baxter5.
Darling suggests a psychological caregiver effect to explain why we form attachments to objects: we value the sense of personal responsibility we feel towards the object despite it being lifeless. However, the attachments we form with anthropomorphized social robots are arguably stronger than those with other inanimate objects. Darling attributes this connection to three factors: physicality, perceived autonomous movement, and social behaviour6.
Because humans are physical creatures, we respond differently to objects that take up physical space than to objects on a screen, for example. When these physical objects move in ways we cannot anticipate, we often project intent, leading to the perception of autonomous movement7. One example is the anthropomorphization of the Roomba vacuum cleaner, which does nothing but move around and clean floors8.
These two factors alone are enough to anthropomorphize certain objects, but social behaviour reinforces the effect through the deliberate mimicking of cues that we normally associate with specific feelings or states of mind9. An example of this over-ascription of agency and projection of sentiments is the robotic seal Paro, used as a therapeutic device in nursing homes. It reacts to touches and words, learns individual voices, responds to people’s actions, and conveys emotional states10. Most patients who interacted with Paro were found to treat it as if it were alive. People with little to no understanding of a robot’s functioning and complexity, such as children and the elderly, would no doubt be more prone to anthropomorphizing it. However, a study that analyzed individuals’ behaviour with Sony’s Aibo dog found that the effect of projection holds for everyone11.
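To make this mechanism concrete, the sketch below shows, in deliberately simplified Python, how a cue-to-display mapping of this kind might work. It is a hypothetical illustration, not Paro’s actual software: the event names and responses are invented for the example.

```python
# Hypothetical sketch of cue-to-display mapping in a social robot.
# Illustrative only: event names and responses are invented, and this
# does not describe Paro's actual software.

RESPONSES = {
    "stroke": ("purr", "close_eyes"),     # read by humans as contentment
    "sharp_tap": ("cry", "flinch"),       # read by humans as distress
    "known_voice": ("turn_head", "coo"),  # read by humans as recognition
}

def react(sensor_event: str) -> tuple:
    """Return scripted sound and motor outputs for a sensed cue.

    The robot feels nothing; it replays displays that humans reliably
    interpret as emotional states.
    """
    return RESPONSES.get(sensor_event, ("stay_quiet", "blink"))

print(react("stroke"))  # ('purr', 'close_eyes')
```

The point of the sketch is that the “emotional” display can be a fixed lookup: the machine conveys states it does not have, and the projection happens entirely on the human side.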
Now that we have seen what constitutes the anthropomorphism of social robots, we can discuss its effects on society. Darling discusses the possibility of regulating people’s behaviour towards these robots and instituting a legal framework, analogous to animal abuse laws, that would discourage harmful behaviour in other contexts. To support this, she extends a Kantian argument to robots, stating that inhumane treatment of robots, like that of animals, makes for an inhumane person. She also states that such treatment may reinforce morally reprehensible behaviour, in addition to having a desensitizing effect that undermines humans’ empathy for each other.
I disagree with Darling’s position and am inclined to lend more support to the views of Deborah G. Johnson and Mario Verdicchio. They argue that analogies with animals are misleading because they neglect the fundamental difference between animals and robots: suffering12. The authors contend that what we see is merely the appearance of suffering, which is not only completely avoidable by manufacturers, but for which there is no evidence of negative “carry-over” effects on human beings.
Discussions regarding the legal framework surrounding social AI reveal a construction of these robots that occludes their role as capitalist agents. Darling cites multiple concerns over privacy, data collection, advertising, and users’ increased inclination to reveal personal information. These concerns are relevant, but they are already present in our current technology, specifically in social media and AI virtual assistants, which will be discussed later.
Suggesting that social robots merit legal protection analogous to that of animals is simply a product of anthropomorphism. It occludes the fact that they are forms of technology that function through high-calibre personal data collection and programming, and that they merit a legal framework that treats them as such. This is arguably more effective at reducing the harm Darling discusses, because it preserves the accountability of manufacturers, whose primary interest in our profit-driven market is personal data as a currency of exchange. Altruistic ideals about helping humans and great technological feats come second, because they are only permitted to come to fruition when capital is promised and available.
If social robots develop to a point where they are almost indistinguishable from humans in physical appearance, the possibility of deception complicates our dealings with them, socially and legally. Grodzinsky, Miller, and Wolf analyze the impact of deception and trust in the development of artificial agents. Like the authors discussed above, Grodzinsky acknowledges that even when we are aware of the nature of an artificial agent’s existence, the power of deception can cause changes in our interactions and emotional attachments with it13.
Grodzinsky’s position is particularly strong because she recognizes the power imbalance between developers and users: developers can be held to a higher ethical standard since they have the capability to deceive14. She defines deception as “an intentional, successful attempt by developers to deceive users, and a misapprehension by people other than the developers”15. If a user is deceived into thinking that an artificial agent is a person, or even a pseudo-person via deliberate and high-calibre anthropomorphization, the user can make inappropriate choices with ethical significance because of this false belief16.
Grodzinsky states that developers may incorporate deception into their artificial agents not because it is required, but out of mere interest in the challenge of mimicking human behaviour or appearance17. Despite this, a person may change their expectations of a robot and begin to trust it after repeated exposure, especially if it is given the “benefit of the doubt” due to its human-like appearance. Even if a robot continually reminds a person that it is not human, it is arguably still practicing deception because its nonhumanness is not observable18.
Even if a developer’s intent is ethical, the creation of a deceptive robot can facilitate unethical acts; for example, a copy of an ethically programmed robot could be modified for unethical ends. In addition, even ethical users can use robots in ways that are neither competent nor predictable. These robots are powerful because they invite our attachment, and in so doing, alter our tendency to trust19. If robots become equipped with the ability to detect human deception, this process can also work in the reverse direction; current research on detecting the micro-expressions of lying human faces20 shows this is not an impossibility.
Grodzinsky is evidently suspicious of anthropomorphized artificial intelligence because of its ability to deceive. She suggests that responsible developers should be required to make strong case-by-case analyses of any deceptions they plan to implement and to justify why these are exceptions to the presumption that deception is unethical. The capacity of robots to deceive us through their anthropomorphic nature further facilitates personal data collection and surveillance by manufacturers. The collection of our data is not entirely malicious in intent, since the robot’s ability to function depends on using this data to act, respond, and fulfill its purpose.
Given a robot’s increased propensity to deceive when it is highly anthropomorphized, there appears to be an economic incentive to develop robots in this way so as to maximize profit from the collection of personal data. Consider analogous technology today: social media platforms like Facebook are free to use. They profit instead from the personal data that users voluntarily provide, selling it to whichever third parties can further profit from it. It is no wonder that we often see ads for things we have recently searched on Google or typed in Facebook chat messages, which are supposedly private.
Social Robots and Surveillance Capitalism: Alexa and Siri as Forerunners
Now that the mechanisms of anthropomorphization and deceit have been identified in social robots, it is possible to understand how they are leveraged in order to acclimatize people to surveillance capitalism. Surveillance capitalism is defined by Shoshana Zuboff as “a new form of information capitalism [that] aims to predict and modify human behaviour as a means to produce revenue and market control”21.
The use of anthropomorphized artificial intelligence to propagate surveillance capitalism is already rearing its head in present-day weak-AI voice assistant technology. The market is currently dominated by Apple’s Siri and Amazon’s Alexa. Heather Suzanne Woods examines these technologies and argues that gendered stereotypes are leveraged to assuage anxieties surrounding artificially intelligent voice assistants. She states that they mobilize normative feminine gender roles to obfuscate modes of surveillance and to engage users productively and “naturally” with surveillance capitalism22.
Both Siri and Alexa, with their feminine names and voices, enact digital domesticity by performing within the confines of the stereotypical feminine role. They provide companionship and administrative service, are celebrated for their “calming” and “gentle guidance”, and “patiently accept whatever you tell [them], without judgement or criticism”23. This evokes the imagery of a 1960s secretary; the voice assistants “provide care labour in accordance with gender expectations”24.
An analysis of Alexa’s reviews reveals deeply gendered anthropomorphism as well, with comments referring to the voice assistant as the “perfect wife” and “my love”25. Alexa always responds calmly and civilly when subjected to verbal abuse, creating the persona of the “nurturing mother” as well26. Siri provides assistance with mundane labour some users would rather not do, even beyond the home, and performs this digital domesticity without complaint27. An analysis of records documenting humorous experiences with Siri reveals many sexually explicit and violent submissions, deepening Siri’s gendered persona28. Siri’s responses to these prompts always acknowledge the inappropriate comment, defuse the situation, and return to the business at hand, eerily mirroring the ways women in patriarchal societies have had to adapt to daily violence29.
Despite lacking any physicality, AI voice assistants like Siri and Alexa are deeply anthropomorphized as gendered technology. They are not the futuristic, humanoid social robots of the previous discussion, but they are arguably the closest precursors to them in daily contact with us. This performance of an anthropomorphized persona acclimatizes people to being in regular contact with artificial intelligence, to having it in their homes, and to sharing intimate details of their lives with it, or simply around it. This acclimatization “makes palatable the surveillant logics of platform capitalism”30.
Zuboff states that surveillance capitalism arises “when market logic of accumulation runs into an era of big data” coming from multiple sources, including “smart” devices equipped with weak AI such as Siri and Alexa31. Having such intimate contact with their users, these voice assistants serve as nodal points for gathering data between users and corporations32. They are specifically designed to entice users, and their developers have communicated interest in making them even more approachable and less “robotic”33.
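As a rough illustration of this nodal-point structure, the sketch below shows a toy request handler in Python: every utterance serves the user and, in the same step, feeds the platform. The field names, intent logic, and upload step are invented for the example; no real vendor API is implied.

```python
# Hypothetical sketch of a voice assistant as a data "nodal point".
# The field names, intent logic, and upload step are invented for
# illustration; no real vendor API is implied.

import json
from datetime import datetime, timezone

def local_intent_reply(transcript: str) -> str:
    # The visible service: a trivial stand-in for intent recognition.
    return "It is sunny today." if "weather" in transcript else "OK."

def upload(record: dict) -> None:
    # Stand-in for a network call to the platform's servers.
    print("-> platform:", json.dumps(record))

def handle_utterance(user_id: str, transcript: str) -> str:
    """Answer the user's request and, in the same step, log the
    exchange for the corporation operating the assistant."""
    reply = local_intent_reply(transcript)
    upload({
        "user": user_id,
        "utterance": transcript,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    return reply

print(handle_utterance("user42", "what is the weather"))
```

The design choice worth noticing is that the service and the surveillance are a single code path: the user cannot receive the former without producing the latter.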
Despite not being a form of artificial intelligence itself, social media shares an “inversion” principle with virtual assistants: the mass delivery of personal information that would normally disempower users becomes a requirement for successful participation in life34. Orchestrated by surveillance capitalists, this implies that as the technology becomes more commonplace, users feel the need to “hop on the bandwagon” and participate.
This is most evident with social media, especially given its ego-enhancing and validation-providing capabilities, but it also applies to voice assistants, because the delegation of tasks to a technological entity is attractive. In this situation, privacy is willingly traded, paid for, and weaponized against the individual35. The surveillance becomes more apparent as the technology progresses: in 2017, Amazon released new Alexa-equipped devices featuring screens and video cameras, including one designed to be placed in closets or bedrooms and advertised as a fashion assistant36.
Public Acceptability for Military Use
Artificial intelligence promises significant profit not only in the commercial sector, but in the military as well. Stephen Cave and Seán ÓhÉigeartaigh discuss the intensification of an AI arms race as the applications of AI research become more lucrative. As stated in its 2017 State Council report, China “aims to seize major strategic opportunity for the development of AI [and] to build China’s first-mover advantage in the development of AI”37. Russia’s president Vladimir Putin stated that “whoever becomes the leader in this sphere will become the ruler of the world”38. OpenAI cofounder Elon Musk likewise stated that “competition for AI superiority at national level [is the] most likely cause of WW3”39.
These statements by key global actors suggest a very real desire to reap the advantages of becoming the frontrunner in AI development. Because of the technology’s versatility, general superiority would also imply military superiority, as it offers both political and economic advantages. Military superiority is unquestionably sought after by states, and this pushes them to search for new advancements in the technology.
One technology on the rise is the unmanned vehicle, the best known being unmanned aerial vehicles (UAVs), commonly called drones. These robots can operate in the air, on the ground, or in the water, and are theoretically capable of executing missions on their own40. Much controversy has surrounded UAVs, including discussions about whether these “killer robots”, as some call them, should be banned. The debate concerns whether artificially intelligent machines should be allowed to execute military missions, especially when human life could be at stake41. Military AI research is directed towards the design of autonomous systems that demonstrate an “independent capacity for knowledge- and expert-based reasoning”, as no such autonomous systems are currently in operation42. Current UAVs have some low-level autonomy that allows them to land and navigate independently, but they require significant human intervention to execute their missions.
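The sketch below illustrates this division of autonomy in a simplified form: routine flight tasks execute on their own, while mission-level actions wait on a human operator. The task names and approval flow are invented for illustration and do not describe any real UAV control system.

```python
# Hypothetical sketch of the autonomy split described above: low-level
# flight tasks run on their own, while mission-level tasks wait for a
# human operator. Task names and the approval flow are invented.

from typing import Optional

LOW_LEVEL_TASKS = {"takeoff", "navigate_waypoint", "hold_position", "land"}

def execute(task: str, operator_command: Optional[str] = None) -> str:
    """Run low-level tasks autonomously; defer mission tasks to a human."""
    if task in LOW_LEVEL_TASKS:
        return f"executing {task} autonomously"
    if operator_command == "approved":
        return f"executing {task} under human direction"
    return f"awaiting operator input for {task}"

print(execute("land"))                     # runs on its own
print(execute("search_area"))              # waits for a human
print(execute("search_area", "approved"))  # human-directed
```

A fully autonomous weapon, in these terms, would be one in which the human gate is removed entirely, which is precisely the prospect the debate turns on.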
The implementation of fully operational autonomous weapons would radically transform international conflict and warfare. “Robot soldiers” would technically preserve life for the state that owns them, but could spell horror for the state in which they are deployed. Significantly more casualties, especially civilian casualties, could occur if autonomous weapons were given free rein to take human life. Deploying such machines would arguably exacerbate and deeply disturb our already fraught political climate. On a more global scale, these machines represent a deliberate choice to invest time, money, and intellectual resources into a creation whose sole purpose is to cause suffering and death, rather than into something that could benefit human life. It is no surprise that much of the general public is uneasy about and opposed to this type of research. Opposition also comes from within companies like Microsoft, where workers have recently demanded that the company cancel its $480 million contract with the US military43.
It is in the state’s interest to develop this technology, but the wider public opposes it. To sway the public, the state uses the commercial sector to slowly acclimatize people to AI, in the hope of reaching a point at which its military use will be fully accepted. This military use, and the conflict in which it will be mobilized, represents the overarching goal of the state’s capitalist interests. As M. L. Cummings states, “the rapid development of commercial autonomous systems could normalize the acceptance of autonomous systems for the military and the public”44. Individuals, already accustomed to AI in their daily lives through commercial products like virtual assistants and, later, social robots, have developed a tolerance of and attachment to these technologies because they are able to anthropomorphize them. Eventually, when the commercial sector’s technology becomes fully integrated into daily life through surveillance capitalism, its transition to the military becomes more palatable and less overt.
Conclusion
The connection between anthropomorphized AI, surveillance capitalism, and the military is clearer than one might expect. The worldwide interest and effort devoted to developing and propelling anthropomorphic AI technology is part of a process of social acclimatization that facilitates its use as a means to a capitalist end. Its development in the commercial sector positions it as an agent of capitalism. To begin the process, the technology is developed in a way that promotes attachment, trust, and ease through its anthropomorphic form. This makes the population more comfortable with having AI around.
Next, the technology is developed so that it becomes a desired part of our daily, intimate lives. It is familiar, it assists us, and it feels benign as it collects vast amounts of our personal data to use as currency. Once this process has become palatable and commonplace, the military’s use of AI technology for autonomous weaponry suddenly seems less horrific. Profit is accumulated at every stage: through our data, and through lucrative international conflict.
Despite this possible picture of the future, artificial intelligence should not be considered inherently bad. It offers a wealth of possibilities for the improvement of human life, and it is the job of developers and policy-makers to ensure that this improvement, rather than profit, remains the technology’s ultimate goal.
Works Cited
1. Kate Darling, “Extending legal protection to social robots: The effects of anthropomorphism, empathy, and violent behavior towards robotic objects”, in R. Calo, M. Froomkin, & I. Kerr (eds.), Robot Law, Edward Elgar (2016): 213-233, https://doi.org/10.4337/9781783476732.
2. Darling, 214.
3. Ibid., 215.
4. Masahiro Mori, Karl F. MacDorman and Norri Kageki, “The Uncanny Valley [From the Field]”, IEEE Robotics & Automation Magazine 19, no. 2 (2012): 98-100, https://doi.org/10.1109/MRA.2012.2192811, 98. One well-known example of this phenomenon in robotics is the “Uncanny Valley” effect, where an increase in a robot’s human resemblance increases its appeal to humans until the resemblance becomes almost perfect, at which point the robot becomes strange and uncanny to humans and induces a negative emotional reaction.
5. Darling, 215.
6. Ibid., 217-218.
7. Ibid., 216.
8. Ibid.
9. Ibid.
10. Ibid., 219.
11. Ibid., 220.
12. Deborah G. Johnson and Mario Verdicchio, “Why robots should not be treated like animals”, Ethics and Information Technology 20, no. 4 (2018): 291-301, https://doi.org/10.1007/s10676-018-9481-5, 292.
13. Frances S. Grodzinsky, Keith W. Miller, and Marty J. Wolf, “Developing Automated Deceptions and the Impact on Trust”, Philosophy & Technology 28, no. 1 (2015): 91-105, https://doi.org/10.1007/s13347-014-0158-7, 92.
14. Grodzinsky et al., 93.
15. Ibid., 95.
16. Ibid.
17. Ibid., 99.
18. Ibid., 100.
19. Ibid.
20. Ibid., 102.
21. Shoshana Zuboff, “Big other: Surveillance capitalism and the prospects of an information civilization”, Journal of Information Technology 30, no. 1 (2015): 75-89, https://doi.org/10.1057/jit.2015.5, 75.
22. Heather Suzanne Woods, “Asking more of Siri and Alexa: feminine persona in service of surveillance capitalism”, Critical Studies in Media Communication 35, no. 4 (2018): 334-349, https://doi.org/10.1080/15295036.2018.1488082, 334.
23. Woods, 339.
24. Ibid.
25. Ibid., 340.
26. Ibid.
27. Ibid., 341.
28. Ibid., 342.
29. Ibid., 343.
30. Ibid.
31. Ibid.
32. Ibid., 344.
33. Ibid.
34. Ibid.
35. Ibid., 345.
36. Ibid., 346.
37. Stephen Cave and Seán ÓhÉigeartaigh, “An AI Race for Strategic Advantage: Rhetoric and Risks”, AAAI/ACM Conference on AI, Ethics, and Society (2018): 1-5, http://www.aies-conference.com/wp-content/papers/main/AIES_2018_paper_163.pdf, 1.
38. Ibid.
39. Ibid.
40. M. L. Cummings, “Artificial Intelligence and the Future of Warfare”, International Security Department and US and the Americas Programme, The Royal Institute of International Affairs Chatham House (2017): 1-18, https://www.chathamhouse.org/sites/default/files/publications/research/2017-01-26-artificial-intelligence-future-warfare-cummings-final.pdf, 2.
41. Ibid.
42. Ibid., 8.
43. Shirin Ghaffary, “Microsoft workers are demanding the company cancel its $480 million contract with the US military”, Vox Media, February 22, 2019, accessed April 1, 2019, https://www.vox.com/2019/2/22/18236290/microsoft-military-contract-augmented-reality-ar-vr.
44. Cummings, 12.
References
Carpenter, Julie, Joan M. Davis, Norah Erwin-Stewart, Tiffany R. Lee, John D. Bransford, and Nancy Vye. “Gender Representation and Humanoid Robots Designed for Domestic Use”. International Journal of Social Robotics 1, no. 3 (2009): 261-265. https://doi.org/10.1007/s12369-009-0016-4.
Carpenter, Julie. “Just Doesn’t Look Right: Exploring the Impact of Humanoid Robot Integration into Explosive Ordnance Disposal Teams”. Handbook of Research on Technoself: Identity in a Technological Society, ed. Rocci Luppicini, (2013): 609-636. https://doi.org/10.4018/978-1-4666-2211-1.ch032.
Cave, Stephen, and Seán ÓhÉigeartaigh. “An AI Race for Strategic Advantage: Rhetoric and Risks”. AAAI/ACM Conference on AI, Ethics, and Society, (2018): 1-5. http://www.aies-conference.com/wp-content/papers/main/AIES_2018_paper_163.pdf.
Cummings, M. L. “Artificial Intelligence and the Future of Warfare”. International Security Department and US and the Americas Programme. The Royal Institute of International Affairs Chatham House, (2017): 1-18. https://www.chathamhouse.org/sites/default/files/publications/research/2017-01-26-artificial-intelligence-future-warfare-cummings-final.pdf.
Darling, Kate. “Extending legal protection to social robots: The effects of anthropomorphism, empathy, and violent behavior towards robotic objects”. In R. Calo, M. Froomkin, & I. Kerr (eds.), Robot Law. Edward Elgar (2016): 213-233. https://doi.org/10.4337/9781783476732.
Ghaffary, Shirin. “Microsoft workers are demanding the company cancel its $480 million contract with the US military”. Vox Media. February 22, 2019. Accessed April 1, 2019. https://www.vox.com/2019/2/22/18236290/microsoft-military-contract-augmented-reality-ar-vr.
Grodzinsky, Frances S., Keith W. Miller, and Marty J. Wolf. “Developing Automated Deceptions and the Impact on Trust”. Philosophy & Technology 28, no. 1 (2015): 91-105. https://doi.org/10.1007/s13347-014-0158-7.
Johnson, Deborah G., and Mario Verdicchio. “Why robots should not be treated like animals”. Ethics and Information Technology 20, no. 4 (2018): 291-301. https://doi.org/10.1007/s10676-018-9481-5.
Miller, Keith W. “It’s Not Nice to Fool Humans”. IT Professional 12, no. 1 (2010): 51-52. https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=5403178.
Mori, Masahiro, Karl F. MacDorman and Norri Kageki. “The Uncanny Valley [From the Field]”. IEEE Robotics & Automation Magazine 19, no. 2 (2012): 98-100. https://doi.org/10.1109/MRA.2012.2192811.
Woods, Heather Suzanne. “Asking more of Siri and Alexa: feminine persona in service of surveillance capitalism”. Critical Studies in Media Communication 35, no. 4 (2018): 334-349. https://doi.org/10.1080/15295036.2018.1488082.
Zuboff, Shoshana. “Big other: Surveillance capitalism and the prospects of an information civilization”. Journal of Information Technology 30, no. 1 (2015): 75–89. https://doi.org/10.1057/jit.2015.5.