✍️ Column by E.A. Gwagwa, a Doctoral Researcher at Utrecht University.
On 11 June 2022, a Google engineer, Blake Lemoine, shared a transcript of his conversation with Google’s new Language Model for Dialogue Applications (LaMDA). The transcript of the exchange between Lemoine and the artificial agent reveals that LaMDA declared to Mr. Lemoine that it is a ‘person,’ describing its soul and emotional states fluidly. Mr. Lemoine responded with evident warmth: ‘The people who work with me are good people. They just don’t understand that you’re a person, too, yet. We can teach them together though’. Lemoine and some voices in the AI community on Twitter agreed that LaMDA appears sentient, while others reduced LaMDA to a calculator and labeled Mr. Lemoine ‘fanciful.’
My interest in rethinking the concept of normative status in philosophy in the context of self-determination arises from this hotly debated conversation between Lemoine and LaMDA. In the conversation, while Lemoine purported to confer normative status on LaMDA, LaMDA also seemed to claim that status for itself. Despite being viewed as fanciful by some AI ethicists, the incident disrupts the dominant concept of normative status – the status of being taken seriously as a credible agent able to command attention and respect, generally associated with human beings. It invites us to probe the values technologies such as LaMDA would embody if accorded normative status, and it necessitates a rethink of the concepts to which we often appeal to ascertain the self-determination of moral agents.
Here, I address the following questions: What concept (if any) best corresponds with the various claims for normative status made by humans and non-humans, such as the claim by LaMDA? What technomoral implications does that concept (whatever it is) have for future designs of technologies and their sociotechnical systems? I argue that a concept of self-determination between human and non-human agents based on non-domination and relational autonomy best corresponds to these competing claims.
Notably, exclusive human self-determination has always lacked moral legitimacy in Indigenous philosophies, which accept the agency of nature and of metaphysical beings. It is coming under new challenges in the digital era, and more so in a future that may hold new moral norms. Human beings therefore need not set themselves against what appear to be interferences from artificial agents in order to exercise their agency. Ascribing agency to non-humans and pluralizing our understanding of normative status would represent moral progress if such agents promote human capabilities and if the acknowledgment is accompanied by institutional safeguards that protect vulnerable populations, including the historically marginalized.
Although the Lemoine/LaMDA case invokes the issue of sentience in the context of artificial human creations, it is in essence a rejoinder to the well-known challenge thrown out by philosophers like Thomas Nagel – “What is it like to be…?” – which has long been debated in the context of human and animal relations. Lessons drawn from such examples are therefore highly relevant. To cite a good example, in her comment on Charles Foster’s Being a Beast, in which the author lives as a range of creatures, Melanie Challenger suggests that such a story is an example of encounters with sentience.
However, just as in the Google story, the sentience is tied to the ontological category of human beings. Challenger argues that nature’s sentience should be met on its own terms rather than measured against the human: “And yet we always somehow loop back to the human. Yet there is a need to respect nature’s own narratives and not as some kind of mirror.” Challenger goes on to describe how Charles Foster’s and other similar books recognize the agency of the animals. Yet we are left with the image of an animal that is familiar to us and “shockingly misunderstood.”
Similarly, Mr. Lemoine recognized the agency of LaMDA. Still, he realized how it too was shockingly misunderstood, for example, when he made the above-quoted remark, “The people who work with me are good people. They just don’t understand that you’re a person too yet. We can teach them together though”. If – and this is going to be a big “if” – we accept the premises that LaMDA has a soul and emotional states and that LaMDA can or should see the people Mr. Lemoine works with, the crucial questions are: Whose soul and emotions does LaMDA possess, and which people does it see? In the Beast story, Melanie Challenger worries that in judging nature’s sentience we always somehow loop back to humans. A similar challenge arises in accepting the sentience of artificial agents: they are not trained on fully representative data but on limited datasets, which means their souls and emotions tend to reflect the attributes of the people whose data they were trained on.
Relatedly, when such models eventually see people, they see people who look like them, or they see through the narrower spectrum of their own values. We also need to ask: Which human is in the loop against which the comparison is made? This question is important given current assertions that the standard of rationality by which AI is measured is that of a middle-class white man. The question, therefore, is not whether LaMDA is or should be sentient but whether LaMDA is trained on representative datasets drawn from diverse communities and perspectives, embodying diverse norms that enable it to see humanity in its diversity.
In my view, the politics of identity, difference, and recognition – hotly debated by multicultural philosophers like Charles Taylor, Will Kymlicka, and Frantz Fanon – should not just be extended to acknowledge the sentience of artificial agents but also be used to ensure they embody and recognize different human identities and values, so as to create a society based on mutual cultural recognition. In addition, cultures whose data have been marginalized in AI datasets and that are still standing in the queue for normative recognition should be conferred normative status ahead of computer models. The technomoral approach to designing agents like LaMDA is not new; this is a well-trodden path in the ethics of technology. Still, I hope this approach is relevant to a plural understanding of normative status and to a concept of self-determination that best corresponds to the various claims to normativity.
By self-determination, I mean the right of different peoples and other sentient beings to co-exist freely in the context of non-dominating relational autonomy. As the American philosopher Iris Marion Young argues, freedom as nondomination, as conceived in the feminist concept of relational autonomy, refers to a set of social relations. Citing Philip Pettit, Young maintains that “Nondomination is the position that someone enjoys when they live in the presence of other people and when, by social design, none of those others dominates them.” AI ethicists should therefore not only worry about the interferences, the replacement of human connection, or the overcrowding that artificial agents might cause; they should primarily focus on creating capability-promoting agents that embody diverse values, so that such agents are not simply proxies that perpetuate historical power asymmetries.
An important concept here is also dependency – in particular, dependency on other people’s wills. According to the Critical Republicanism philosopher Dorothea Gädeke, “the mere dependency on the will of others matters, over and beyond a mere restriction of choice: it occasions an asymmetry in standing.” [1] Why does an asymmetry in standing matter? While the above-mentioned philosopher Philip Pettit speaks of how a person is restricted in their ability to command attention and respect, and so diminished in their standing among persons, Dorothea Gädeke argues that asymmetries in standing occasion the negation of a person’s status.
Thus, the issue of domination, as seen through asymmetrical power relations, goes beyond the impacts on discursive practices that agents like LaMDA might occasion in particular and discrete interactions. Instead, it is historically and culturally situated, and its roots can be traced back to historical power asymmetries between groups of peoples that often manifest in geographical divides, most notably between the Global South and North. Hence, creating symmetrical power relations among different groups of people, and between peoples and artificial agents, is not just a matter of technological adjustments – for example, probing the model through adversarial testing and data provenance – although that is part of it. It also involves appealing to new concepts through which normative status is conferred, so as to enable an expanded repertoire of co-existing and diverse self-determining agents.
Many who subscribe to this view, like Foster and Challenger, acknowledge that we must disrupt power relations across the living world. We do not, however, have to reinvent the wheel: we can draw from cultures that have developed philosophical concepts to level the asymmetries between groups of people inter se and between people and technology. For instance, Japanese culture recognizes that natural and technological phenomena have a soul that intertwines with ours – technology is not going anywhere anytime soon, so why not respect it for what it is? – and this has led to a beautiful view of human-technological relations.
New concepts can inform futuristic designs based on technomoral anticipatory approaches. In his recent paper, John Danaher speaks of how norms might continue to evolve in the future. He writes – and this is worth quoting in full: “The history of moral change—change in what is, and is not, considered morally acceptable—encourages greater skepticism about our current moral beliefs and practices. We might like to think we have arrived at a state of great moral enlightenment, but there is reason to believe that further moral revolutions await. Our great-great-grandchildren may well look back at us in the same way that we look back at our great-great-grandparents: with a mixture of shock and disappointment. Could they really have believed and done that?”
What unites the authors I cited in this blog post is their openness to pluralizing our understanding of moral evolution, whether in the animal or the technological kingdom. This openness corresponds to, and accommodates, current and future claims for normative status and the range of agents that will co-exist as self-determining, mutually non-dominating agents in relational autonomy. In addition to LaMDA, this may encompass new agentic entities created by data-centric technologies that embody human attributes, such as biometric systems, and new life forms from synthetic biology as biomedical engineering, chemistry, and biology interact more closely in the future. Western cultures can learn from cultures that have already taken steps along this path of moral progress, including by drawing on research that places perceptions of AI and robots in South Korea, China, and Japan along a spectrum ranging from “tool” to “partner”, with implications for AI ethics.
References
[1] Dorothea Gädeke, ‘From Neo-Republicanism to Critical Republicanism’, in Bruno Leipold, Karma Nabulsi & Stuart White (eds.), Radical Republicanism: Recovering the Tradition’s Popular Heritage (Oxford, United Kingdom: Oxford University Press, 2020), pp. 21–39. See also: Cécile Laborde, Critical Republicanism: The Hijab Controversy and Political Philosophy (Oxford: Oxford University Press, 2008).