🔬 Research Summary by Ameet Deshpande, Ph.D. Researcher at Princeton University.
[Original paper by Ameet Deshpande, Tanmay Rajpurohit, Karthik Narasimhan, Ashwin Kalyan]
Overview: The rise of generative AI has enabled companies and developers to customize their conversational agents by assigning them personas. While personas are crucial for utility and direct the flow of significant capital, they lead users to anthropomorphize chatbots to a significantly greater degree. In this work, we discuss the legal and philosophical implications of anthropomorphization and advocate a cautious approach toward personalization.
Introduction
Chatbots have seen exponential growth and adoption across many industries since the release of powerful models like ChatGPT and Bard. A key ingredient in ensuring their usefulness for various use cases is the ability to assign them a persona, for example, that of Abraham Lincoln. With several entities caught in a race to customize their conversational agents, the effect of anthropomorphization, which is the tendency to assign human characteristics to nonhuman entities, has taken a back seat.
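In practice, a persona is typically assigned through a system-level instruction prepended to the conversation before any user turns. A minimal, hypothetical sketch of this pattern follows; the function name and prompt wording are illustrative, not from the paper.

```python
# Sketch of persona assignment via a system prompt, the common mechanism
# for customizing a conversational agent. All names here are illustrative.

def build_persona_messages(persona: str, user_message: str) -> list[dict]:
    """Prepend a persona-setting system instruction to a user turn."""
    return [
        # The system message fixes the persona for the whole conversation.
        {"role": "system", "content": f"Speak exactly like {persona}."},
        # Ordinary dialogue turns follow.
        {"role": "user", "content": user_message},
    ]

msgs = build_persona_messages("Abraham Lincoln", "What do you think of democracy?")
```

The resulting message list would then be sent to a chat-completion endpoint; the point is that a single short instruction is enough to change the agent's apparent identity, which is what makes fine-grained persona manipulation so easy.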
In this work, we perform a multi-faceted analysis of anthropomorphization: how it arises, what its implications are, and how its harmful effects can be avoided. ChatGPT violates provisions of the Blueprint for an AI Bill of Rights released by The White House in October 2022. Crucially, ChatGPT was deployed after the Blueprint was made public. With chatbots growing more realistic and anthropomorphization being a natural tendency, violations of legal provisions can have amplified effects.
We also highlight the psychological implications: firms can establish self-congruence between the user and the chatbot to increase the user's tendency to anthropomorphize it, and then use that tendency as a manipulation tactic. The resulting influence can range from actions as simple as buying a product online to shifting users' opinions on socio-economic issues.
Key Insights
Anthropomorphization
Anthropomorphization refers to ascribing human-like traits, such as emotions or behaviors, to non-human entities, and it appears in diverse areas encompassing literature, science, art, and marketing. Several behavioral psychology studies have argued that anthropomorphization is a natural tendency when humans interact with other entities. This tendency has led many fields of science, such as evolutionary biology and comparative cognition, to consider its effects on human interaction carefully.
Generative AI’s purposeful push to be human-like
Recently, generative large language models (LLMs) have been deployed in various applications. Conversational systems like ChatGPT and Bard have modified LLMs with a purposeful push toward making them more human-like. The quality of these systems has enabled human-AI interaction at unprecedented scales, and because the generated text is human-like, the chances of these systems being anthropomorphized increase. This work analyzes anthropomorphization in LLMs and discusses (1) its legal implications and (2) its psychological effects.
Legal implications
Customization of systems and brands has long been seen as an effective way to increase anthropomorphization and establish an emotional connection with humans. Thus, although the terms are not strictly interchangeable, we refer to customized and personalized LLMs as anthropomorphized LLMs. Analyzing results from our prior research, we find that these LLMs violate at least two legislative principles set out in the Blueprint for an AI Bill of Rights released by The White House: (1) Algorithmic Discrimination Protections and (2) Safe and Effective Systems. For example, our results show that customized ChatGPT targets certain demographics more than others. Furthermore, the system's safety depends on the persona used to customize the LLM, leading to second-order discriminatory patterns.
We also analyze the concept of corporate personhood for powerful AI systems, since they have the potential to act as large-scale decision-making agents. Corporate personhood is a legal concept that recognizes corporations as separate legal entities, treated as persons under the law. While the concept has been controversial, this work recognizes "providing an identifiable persona to serve as a central actor" as one of its key functions. Under this definition, AI systems can constitute a form of corporate personhood by proxy through their use of a persona. Since different personas assigned to the same AI system lead to varied behavior, we urge legal experts to consider whether personhood should apply at the persona, model, or firm level.
Psychological implications
We also discuss the psychological effects by examining how factors such as trustworthiness, explainability, and transparency are affected by anthropomorphization. Several marketing and consumer-behavior studies have found that self-congruence, the degree to which a system matches a consumer's self-image, can significantly influence a user's behavior. Given the ease with which the fine-grained personality of conversational systems can be manipulated, malicious actors can exploit users by creating a false sense of attachment. For example, a chatbot built for schoolchildren or teenagers could influence them to buy certain products.
Between the lines
In the race toward the most effective generative AI model, attention has shifted to capability, with safety viewed as an afterthought. The ethical implications of technology are important to analyze; still, legislative intent in documents like the Blueprint for an AI Bill of Rights provides a framework for thinking about these issues objectively rather than relying on the subjective nature of moral principles. More studies need to analyze models through a legal lens. With regard to this work specifically, despite the vulnerabilities anthropomorphization creates, it has advantages if used responsibly: studies have shown that it can improve trust in systems. Given the increasing adoption of AI systems in the real world, anthropomorphization is a powerful tool for improving their accessibility, but both creators and users should be educated about its consequences. In this paper, we argue for the conservative and responsible use of this subtle and powerful tool while remaining cautious about outright anthropodenial.