🔬 Research summary by Dr. Marianna Ganapini, our Faculty Director.
[Original paper by Marianna Ganapini and Enrico Panai]
Overview: This white paper looks at a specific type of AI technology: persuasive design empowered by AI, which we call AI-nudging (others call it hyper-nudging). We offer a risk-factor analysis of adopting this kind of technology in two specific contexts: social media and video games. Finally, we indicate some risk mitigation strategies to deal with the risks of AI-nudging for children and teens.
Introduction
Artificial intelligence is increasingly impacting how we make decisions and what we do. AI’s persuasive design (or AI-nudging) is a technology that pushes users to make certain choices and influences their reactions. Because of its effectiveness, this technology raises important ethical issues. Researchers and practitioners have asked: does AI persuasive design strip us of our autonomy and freedom? Is it a form of creepy manipulation?
We recognize that, at times, AI's persuasive design may be manipulative and may threaten our autonomy and self-determination. However, in our work, we resist the idea that a piece of technology can be judged as good or bad, permissible or impermissible, in isolation. It is part of our idea of an infrastructure of trust that AI tools should be considered and evaluated in the larger context in which they are immersed. It is not useful to condemn or try to fix a specific technology without considering the ecosystem it is part of. Therefore, our work offers a risk-factor analysis to guide our auditing framework in determining how multiple connected risk factors can add up to unacceptable risks and what to do about them.
Key Insights
AI nudging is a form of technology that employs a set of tools that use data and personal information to tailor design techniques to specific users. One key aspect of this technology is that it may produce results that were not expected or explicitly encoded by the designers. It is, therefore, a highly effective technology for influencing people's choices. It also risks being manipulative and limiting people's autonomy of choice.
We offer a risk-factor framework to understand how trust can be ensured when adopting this technology. We look at the various risk factors within an ecosystem and evaluate how these factors relate to one another. We define 'risk' as the probability of causing harm, 'high or unacceptable risk' as whatever is likely to produce significant harm, and 'harm' as the violation of key interests and rights (based on the work of the philosopher J.S. Mill).
In our risk-factor framework, AI-enhanced persuasive design and nudging represent one risk factor among many. These tools can cause harm because they are persuasive techniques that can manipulate users, impacting their well-being and causing physical, psychological, and financial harm. However, this technology can be used for good too. Hence, instead of focusing only on the technology in isolation, we look at how it connects with other risk factors to determine the overall risk level. This overall level of risk is determined through an approach grounded in distributed moral responsibility, which recognizes that multiple low-risk factors can connect to create a high-risk situation. This holistic analysis thus examines AI as part of our social, economic, and psychological environment.
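The paper describes this combination of factors conceptually rather than as a formula, but a minimal sketch can make the idea concrete. In the Python below, the factor names, probability and severity values, and the 0.5 threshold are illustrative assumptions, not values from our analysis; the only point is that several individually low-risk factors can combine into a high overall risk.

```python
# Illustrative sketch only: the white paper describes risk combination
# conceptually and does not specify a scoring formula. Factor names,
# probability/severity values, and the 0.5 threshold are hypothetical.

from dataclasses import dataclass

@dataclass
class RiskFactor:
    name: str
    probability_of_harm: float  # estimated probability this factor causes harm (0-1)
    severity: float             # estimated severity of the resulting harm (0-1)

    @property
    def score(self) -> float:
        # "Risk" as the probability of causing harm, weighted by severity.
        return self.probability_of_harm * self.severity

def overall_risk(factors: list[RiskFactor]) -> float:
    """Probability that at least one factor produces harm, treating factors
    as independent. Several individually low scores can still combine into
    a high overall value."""
    no_harm = 1.0
    for factor in factors:
        no_harm *= 1.0 - factor.score
    return 1.0 - no_harm

ecosystem = [
    RiskFactor("AI-driven nudging", 0.40, 0.60),                  # score 0.24
    RiskFactor("vulnerable users (children/teens)", 0.45, 0.60),  # score 0.27
    RiskFactor("open, hard-to-monitor environment", 0.40, 0.50),  # score 0.20
]

risk = overall_risk(ecosystem)
print(f"Overall risk: {risk:.2f}")  # ~0.56: high, even though each factor is low
print("Unacceptable risk" if risk > 0.5 else "Acceptable risk")
```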
Our work focuses on two use cases: social media and video games. The fact that, in these contexts, persuasive design is directed at teens and children is a key element of our analysis. Vulnerable users are to be protected, and any technology that serves them should be examined very carefully. This is because children and teens typically have a less developed capacity for understanding and decision-making. Because their cognitive structures are still developing, children and teens are particularly vulnerable to the manipulative power of nudges. They may find it hard to resist nudges and to make choices that preserve their well-being. Therefore, directing any nudging at them heightens the level of risk and requires risk mitigation to be in place. Risk mitigation is a matter of either eliminating (or reducing) the presence of those combined factors that are likely to produce harm or finding ways to prevent the harm itself by adopting various safety nets. Thus, when children are involved, one possible risk mitigation strategy for AI nudging is to ensure parents are constantly informed and can make decisions that prevent any harm to the child. More concretely: one risk mitigation tool is to signal to parents when AI-nudges are used on children's phones or to flag to them when their children have spent more than a certain number of hours on their video games.
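As a rough illustration of that last point, here is a minimal sketch of a parental-notification safety net. The session fields, the two-hour limit, and the alert wording are assumptions made for the example; an actual implementation would hook into the platform's parental-control mechanisms rather than a standalone script.

```python
# Hypothetical sketch of the mitigation described above: informing parents when
# AI nudging is active on a child's device or when play time passes a threshold.
# The threshold and data model are illustrative assumptions, not part of the paper.

from dataclasses import dataclass

DAILY_PLAYTIME_LIMIT_HOURS = 2.0  # assumed limit, chosen only for illustration

@dataclass
class ChildSession:
    child_name: str
    hours_played_today: float
    ai_nudging_active: bool

def parental_alerts(session: ChildSession) -> list[str]:
    """Return the messages a parent should receive for this session."""
    alerts = []
    if session.ai_nudging_active:
        alerts.append(
            f"AI-driven nudging is currently active in {session.child_name}'s app."
        )
    if session.hours_played_today > DAILY_PLAYTIME_LIMIT_HOURS:
        alerts.append(
            f"{session.child_name} has played {session.hours_played_today:.1f} hours "
            f"today (limit: {DAILY_PLAYTIME_LIMIT_HOURS:.1f})."
        )
    return alerts

# Example: both conditions trigger, so the parent receives two alerts.
session = ChildSession("Alex", hours_played_today=2.5, ai_nudging_active=True)
for message in parental_alerts(session):
    print(message)
```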
To be sure, users' age, level of understanding, and decision-making skills are not the only relevant risk factors in these contexts (social media and video games). Other risk factors relate to the environment of which this technology is part. For instance, some contexts are too open and difficult to monitor: they allow users to interact and share content without any supervision. Social media presents an open environment with many variables that may be impossible to control: multiple agents (both human and artificial) are causally responsible for creating the content and the experience on the platforms. Multiple types of receiving agents are also in play, who may use and interpret the content on the platform in radically different ways.
Furthermore, an AI-powered nudging system makes this environment even more open and unstable because it introduces further variables: nudging tools not designed for the purpose will create content that cannot be easily controlled, allowing for more unknowns and anomalies. In this unstable environment, security may be hard to ensure, and harmful content can proliferate. An open environment therefore presents a clear risk to users' physical, psychological, and financial well-being. If these users are children, as in the cases we examined, the level of risk absolutely demands that mitigation strategies be adopted.
Between the Lines
Defusing high-risk situations by having safety nets in place to make persuasive design less harmful is necessary if we want AI to be employed responsibly. In our work, we have offered an audit framework to do just that, and we hope others will follow us in offering a systematic risk-factor analysis to build actionable guardrails around AI.