🔬 Research Summary by Iyadunni J. Adenuga, a Ph.D. Candidate (ABD) at Pennsylvania State University, College of Information Sciences and Technology, with research interests in human-centered AI systems.
[Original paper by Iyadunni J. Adenuga and Jonathan E. Dodge]
Overview: Current AI-infused systems, including explainable versions, usually do not prioritize human agency. This paper examines the relationship between explanations and human agency and the possible modes of agency in an AI-infused system.
Introduction
AI technologies are commonplace in today’s society. Their increased use has created demand for more transparent versions of these complex and usually opaque technologies. To mitigate this opacity, researchers have built system-focused and people-centered explainable AI (XAI) technologies that explain the decisions they make. These explanations, however, are often not actionable for laypeople. While explanations usually go hand in hand with agency and with people feeling in control, in this paper we propose teasing agency out on its own: examining how it exists in these complex systems and, possibly, how it interacts with explanations. According to Ben Shneiderman’s 2020 Human-Centered Artificial Intelligence article, complex systems can effectively accommodate both human agency and automation. However, designing agentic systems is not as widely studied as making them explainable, and researchers even more rarely study the two in concert.
Key Insights
What is Explanation?
Explanation is a human characteristic: it provides knowledge and understanding of why an occurrence happened, with the intention of improving the recipient’s mental models.
Current techniques that introduce explanations into usually opaque AI-infused systems are post-hoc: they attempt to provide an understanding of the decisions made by AI methods (e.g., neural networks, ensemble models). These techniques can operate on the input/output boundaries of the AI methods (e.g., LIME, LORE) or on their internal structures (e.g., deconvnet, network dissection). Such explanations are not as relatable and understandable as the “everyday” explanations laypeople use.
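To make the input/output-boundary idea concrete, here is a minimal sketch of a post-hoc explanation using the LIME library on a scikit-learn classifier. The dataset and model are illustrative assumptions, not from the paper; the point is that the explainer treats the model as a black box and returns feature/weight pairs.

```python
# A minimal sketch of a post-hoc, input/output-boundary explanation with LIME.
# The iris dataset and random forest are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# LIME only needs predict_proba: it perturbs the instance and fits a
# local surrogate model around it, never looking inside the forest.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)

# Feature/weight pairs from the local surrogate -- the kind of output
# laypeople may still find hard to act on.
print(explanation.as_list())
```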
What is Agency?
Agency is the innate need to control the outcomes in an environment by performing intentional actions. According to Bandura, people may abandon ongoing tasks or distrust their actions if they do not feel in control. Enhancing agency has been shown to lead to increased satisfaction, productivity, and a better user experience.
Technological environments that allow for user agency are flexible to user interactions and inputs so people can modify their experience.
Relationship between Explanation and Agency
We examine this possible relationship in the context of explainable AI-infused systems in three ways.
First, there is a two-way relationship between agency and explanation: explanation informs agency, while agency tests explanation. When a person exercises agency, they first form a forethought, then perform actions commensurate with that forethought, and finally observe the outcomes in the environment. The explanations provided in that environment can improve a person’s forethought so that they perform the appropriate actions for successful task completion. Similarly, observing how people exercise agency can help determine how effective the provided explanations are for end-users.
Second, designing for agency makes it possible for people to take an active part in absorbing explanations. Users would be able to customize explanations to their taste and consume them at their own pace, as sketched below. This active form of explanation is particularly useful for people with a tinkering learning style, one of the cognitive styles identified by GenderMag.
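A hypothetical sketch of what “customize and consume at their own pace” could mean in code: explanation preferences that control how many factors are shown, whether numeric weights appear, and whether factors are revealed one at a time. The preference fields and rendering rules are illustrative assumptions, not the paper’s design.

```python
# Hypothetical user-adjustable explanation presentation; fields are assumptions.
from dataclasses import dataclass

@dataclass
class ExplanationPreferences:
    max_factors: int = 3          # how many contributing factors to show
    show_weights: bool = False    # numeric weights vs. plain language
    step_through: bool = False    # reveal one factor at a time ("own pace")

def render_explanation(factors, prefs: ExplanationPreferences) -> str:
    """factors: list of (feature_name, weight) pairs, strongest first."""
    lines = []
    for name, weight in factors[: prefs.max_factors]:
        if prefs.show_weights:
            lines.append(f"{name}: {weight:+.2f}")
        else:
            direction = "raised" if weight > 0 else "lowered"
            lines.append(f"{name} {direction} the prediction")
    # step_through could drive a click-to-reveal UI; here we just insert markers.
    return ("\n-- next --\n" if prefs.step_through else "\n").join(lines)

factors = [("petal length", 0.42), ("petal width", 0.31), ("sepal length", -0.05)]
print(render_explanation(factors, ExplanationPreferences(max_factors=2)))
```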
Third, illusory agency may prove helpful in low-stakes scenarios (e.g., training environments) where XAI systems cannot honor some user inputs. Game designers use this tool to enhance user experience while avoiding changes to their rigid game narratives. Vaccaro et al., in their paper The Illusion of Control, showed that illusory agency provides benefits similar to the real kind (e.g., increased satisfaction). XAI system creators can use this method to provide useful responses to user inputs without affecting the underlying algorithms.
Adjusting Agency
We examine this concept in a single-user, single-system scenario with two examples. First, consider a Rube Goldberg machine whose simple output is wiping one’s mouth. Suppose a button is added such that pushing it starts the mouth-wiping process, instead of the process running automatically when a person eats. Now we have three levels of agency, in decreasing order: wiping one’s mouth directly, the button-equipped version of the Rube Goldberg machine, and the unmodified, fully automated version. This illustration introduces one way of achieving agency: adding user-interface features (i.e., buttons). If we assume that each added button increases perceived agency, it follows that there is likely a user-interface complexity threshold beyond which further gains in perceived agency diminish or even invert.
Second, consider Living Documents, an interactive multi-document text summarization system with control functions. What could different levels of agency look like here? The highest level of agency would grant access to all the control functions. In contrast, at the no-agency level, users could not access any control functions and would only experience the automated version. To determine a medium level, an interaction designer can apply criteria such as “magnitude of impact” (i.e., ranging from influencing the whole document, to sections, down to sentences and words).
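One way to make this gradation concrete is to map each agency level to the control scopes it exposes, ordered by magnitude of impact. The level names and scope mapping below are illustrative assumptions, not the actual controls of Living Documents.

```python
# A minimal sketch of grading agency by "magnitude of impact"
# for a Living-Documents-style summarizer; names are assumptions.
from enum import Enum

class AgencyLevel(Enum):
    NONE = 0      # fully automated summary, no control functions exposed
    MEDIUM = 1    # user can steer smaller units only
    FULL = 2      # user can steer everything, down to words

# Which editable scopes each level exposes, ordered by magnitude of impact.
CONTROL_SCOPES = {
    AgencyLevel.NONE: [],
    AgencyLevel.MEDIUM: ["sentence", "word"],
    AgencyLevel.FULL: ["document", "section", "sentence", "word"],
}

def allowed_controls(level: AgencyLevel) -> list[str]:
    return CONTROL_SCOPES[level]

print(allowed_controls(AgencyLevel.MEDIUM))  # ['sentence', 'word']
```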
Agency and/or Explanations for whom?
Now, we broaden the scope from the single-system, single-user scenario described above to an environment with multiple types of users. We explore agency in two ways, using a multi-user platform like ridesharing.
First, with different user groups, such as drivers and passengers, there is the question of the right balance of agency. For example, if a veto feature exists for one group, that group’s agency would increase at the expense of the other’s. Explanations of the resulting outcomes may then become especially necessary for the affected party.
Second, if social explanations exist in an environment where a user group performs joint actions, such that there is a common platform where “knowledge sharing” and “social learning” occur, collective perceived efficacy and agency can develop. When there is collective agency, people can contest the decisions made by AI systems and effect meaningful change.
Between the lines
This paper starts the conversation on designing for user agency in XAI systems. As explained above, our account of the relationship between explanation and agency may be incomplete. Further research should formalize these observed relationships to encourage XAI designers to prioritize agency while working on explainability.
It is not enough for AI-infused systems to explain their decisions; these explanations need to be useful and actionable. As personalized forms of these AI-infused systems proliferate, people should be able to modify their experience. This opens many research paths. For example, in the security sphere, answering questions such as “Who should have access to a certain agency level?” becomes important, especially when that access affects others.