Counterfactual Explanations via Locally-guided Sequential Algorithmic Recourse

October 4, 2023

🔬 Research Summary by Edward Small, a Ph.D. candidate in computer science at the Royal Melbourne Institute of Technology whose research focuses on fair and explainable artificial intelligence.

[Original paper by Edward A. Small, Jeffrey N. Clark, Christopher J. McWilliams, Kacper Sokol, Jeffrey Chan, Flora D. Salim, and Raul Santos-Rodriguez]


Overview: As AI becomes more prevalent in everyday life, it is becoming increasingly important for users to have automated decisions explained to them. If a user is unhappy with a decision, they may wish to ask, “How can I change my outcome?” This paper looks to answer that question by automating the creation of feasible, actionable, and meaningful algorithmic recourse: the actions a user needs to take to change their fate.


Introduction

Whenever an individual experiences an automated decision made about them by an AI that they deem unfavorable, two questions naturally come to mind:

Why did the AI make this decision, and what can I do to change it?

We refer to this change as algorithmic recourse (AR). It defines the path a user must take to alter a decision made by an automated agent, such as an AI. Automating the construction of meaningful and actionable AR can be challenging, especially in complex systems tackling difficult problems. Thus, the task often falls on the shoulders of a human agent whenever a user requires an explanation. However, if AI is to be truly scalable, then its explanations must be scalable too. This has become more important with regulations such as the GDPR establishing a “right to an explanation.” To tackle this, we developed a method that automatically produces a set of feasible actions that creates a pathway between the factual (the current outcome) and a counterfactual (the desired outcome).

Key Insights

The Problem with Counterfactuals

Counterfactual thinking is an incredibly human-centric way to explore actions and outcomes and is deeply rooted in psychology and law. In short, counterfactual thinking captures the thought experiment:

X occurred, but if I had done Z, then Y would have happened instead.

Here, X is the true outcome (the factual), Y is a desired outcome (the counterfactual), and Z is an action or set of actions (the recourse). Counterfactual explanations are a machine learning extension of counterfactual thinking: given X, we look to find an example Y whilst imposing some constraint. For example, we may wish for X and Y to be as close as possible, or we may look for a Y that the model predicts with very high confidence.
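To make this concrete, here is a minimal sketch of a counterfactual explanation search: given a factual instance X, find the closest example Y that the model classifies differently. The toy dataset, the logistic regression model, and the strategy of searching over existing data points are illustrative assumptions, not the paper's method.

```python
# A minimal sketch (not the paper's method): find a counterfactual Y for a
# factual X by searching for the closest point that the model classifies
# differently. The toy data, model, and candidate pool are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy tabular data and a simple classifier standing in for the deployed model.
X_data, y_data = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression().fit(X_data, y_data)

def nearest_counterfactual(x, model, candidates):
    """Return the candidate closest to x (in L2 distance) whose predicted
    class differs from the prediction for x."""
    factual_label = model.predict(x.reshape(1, -1))[0]
    flipped = candidates[model.predict(candidates) != factual_label]
    if len(flipped) == 0:
        return None
    return flipped[np.argmin(np.linalg.norm(flipped - x, axis=1))]

x_factual = X_data[0]                                    # the factual X
y_cf = nearest_counterfactual(x_factual, model, X_data)  # the counterfactual Y
print("factual:       ", x_factual)
print("counterfactual:", y_cf)
```

Restricting the search to real data points is just one possible constraint; many explainers instead optimize a synthetic Y, but the "find Y subject to a constraint" structure is the same.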

The problem with counterfactual explanations in machine learning is that there is a strong focus on finding the counterfactual but almost no focus on verifying that a recourse actually exists between the factual and the counterfactual. Essentially, there is always an implicit assumption that the pathway from X to Y is linear and traversable, so Z is simply the difference between Y and X.

However, this is clearly not the case. Counterfactual explanations have been shown to suggest obviously impossible actions, such as “change race” or “become younger.” The current fix is essentially manual intervention: we constrain the search for Y so that, for example, a user cannot change race. This is very limiting, and it shows that counterfactual explainers can suggest seemingly sensible counterfactuals that are, in fact, not feasible. Furthermore, if Z contains a set of actions, counterfactual explanations cannot tell a user which actions should be performed first. As we show in our paper, this is especially important in areas such as healthcare, where critical therapies must be performed in a certain order.
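The implicit assumption above can be written down directly: the recourse is just the vector difference between the counterfactual and the factual. The sketch below uses hypothetical feature names and an illustrative “immutable” set (not taken from the paper) to show the naive Z = Y − X, the usual manual fix of flagging immutable features, and the fact that nothing in Z says what order to act in.

```python
# A minimal sketch of the implicit one-step recourse Z = Y - X and the usual
# manual fix of flagging immutable features. The feature names, values, and
# immutable set are illustrative assumptions, not taken from the paper.
import numpy as np

feature_names = ["age", "income", "debt", "savings"]
immutable = {"age"}  # features a user cannot meaningfully change

x_factual = np.array([45.0, 30_000.0, 12_000.0, 1_000.0])        # X
y_counterfactual = np.array([38.0, 42_000.0, 6_000.0, 5_000.0])  # Y from an explainer

z_recourse = y_counterfactual - x_factual  # the implicit, single-step recourse Z

for name, delta in zip(feature_names, z_recourse):
    flag = "  <-- infeasible (immutable feature)" if name in immutable and delta != 0 else ""
    print(f"{name}: change by {delta:+,.0f}{flag}")

# Even after masking immutable features, Z says nothing about the order in
# which the remaining changes should be made, or whether the straight-line
# path from X to Y passes through realistic intermediate states.
```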

Feasible Algorithmic Recourse

Most counterfactual explainers prioritize finding the counterfactual Y, and the recourse Z is merely an afterthought. In our work, we flip this process on its head. Using data density as a proxy for feasibility (i.e., if no data exists somewhere, we assume that region of the space is not traversable), we instead find a suboptimal set of feasible actions Z’ that can change the outcome from X to Y. We then do a second pass over the actions Z’ to create an optimal set of actions Z that still adheres to data density.
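The sketch below is a rough approximation of this idea rather than the authors' exact algorithm: a kernel density estimate over the training data acts as the feasibility proxy, and a greedy search builds a sequential path of small steps from the factual towards the desired class, only ever stepping into regions dense enough in data. The two-moons dataset, the random forest, the KDE bandwidth, and the greedy proposal scheme are all illustrative assumptions.

```python
# A rough, illustrative sketch of density-guided sequential recourse (not the
# authors' exact algorithm): small steps from the factual X towards the target
# class, accepted only if they land in regions that are dense in training data.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KernelDensity

X_data, y_data = make_moons(n_samples=600, noise=0.1, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_data, y_data)

kde = KernelDensity(bandwidth=0.2).fit(X_data)               # density proxy for feasibility
density_floor = np.quantile(kde.score_samples(X_data), 0.1)  # "feasible" log-density threshold

def density_guided_path(x, target_class, step=0.1, max_steps=200, n_candidates=32, seed=0):
    """Greedily take small, high-density steps towards the target class,
    recording the sequence of intermediate states (the recourse path)."""
    rng = np.random.default_rng(seed)
    path = [x.copy()]
    for _ in range(max_steps):
        current = path[-1]
        if model.predict(current.reshape(1, -1))[0] == target_class:
            return np.array(path)                            # counterfactual reached
        # Propose small random steps and keep only those in dense regions.
        proposals = current + step * rng.normal(size=(n_candidates, x.shape[0]))
        feasible = proposals[kde.score_samples(proposals) > density_floor]
        if len(feasible) == 0:
            break                                            # no feasible move from here
        # Among feasible proposals, move to the one most likely to be in the target class.
        probs = model.predict_proba(feasible)[:, target_class]
        path.append(feasible[np.argmax(probs)])
    return np.array(path)

x_factual = X_data[y_data == 0][0]
path = density_guided_path(x_factual, target_class=1)
print(f"{len(path) - 1} steps; final prediction:",
      model.predict(path[-1].reshape(1, -1))[0])
```

The second pass the authors describe, refining the feasible path Z’ into an optimal Z, is omitted here; the point is only that feasibility (density) is enforced at every step rather than only at the endpoint.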

In experiments, we found that this method offered much more intelligent algorithmic recourse, especially where the order of actions was critical for success. The example we give is discharging patients from intensive care. One critical factor in discharging a patient from intensive care is ensuring that their breathing is stable and unassisted (i.e., they are not intubated). Our method captured sensible behavior, such as weaning individuals off mechanical ventilation and not removing it until other vital signs (such as consciousness and heart rate variance) had improved and stabilized.

Between the Lines

Automated explanations for the everyday person are an increasingly important gap to fill. When offering explanations to an individual, the recourse is just as important as the counterfactual. In fact, a counterfactual explanation is harmful if there exists no set of actions one can feasibly execute to achieve it. Therefore, there must be a focus on good algorithmic recourse that leads to feasible counterfactuals. Here, we use data density as a proxy for feasibility, allowing us to capture complex behavior and dynamics. However, this approach also has its flaws. Further research into constructing feasible recourse is required, such as using time series or causal models to better assess how feasible an action is.

