Montreal AI Ethics Institute

Democratizing AI ethics literacy


Counterfactual Explanations via Locally-guided Sequential Algorithmic Recourse

October 4, 2023

🔬 Research Summary by Edward Small, a Ph.D. candidate in computer science at the Royal Melbourne Institute of Technology whose research focuses on fair and explainable artificial intelligence.

[Original paper by Edward A. Small, Jeffrey N. Clark, Christopher J. McWilliams, Kacper Sokol, Jeffrey Chan, Flora D. Salim, and Raul Santos-Rodriguez]


Overview: As AI becomes more prevalent in everyday life, it is increasingly important for users to have automated decisions explained to them. If a user is unhappy with a decision, they may wish to ask, “How can I change my outcome?” This paper looks to answer that question by automating the creation of feasible, actionable, and meaningful algorithmic recourse – the actions needed for a user to change their fate.


Introduction

Whenever an AI makes an automated decision on an individual’s behalf that they deem unfavorable, two questions naturally come to mind:

Why did the AI make this decision, and what can I do to change it?

We refer to this change as algorithmic recourse (AR). It defines the path a user must take to alter a decision made by an automated agent, such as an AI. Automating the construction of meaningful and actionable AR can be challenging, especially in complex systems tackling difficult problems. Thus, the task often falls on the shoulders of a human agent should a user require an explanation. However, if AI is to be truly scalable, then its explanations must be scalable too. This has become all the more important with regulations such as the GDPR invoking the “right to an explanation.” To tackle this, we developed a method that automatically produces a set of feasible actions that creates a pathway between the factual (the current outcome) and a counterfactual (the desired outcome).

Key Insights

The Problem with Counterfactuals

Counterfactual thinking is an incredibly human-centric way to explore actions and outcomes and is deeply rooted in psychology and law. In short, counterfactual thinking captures the thought experiment:

X occurred, but had I done Z, then Y would have happened instead.

Here, X is the true outcome (the factual), Y is a desired outcome (the counterfactual), and Z is a set of actions (the recourse). Counterfactual explanations are a machine learning extension of counterfactual thinking. For counterfactual explanations, we look to find an example Y, given X, whilst imposing some constraints. For example, we may require X and Y to be as close as possible, or look for a Y with a very high probability of being correct.
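
As a minimal sketch of this standard formulation (not the method from the paper; the toy linear model, dataset, and function names below are illustrative assumptions), a counterfactual can be found by searching a reference dataset for the closest point that the model assigns the desired outcome:

```python
# Illustrative sketch only (not the paper's method): a nearest-neighbour
# counterfactual search over a reference dataset, with a toy linear model
# standing in for an arbitrary black-box classifier.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 2))             # reference dataset

def predict(x, w=np.array([1.0, -1.0]), b=0.0):
    """Toy classifier: 1 if the linear score is positive, else 0."""
    return int(x @ w + b > 0)

def nearest_counterfactual(x_factual, data, desired=1):
    """Return the data point closest to the factual that gets the desired label."""
    candidates = np.array([z for z in data if predict(z) == desired])
    dists = np.linalg.norm(candidates - x_factual, axis=1)
    return candidates[np.argmin(dists)]

x = np.array([-1.0, 1.0])                    # factual X, predicted 0
y = nearest_counterfactual(x, data)          # counterfactual Y, predicted 1
z = y - x                                    # the recourse implicitly read off as Y - X
print(predict(x), predict(y), z)
```

The last line is exactly the implicit assumption discussed next: the recourse is read off as the straight-line difference between Y and X.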

The problem with counterfactual explanations in machine learning is that there is a strong focus on finding the counterfactual but almost no focus on ensuring that a recourse actually exists between the factual and the counterfactual. Essentially, there is an implicit assumption that the pathway from X to Y is linear and traversable, so Z is simply the difference between Y and X.
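
To make this concrete, the short sketch below (again only illustrative, using a naive Gaussian kernel density estimate as an assumed stand-in for a proper density model) scores points along the straight line from X to Y; when the data form two separated clusters, the midpoint of that line has essentially no data support, so the “linear” recourse is not actually traversable:

```python
# Sketch: score the straight-line path from a factual X to a counterfactual Y
# with a naive Gaussian kernel density estimate (illustrative assumption, not
# the paper's density model). Near-zero values mid-path indicate that the
# implied linear recourse Z = Y - X crosses a region with no data support.
import numpy as np

def density(point, data, bandwidth=0.5):
    sq_dists = np.sum((data - point) ** 2, axis=1)
    return float(np.mean(np.exp(-sq_dists / (2 * bandwidth ** 2))))

rng = np.random.default_rng(1)
# Two well-separated clusters with an empty region in between.
data = np.vstack([rng.normal(-3.0, 0.5, size=(100, 2)),
                  rng.normal(+3.0, 0.5, size=(100, 2))])
x_factual, y_counterfactual = data[0], data[150]
path = [x_factual + t * (y_counterfactual - x_factual) for t in np.linspace(0, 1, 11)]
print([round(density(p, data), 4) for p in path])
```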

However, this is clearly not the case. Counterfactual explanations have been shown to suggest obviously impossible actions, such as “change race” or “become younger.” The current fix is essentially manual intervention: we constrain the search for Y so that, for example, a user cannot change race. This is very limiting, and it shows that counterfactual explainers are capable of suggesting seemingly sensible counterfactuals that are, in fact, not feasible. Furthermore, if Z contains a set of actions, counterfactual explanations cannot tell a user which actions should be performed first. As we show in our paper, this is especially important in areas such as healthcare, where critical therapies must be performed in a certain order.
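
Below is a rough sketch of that manual workaround under stated assumptions (a toy classifier, an invented binary “immutable” column at index 0; none of this comes from the paper). Even when the constrained search succeeds, the recourse it returns is still a single difference vector, with no ordering over the individual actions:

```python
# Sketch of the usual manual fix: restrict the counterfactual search so that
# immutable features cannot change. The classifier, data, and the choice of
# which column is immutable are illustrative assumptions.
import numpy as np

IMMUTABLE = [0]                      # indices of features the user cannot act on

def predict(x):
    """Toy stand-in classifier over two actionable features."""
    return int(x[1] + x[2] > 1.0)

def constrained_counterfactual(x_factual, data, desired=1):
    """Closest point with the desired label that agrees on the immutable features."""
    ok = [z for z in data
          if predict(z) == desired and np.array_equal(z[IMMUTABLE], x_factual[IMMUTABLE])]
    if not ok:
        return None
    ok = np.array(ok)
    return ok[np.argmin(np.linalg.norm(ok - x_factual, axis=1))]

rng = np.random.default_rng(2)
n = 300
data = np.column_stack([rng.integers(0, 2, n).astype(float),  # immutable attribute
                        rng.normal(size=n),                   # actionable feature 1
                        rng.normal(size=n)])                  # actionable feature 2
x = np.array([1.0, -0.5, 0.2])                                # factual, predicted 0
y = constrained_counterfactual(x, data)
print(y, y - x)   # Y - X lists every change at once, with no order over the actions
```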

Feasible Algorithmic Recourse

Most counterfactual explainers prioritize finding the counterfactual Y, and the recourse Z is merely an afterthought. In our work, we flip this process on its head. Using data density as a proxy for feasibility (i.e., if data do not exist somewhere, we assume that space is not traversable), we first find a suboptimal set of feasible actions Z’ that has the capacity to change the outcome from X to Y. We then make a second pass over the actions Z’ to produce an optimal set of actions Z that still respects the data density.
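
As a very rough sketch of this idea (and only that: the greedy candidate moves, the naive kernel density estimate, and every threshold below are assumptions for illustration, not the algorithm from the paper), one can grow the recourse step by step, accepting only moves that keep the intermediate state in a well-supported region of the data, and then make a second pass that drops steps the final outcome does not depend on:

```python
# Rough, illustrative sketch of density-guided sequential recourse (not the
# paper's algorithm). Steps are taken greedily toward data points of the
# desired class, a step is rejected if it lands in a low-density region, and a
# second pass prunes steps that are not needed for the final outcome.
import numpy as np

def density(point, data, bandwidth=0.5):
    sq = np.sum((data - point) ** 2, axis=1)
    return float(np.mean(np.exp(-sq / (2 * bandwidth ** 2))))

def sequential_recourse(x, predict, data, desired=1, step=0.25,
                        min_density=1e-3, max_steps=100):
    """Return a list of actions (vectors) leading from x toward the desired outcome."""
    targets = [z for z in data if predict(z) == desired]
    path, current = [], x.copy()
    for _ in range(max_steps):
        if predict(current) == desired:
            break
        best, best_density = None, -np.inf
        for target in targets:
            direction = target - current
            cand = current + step * direction / (np.linalg.norm(direction) + 1e-9)
            d = density(cand, data)
            if d >= min_density and d > best_density:   # feasible and best-supported move
                best, best_density = cand, d
        if best is None:
            break                                       # no feasible move remains
        path.append(best - current)                     # the action taken at this step
        current = best
    # Second pass: drop any action whose removal still yields the desired outcome.
    pruned = list(path)
    for i in range(len(pruned) - 1, -1, -1):
        trial = pruned[:i] + pruned[i + 1:]
        if trial and predict(x + np.sum(trial, axis=0)) == desired:
            pruned = trial
    return pruned
```

With a toy predict function and reference dataset like those in the earlier sketches, the returned list reads as an ordered set of actions: perform the first change, re-check feasibility, then the next, and so on.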

In experiments, we found that this method offered much more intelligent algorithmic recourse, especially in areas where the order of the actions was critical for success. The example we give is discharging patients from intensive care. One critical factor in discharging a patient from intensive care is ensuring that their breathing is stable and unassisted (i.e., they are not intubated). Our method captured sensible behavior, such as weaning individuals off mechanical ventilation and not removing it until other vital signs (such as consciousness and heart rate variance) had improved and stabilized.

Between the Lines

Automated explanations for the everyday person are an increasingly important gap to fill. When offering explanations to an individual, the recourse is just as important as the counterfactual. In fact, a counterfactual explanation is harmful if there exists no set of actions one can feasibly execute to achieve it. Therefore, there must be a focus on good algorithmic recourse that leads to feasible counterfactuals. Here, we use data density as a proxy for feasibility, allowing us to capture complex behavior and dynamics. However, this approach also has its flaws. Further research into constructing feasible recourse is required, such as using time series or causal models to better assess how feasible an action is.


