Setting the Right Expectations: Algorithmic Recourse Over Time

December 21, 2023

🔬 Research Summary by Joao Fonseca, Andrew Bell, Carlo Abrate, and Julia Stoyanovich.

Joao Fonseca is an invited assistant professor at Nova Information Management School in Lisbon, Portugal, and researches synthetic data generation and algorithmic recourse.

Andrew Bell is a Ph.D. Candidate in Computer Science at New York University studying the practical application of concepts from Responsible AI like explainability, fairness, and algorithmic recourse.

Carlo Abrate is a Ph.D. candidate in Data Science, working on explainability and counterfactuals, at Sapienza University in Rome, Italy, and Centai in Turin, Italy.

Julia Stoyanovich is an Associate Professor of Computer Science & Engineering and Data Science and Director of the Center for Responsible AI at New York University.

[Original paper by Joao Fonseca, Andrew Bell, Carlo Abrate, Francesco Bonchi, and Julia Stoyanovich]


Overview: Artificial intelligence (AI) systems offer the potential to enhance lives, but they also pose the risk of biased or erroneous decision-making. Algorithmic recourse methods aim to empower individuals to take action against an unfavorable outcome produced by these systems. In a nutshell, a system that supports algorithmic recourse will generate a recommendation for an action that an individual can take to change their outcome from unfavorable to favorable. The contract is that, if an individual acts on the recommendation, then they will receive a favorable outcome. However, as we show in this paper, this contract is rarely upheld, because the environment may change from the moment an individual receives the recommendation until they take action. This paper describes a simulation framework to study the effects of a continuously changing environment on algorithmic recourse for multiple agents.


Introduction

When we receive an undesirable result from an automated system, it is common to ask (1) why we received such an outcome and (2) how to reverse it. Algorithmic recourse aims to answer these questions. However, the temporal dimension plays a crucial role in this context. Consider a scenario where an AI system advises a loan applicant to improve their credit score. If it takes several months to achieve this, economic conditions and lending criteria might have evolved, rendering the initial advice obsolete. 

This research highlights the importance of time in the reliability of algorithmic recourse.

We propose a simulation framework to study and model environmental uncertainty over time. Furthermore, we examine the dynamics emerging from multiple individuals competing to obtain a limited resource, introducing an additional layer of uncertainty in algorithmic recourse. 

Our findings highlight the lack of reliability of recourse recommendations across several competitive settings, potentially setting misguided expectations that could result in detrimental outcomes. These findings emphasize the need for careful consideration when AI systems offer guidance in dynamic environments.

Key Insights

Transparency and agency in AI-based decision-making

AI is becoming an integral part of critical decision-making domains like healthcare, finance, and hiring. While AI can potentially improve our lives, it also carries the risk of erroneous outcomes. Algorithmic recourse addresses this problem by enabling individuals to understand why a particular AI decision was made and what actions can be taken to potentially reverse it. Typically, recourse consists of an individual making a first, unsuccessful, attempt and then being given an opportunity to make a second attempt at a later time. Depending on the delay between receiving recourse, taking action, and retrying, the original recourse recommendations may become demonstrably less reliable.

The temporal aspect of algorithmic recourse

Consider, as an example, a loan application scenario, wherein an AI system denies an individual’s application for a loan but provides information on what that individual can do to be approved for the loan if they apply again at a later date. The individual may be told that their loan application was denied because their credit score is 50 points lower than necessary. One could imagine that it takes the individual 6 months to a year to improve their credit score. However, in the meantime, the criteria for loan approval might change, rendering the initial recourse invalid. As a result, the initial recommendation of “improving your credit score by 50 points” may have set false expectations.

The temporal aspect in algorithmic recourse is often overlooked. In a practical setting, we consider time to be intrinsic to the concept of recourse itself, since it involves individuals receiving advice and having the opportunity to act on it at a later time. Ignoring the temporal dimension can lead to unreliable recommendations and false expectations. We address this problem by formalizing multi-agent algorithmic recourse, proposing a simulation framework to evaluate recourse over time, and defining a recourse reliability metric.

Simulating multi-agent algorithmic recourse over time

In the multi-agent algorithmic recourse setting, individuals compete for scarce resources over time. Using the previous example, there might be multiple loan applicants competing for a limited number of loans. A black-box classifier (or ranker) determines which applicants should receive positive outcomes. Instead of just looking at one individual trying to achieve a certain score, in this situation, we need to rank individuals to identify the top candidates. Therefore, there is no predefined threshold; it is dynamically set based on the number of available resources and the scores of the highest-ranked individuals.
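For concreteness, here is a minimal sketch (not code from the paper) of how such a dynamic threshold can be computed: with k resources available, the bar for a positive outcome is simply the k-th highest score among the current applicants, so it moves whenever the applicant pool or their scores change.

```python
import numpy as np

def dynamic_threshold(scores: np.ndarray, k: int) -> float:
    """Score needed for a positive outcome when only k resources exist.

    Illustrative helper: the threshold is the k-th highest score this
    time step, so it shifts as applicants' scores change.
    """
    top_k = np.sort(scores)[::-1][:k]
    return float(top_k[-1])

# Example: 6 loan applicants competing for 2 available loans.
scores = np.array([0.42, 0.91, 0.55, 0.73, 0.66, 0.88])
print(dynamic_threshold(scores, k=2))  # 0.88 is the bar this round
```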

We introduce two important factors to model individuals’ behavior: adaptation and effort. Adaptation refers to how faithfully an individual follows the provided recommendation, while effort reflects their willingness to make changes based on the recommendation. We also provide examples of possible settings for each of these factors and combine them into two settings, based on how agents perceive the effort required to match a recommendation (i.e., incentive for adaptation), and how they are able to adapt.
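As a rough illustration of these two factors (using hypothetical names and a linear update rule, not the paper's exact formulation), one can think of an agent's move at each time step as follows: effort governs whether the agent acts at all, and adaptation governs how far they move toward the recommended point.

```python
import numpy as np

rng = np.random.default_rng(0)

def act_on_recourse(features, recommendation, adaptation, effort):
    """Move an agent's features toward the recommended counterfactual.

    Hypothetical sketch: `effort` (0..1) is the probability the agent acts
    this time step, and `adaptation` (0..1) scales how faithfully they
    follow the recommended direction.
    """
    if rng.random() > effort:            # the agent does not act this step
        return features
    return features + adaptation * (recommendation - features)

x = np.array([580.0, 0.35])              # e.g. credit score, debt-to-income ratio
rec = np.array([630.0, 0.30])            # recommended counterfactual
print(act_on_recourse(x, rec, adaptation=0.6, effort=0.9))
```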

The simulation framework encompasses these concepts. At each time step, applicants receive scores, the top-scored applicants receive positive outcomes, and the threshold for a positive outcome is adjusted accordingly. Those who don’t receive positive outcomes are offered recourse recommendations, which they can choose to take action on at each time step of the simulation with varying degrees of adaptation and effort. We also incorporate global and individual parameters to control the difficulty of taking recourse actions and individual willingness.
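A compressed, toy version of such a simulation loop might look as follows; it assumes one-dimensional scores and the same global adaptation and effort parameters for every agent, which is a simplification of the framework described above.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n_agents=100, k=10, n_steps=5, adaptation=0.7, effort=0.8):
    """Toy multi-agent recourse simulation (illustrative, not the paper's code)."""
    scores = rng.normal(0.0, 1.0, n_agents)
    granted = np.zeros(n_agents, dtype=bool)
    for _ in range(n_steps):
        # 1. Rank remaining applicants; the k highest scores receive the resource.
        open_idx = np.flatnonzero(~granted)
        order = open_idx[np.argsort(scores[open_idx])[::-1]]
        winners = order[:k]
        granted[winners] = True
        threshold = scores[winners[-1]] if len(winners) else np.inf
        # 2. Denied agents receive recourse ("reach the current threshold") and
        #    may act on it with some effort and degree of adaptation.
        losers = np.flatnonzero(~granted)
        acts = rng.random(len(losers)) < effort
        gap = threshold - scores[losers]
        scores[losers] += np.where(acts, adaptation * gap, 0.0)
    return granted.mean()

print(f"fraction of agents who eventually received the resource: {simulate():.2f}")
```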

Quantifying Recourse Reliability

To assess the reliability of recourse, we introduce a new metric, Recourse Reliability (RR). RR quantifies the extent to which agents’ expectations of positive outcomes through recourse align with reality: among all agents who acted on a recourse recommendation (and therefore expected a positive outcome), it measures the proportion who actually received one.
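Read this way, RR reduces to a simple ratio; the sketch below is one possible reading of the metric, not code from the paper.

```python
def recourse_reliability(received_positive: int, acted_on_recourse: int) -> float:
    """Fraction of agents who acted on recourse (and thus expected a positive
    outcome) that actually received one. Illustrative reading of the RR
    metric, not the authors' implementation."""
    if acted_on_recourse == 0:
        return float("nan")
    return received_positive / acted_on_recourse

# Example: 120 agents followed their recommendation, 84 were later approved.
print(recourse_reliability(84, 120))  # 0.7
```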

This comprehensive framework allows us to explore and understand the dynamics of multi-agent recourse in various settings, providing insights into the evolving landscape of access to valuable resources.

Between the lines

Amid growing concerns about AI ethics, regulation, and digital agency, algorithmic recourse has emerged as an essential tool for addressing these pressing issues. In fact, algorithmic recourse may become legally required with the passage of legislation like the European Union AI Act.

Systems providing recourse without considering temporal changes can create unrealistic expectations and even lead to harmful outcomes. For example, in college admissions, a previously denied applicant’s recommended improvements may fall short if the selectivity of universities changes over time. Our framework can be used to provide guidance to system-level decision-makers like banks, colleges, and governments. It offers a means to measure recourse reliability and provide individuals with uncertainty estimates regarding their actions. By adjusting resource constraints based on empirical insights, decision-makers can optimize recourse reliability. For example, colleges can estimate the ideal incoming class size to maintain stable admission thresholds.

In conclusion, the temporal dimension in algorithmic recourse is critical. Understanding the impact of time and evolving contexts is essential for AI systems to provide reliable, fair, and ethical guidance in a rapidly changing world. Accounting for time in algorithmic recourse calls for substantial additional work on understanding recourse reliability in multi-agent, multi-time-step settings, and on developing recourse methods that reward individuals’ efforts toward following recommendations.

