Montreal AI Ethics Institute

Democratizing AI ethics literacy


Setting the Right Expectations: Algorithmic Recourse Over Time

December 21, 2023

šŸ”¬ Research Summary by Joao Fonseca, Andrew Bell, Carlo Abrate, and Julia Stoyanovich.

Joao Fonseca is an invited assistant professor at Nova Information Management School in Lisbon, Portugal, and researches synthetic data generation and algorithmic recourse.

Andrew Bell is a Ph.D. Candidate in Computer Science at New York University studying the practical application of concepts from Responsible AI like explainability, fairness, and algorithmic recourse.

Carlo Abrate is a Ph.D. candidate in Data Science, working on explainability and counterfactuals, at Sapienza University in Rome, Italy, and Centai in Turin, Italy.

Julia Stoyanovich is an Associate Professor of Computer Science & Engineering and Data Science and Director of the Center for Responsible AI at New York University.

[Original paper by Joao Fonseca, Andrew Bell, Carlo Abrate, Francesco Bonchi, and Julia Stoyanovich]


Overview: Artificial intelligence (AI) systems offer the potential to enhance lives, but they also pose the risk of biased or erroneous decision-making. Algorithmic recourse methods aim to empower individuals to take action against an unfavorable outcome produced by these systems. In a nutshell, a system that supports algorithmic recourse will generate a recommendation for an action that an individual can take to change their outcome from unfavorable to favorable. The contract is that, if an individual acts on the recommendation, then they will receive a favorable outcome. However, as we show in this paper, this contract is rarely upheld, because the environment may change from the moment an individual receives the recommendation until they take action. This paper describes a simulation framework to study the effects of a continuously changing environment on algorithmic recourse for multiple agents.


Introduction

When we receive an undesirable result from an automated system, it is common to ask (1) why we received such an outcome and (2) how to reverse it. Algorithmic recourse aims to answer these questions. However, the temporal dimension plays a crucial role in this context. Consider a scenario where an AI system advises a loan applicant to improve their credit score. If it takes several months to achieve this, economic conditions and lending criteria might have evolved, rendering the initial advice obsolete. 

This research highlights the role that time plays in the reliability of algorithmic recourse.

We propose a simulation framework to study and model environmental uncertainty over time. Furthermore, we examine the dynamics emerging from multiple individuals competing to obtain a limited resource, introducing an additional layer of uncertainty in algorithmic recourse. 

Our findings highlight the unreliability of recourse recommendations across several competitive settings, which can set misguided expectations and lead to detrimental outcomes. These findings emphasize the importance of careful consideration when AI systems offer guidance in dynamic environments.

Key Insights

Transparency and agency in AI-based decision-making

AI is becoming an integral part of critical decision-making domains like healthcare, finance, and hiring. While AI can potentially improve our lives, it also carries the risk of erroneous outcomes. Algorithmic recourse addresses this problem by enabling individuals to understand why a particular AI decision was made and what actions can be taken to potentially reverse it. Typically, recourse consists of an individual making a first, unsuccessful attempt and then being given the opportunity to make a second attempt at a later time. Depending on the delay between receiving recourse, taking action, and retrying, the original recommendations may become demonstrably less reliable.

The temporal aspect of algorithmic recourse

Consider, as an example, a loan application scenario, wherein an AI system denies an individual’s application for a loan but provides information on what that individual can do to be approved for the loan if they apply again at a later date. The individual may be told that their loan application was denied because their credit score is 50 points lower than necessary. One could imagine that it takes the individual 6 months to a year to improve their credit score. However, in the meantime, the criteria for loan approval might change, rendering the initial recourse invalid. As a result, the initial recommendation of ā€œimproving your credit score by 50 pointsā€ may have set false expectations.

The temporal aspect in algorithmic recourse is often overlooked. In a practical setting, we consider time to be intrinsic to the concept of recourse itself, since it involves individuals receiving advice and having the opportunity to act on it at a later time. Ignoring the temporal dimension can lead to unreliable recommendations and false expectations. We addressed this problem by formalizing multi-agent algorithmic recourse, proposing a simulation framework to evaluate recourse over time, and defining a recourse reliability metric.

Simulating multi-agent algorithmic recourse over time

In the multi-agent algorithmic recourse setting, individuals compete for scarce resources over time. Continuing the previous example, there might be multiple loan applicants competing for a limited number of loans. A black-box classifier (or ranker) determines which applicants receive positive outcomes. Instead of evaluating a single individual against a fixed score, the system ranks all individuals to identify the top candidates. There is therefore no predefined threshold: it is set dynamically by the number of available resources and the scores of the highest-ranked individuals.
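To make the dynamic threshold concrete, here is a minimal Python sketch (the function and variable names are ours, not the paper’s): with k resources available at a given time step, the effective threshold is simply the k-th highest score in the current pool.

```python
import numpy as np

def dynamic_threshold(scores: np.ndarray, k: int) -> float:
    """With k resources (e.g., k loans) to allocate, the effective
    decision threshold is the k-th highest score in the pool; it is
    not fixed, but moves whenever applicants or their scores change."""
    return float(np.sort(scores)[-k])

rng = np.random.default_rng(0)
print(dynamic_threshold(rng.random(100), k=10))  # changes with every new pool
```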

We introduce two important factors to model individuals’ behavior: adaptation and effort. Adaptation refers to how faithfully an individual follows the provided recommendation, while effort reflects their willingness to make changes based on it. We also describe possible regimes for each factor and combine them into two settings, based on how agents perceive the effort required to match a recommendation (i.e., their incentive to adapt) and on how faithfully they are able to adapt.
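As a rough illustration of how these two factors might enter an agent’s update rule (our own simplification, not the authors’ exact model), effort can scale how much of the recommended change the agent actually makes, while adaptation can control how closely the agent’s move tracks the recommended direction:

```python
import numpy as np

def act_on_recourse(x: np.ndarray, recommendation: np.ndarray,
                    effort: float, adaptation: float,
                    rng: np.random.Generator) -> np.ndarray:
    """Move an agent's features x toward the recommended features.
    effort in [0, 1]: fraction of the recommended change the agent makes.
    adaptation in [0, 1]: fidelity to the recommendation; lower values
    add more noise around the recommended direction."""
    direction = recommendation - x
    noise = rng.normal(scale=1.0 - adaptation, size=x.shape)
    return x + effort * (direction + noise)
```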

The simulation framework encompasses these concepts. At each time step, applicants receive scores, the top-scored applicants receive positive outcomes, and the threshold for a positive outcome is adjusted accordingly. Those who do not receive positive outcomes are offered recourse recommendations, which they can choose to act on at each time step of the simulation with varying degrees of adaptation and effort. We also incorporate global and individual parameters to control the difficulty of taking recourse actions and each individual’s willingness to act.
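One possible shape for that loop, sketched under our own assumptions (a linear scorer stands in for the black-box model, the recommendation is simply the closest point on the current decision boundary, and dynamic_threshold and act_on_recourse are the sketches above):

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Agent:
    x: np.ndarray          # feature vector
    effort: float          # willingness to change
    adaptation: float      # fidelity to recommendations
    succeeded: bool = False

def simulate(agents: list, weights: np.ndarray, k: int,
             n_steps: int, rng: np.random.Generator) -> list:
    """Each step: score everyone, grant the top k a positive outcome,
    recompute the threshold, and let the rest act on a recommendation
    that would have crossed the *current* threshold."""
    thresholds = []
    for _ in range(n_steps):
        scores = np.array([a.x @ weights for a in agents])
        threshold = dynamic_threshold(scores, k)
        thresholds.append(threshold)
        for a, s in zip(agents, scores):
            if s >= threshold:
                a.succeeded = True
            else:
                # recommendation: project onto the hyperplane where the
                # score would equal today's threshold
                rec = a.x + (threshold - s) * weights / (weights @ weights)
                a.x = act_on_recourse(a.x, rec, a.effort, a.adaptation, rng)
    return thresholds
```

Because the threshold is recomputed at every step, an agent who faithfully implements a recommendation can still fall below the new threshold once competitors have also adapted; this is precisely the failure mode the framework is built to expose.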

Quantifying Recourse Reliability

To assess the reliability of recourse, we introduce a new metric, Recourse Reliability (RR). RR quantifies the extent to which agents’ expectations of positive outcomes through recourse align with reality: it is the proportion of agents who acted on recourse and received a positive outcome, out of all agents who acted on recourse and therefore expected a positive outcome.
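In code, the metric reduces to a simple ratio (a minimal sketch; the paper computes RR per time step, and the variable names here are ours):

```python
import numpy as np

def recourse_reliability(acted, succeeded) -> float:
    """RR: among agents who acted on a recourse recommendation (and
    therefore expected a positive outcome), the fraction who actually
    received one. RR = 1 means the recourse 'contract' was upheld."""
    acted = np.asarray(acted, dtype=bool)
    succeeded = np.asarray(succeeded, dtype=bool)
    if acted.sum() == 0:
        return float("nan")  # undefined when no one acted on recourse
    return float((acted & succeeded).sum() / acted.sum())

print(recourse_reliability([1, 1, 1, 0], [1, 0, 1, 1]))  # 0.666...
```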

This comprehensive framework allows us to explore and understand the dynamics of multi-agent recourse in various settings, providing insights into the evolving landscape of access to valuable resources.

Between the lines

Amid growing concerns about AI ethics, regulation, and digital agency, algorithmic recourse has emerged as an essential tool for addressing these pressing issues. In fact, algorithmic recourse may become legally necessary with the passing of legislation like the European Union’s AI Act.

Systems providing recourse without considering temporal changes can create unrealistic expectations and even lead to harmful outcomes. For example, in college admissions, a previously denied applicant’s recommended improvements may fall short if the selectivity of universities changes over time. Our framework can be used to provide guidance to system-level decision-makers like banks, colleges, and governments. It offers a means to measure recourse reliability and provide individuals with uncertainty estimates regarding their actions. By adjusting resource constraints based on empirical insights, decision-makers can optimize recourse reliability. For example, colleges can estimate the ideal incoming class size to maintain stable admission thresholds.

In conclusion, the temporal dimension of algorithmic recourse is critical. Understanding the impact of time and evolving contexts is essential for AI systems to provide reliable, fair, and ethical guidance in a rapidly changing world. Taking time into account calls for substantial additional work on understanding recourse reliability in multi-agent, multi-time-step settings, and on developing recourse methods that reward individuals’ efforts toward following recommendations.


