The philosophical basis of algorithmic recourse

May 22, 2023

🔬 Research Summary by Mark Alfano, a philosopher at Macquarie University who specializes in ethical and epistemic issues related to computational social science.

[Original paper by Suresh Venkatasubramanian, Mark Alfano]


Overview: Many things we care most about are modally robust because they systematically deliver correlative benefits across various counterfactual scenarios. We contend that recourse — the systematic capacity to reverse unfavorable decisions by algorithms and bureaucracies — is a modally robust good. In particular, we argue that two essential components of a good life — temporally extended agency and trust — are underwritten by recourse, and we offer a novel account of how to implement it in a morally and politically acceptable way.


Robodebt and Recourse

In 2016, the Australian government replaced manual fraud detection with the Robodebt scheme, an algorithmic decision-making process. Years later, it was revealed that Robodebt led to nearly half a million false accusations of welfare fraud. In 2020, the government eventually settled a class action lawsuit and is currently on the hook for nearly two billion dollars in repayments to victims, in addition to facing a Royal Commission set to begin in August 2022.

Citizens who faced false accusations had little recourse. They were classified by an algorithm as fraudsters and faced an uphill battle in getting a human to listen to them. Our research addresses this problem of algorithmic recourse, which we conceptualize as an offshoot of the problem of bureaucratic recourse. Large organizations operating with complex, opaque rules sometimes — even often — get things wrong. The error may be traceable to a human mistake or prejudice, failure to anticipate novel phenomena, algorithmic bias, or some combination of these. Regardless of the cause, when people are affected by such errors, they need recourse. 

And even if no error has been made, people still need to understand why the verdict was what it was and — perhaps even more so — what it would take to reverse it. Thus, we understand recourse as a set of instructions for the easiest way to get a decision reversed — either by correcting the record or addressing existing shortcomings. Algorithmic recourse is a special case where the decision is at least partly made or informed by an algorithm. Using the method of conceptual analysis from philosophy, we articulate a precise definition of algorithmic recourse, explain what makes it valuable, and articulate how to implement it.
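
To make this notion concrete, here is a minimal sketch, not from the paper, of how such a recourse recommendation might be represented alongside an automated verdict. All of the names (RecourseStep, RecourseRecommendation, and so on) are hypothetical illustrations rather than any existing system's API.

```python
from dataclasses import dataclass, field

@dataclass
class RecourseStep:
    """One concrete action the affected person could take."""
    description: str         # e.g. "Submit corrected payslips for 2016"
    corrects_record: bool    # True if it fixes an error, False if it addresses a shortcoming
    estimated_effort: float  # rough cost to the person (hours, fees, etc.)

@dataclass
class RecourseRecommendation:
    """The set of instructions that should accompany an adverse algorithmic decision."""
    decision_id: str
    reason: str  # why the verdict was what it was
    steps: list[RecourseStep] = field(default_factory=list)

    def easiest_path(self) -> RecourseStep | None:
        """Return the lowest-effort step: the 'easiest way' to get the decision reversed."""
        return min(self.steps, key=lambda s: s.estimated_effort, default=None)
```

The point of the sketch is structural: on this view, an adverse verdict is never delivered as a bare yes or no, but always travels with an explanation and at least one concrete, costed path to reversal.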

What Recourse Is and Why It Matters

Humans are an unusual species in our capacity to make long-term plans that depend constitutively on the decision-making of others. We do X in order to do Y in order to do Z, sometimes years or even decades in the future. And we do so in a way that presupposes that other people, and the groups and institutions they belong to, will play the required part when the time comes. This capacity for long-term planning is underwritten by our capacity to reasonably trust that the world will be and work as we expect far into the future. There’s little point in spending years in training (e.g., pursuing a professional, undergraduate, or postgraduate degree) if your diploma is not accepted as proof of expertise. There’s not much reason to save for a down payment on a house if the banks deny you a mortgage for inscrutable or capricious reasons. There’s no rationale for applying for welfare benefits if the tax office is liable to falsely accuse you of fraud and you have no reasonable way to clear your name.

One crucial role that society and governance play is to ensure and stabilize the material, social, economic, and political conditions for the possibility of long-term planning and the trust that underpins it. In other words, it’s up to us to ensure that our world is organized so that people can make long-term plans that depend fundamentally on the decisions that others, bureaucracies, and algorithms make, have made, and have canalized in physical and digital infrastructures.

Fostering this capacity for long-term planning is especially important for marginalized individuals and communities, who typically face additional challenges to exercising their agency due to the precarity of their situation. For instance, given that they were on welfare, many of the victims of the Robodebt scandal lacked the education, legal expertise, and technical know-how to explain how the algorithm had gotten things so wrong. As is often the case, the least well-off members of society did not benefit from, but bore the brunt of, a reckless policy choice. Nearly 5% of all Australian households suffered from the Robodebt fiasco. Many were burdened financially for years. Some lost their homes. The toll on the mental health of Australian citizens has been enormous, ranging from stress to suicide.

Given Robodebt’s shortcomings, it would have been better not to use it at all. Alternatively, had the system been adequately improved in response to pressure from independent audits, it might have been acceptable to employ it in an advisory (rather than a decision-making) role.

But even when algorithms are as good as we can make them, they will inevitably make mistakes, and people will inevitably want to know what they need to do to get a more favorable decision. This is where algorithmic recourse comes in. When people receive significant, negative verdicts influenced or made by algorithms, they should always be supplied with one or more actionable paths to reversing the verdict. A path is actionable if and only if there is something the agent could reasonably be expected to do to follow it. For example, if the only way to get a decision reversed is to change your parentage or your national origin, that’s not actionable. By contrast, if you can get a decision reversed by correcting an error in the record or by completing some training, that would typically be actionable.
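
As a purely illustrative sketch, not drawn from the paper, here is how the actionability constraint might be enforced when computing a recommendation for a simple threshold model. The feature names, weights, and costs are all hypothetical, and real recourse systems rely on far more sophisticated counterfactual search; the point is only that immutable attributes are excluded and the cheapest decision-flipping change among the remaining ones is returned.

```python
# Hypothetical illustration: find the cheapest actionable change that flips
# an adverse decision made by a simple linear threshold model.

WEIGHTS = {
    "reported_income_error": -2.0,  # an error in the record counts heavily against you
    "training_completed": 1.5,
    "years_on_record": 0.5,
    "national_origin": 0.0,         # present in the data, but must never ground recourse
}
THRESHOLD = 1.0  # scores at or above this yield a favourable decision

# Only some features are actionable: correcting the record or completing training
# is something a person can reasonably do; changing parentage or origin is not.
ACTIONABLE = {
    "reported_income_error": {"new_value": 0, "cost": 1.0},  # correct the record
    "training_completed":    {"new_value": 1, "cost": 5.0},  # complete some training
}

def score(features: dict) -> float:
    return sum(WEIGHTS[name] * value for name, value in features.items())

def cheapest_recourse(features: dict):
    """Return (feature, change) for the lowest-cost single change that reverses the decision."""
    if score(features) >= THRESHOLD:
        return None  # the decision is already favourable
    flips = []
    for name, change in ACTIONABLE.items():
        candidate = dict(features, **{name: change["new_value"]})
        if score(candidate) >= THRESHOLD:
            flips.append((change["cost"], name, change))
    if not flips:
        return None  # no actionable path exists; by the argument above, the person is owed one
    _, name, change = min(flips, key=lambda f: f[0])
    return name, change

person = {"reported_income_error": 1, "training_completed": 0, "years_on_record": 3, "national_origin": 1}
print(cheapest_recourse(person))  # -> ('reported_income_error', {'new_value': 0, 'cost': 1.0})
```

If the search returns no actionable path at all, that is itself a warning sign: on the view defended here, a system should not be issuing significant negative verdicts that leave people with nothing they can reasonably do.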

Actionable recourse recommendations are important because they enable people to engage in the long-term planning mentioned earlier. And knowing that you operate in a world that guarantees actionable recourse recommendations is even more valuable because it gives you the confidence to move forward with plans that would otherwise appear far too risky. If you’re sure that, should things go pear-shaped, you’ll be able to switch to contingency plan B (or plan C or plan D), you’re in a better position to get started on a new, uncertain venture. This is why algorithmic recourse isn’t just a nice-to-have but is essential to our emerging digital society.

Looking Ahead

As artificial intelligence becomes cheaper and more scalable, governments and industries will surely accelerate its implementation. The decisions prompted or even made by these algorithms will affect people significantly — for good and ill. We should expect that the Robodebt fiasco will not be the last or the most egregious case in which algorithms automatically ruin people’s lives. The first line of defense against bad algorithmic decision-making remains improving accuracy and refusing to allow algorithms to make decisions when they are not up to snuff. We argue that a second line of defense is to make recourse possible, actionable, and affordable.

Neither government agencies like the Australian Taxation Office nor industry actors can be relied upon to self-regulate. This is why it’s encouraging that both the European Union (through the GDPR and other policies) and the United States (through the AI Bill of Rights), among other polities, are bolstering these lines of defense. Further work needs to be done to ensure that independent authorities audit algorithmic decision-making with a fiduciary mandate to prioritize the protection and well-being of citizens and non-citizen residents who are affected, or even just potentially affected, by such systems.

