🔬 Research Summary by Mark Alfano, a philosopher at Macquarie University who specializes in ethical and epistemic issues related to computational social science.
[Original paper by Suresh Venkatasubramanian, Mark Alfano]
Overview: Many of the things we care most about are modally robust goods: they systematically deliver correlative benefits across a range of counterfactual scenarios. We contend that recourse, the systematic capacity to reverse unfavorable decisions by algorithms and bureaucracies, is such a good. In particular, we argue that two essential components of a good life, temporally extended agency and trust, are underwritten by recourse, and we offer a novel account of how to implement it in a morally and politically acceptable way.
Robodebt and Recourse
In 2016, the Australian government replaced manual fraud detection with the Robodebt scheme, an algorithmic decision-making process. Years later, it was revealed that Robodebt had generated nearly half a million false accusations of welfare fraud. In 2020, the government settled a class-action lawsuit and is now on the hook for nearly two billion dollars in repayments to victims, in addition to facing a Royal Commission set to begin in August.
Citizens who faced false accusations had little recourse. They were classified by an algorithm as fraudsters and faced an uphill battle in getting a human to listen to them. Our research addresses this problem of algorithmic recourse, which we conceptualize as an offshoot of the problem of bureaucratic recourse. Large organizations operating with complex, opaque rules sometimes — even often — get things wrong. The error may be traceable to a human mistake or prejudice, failure to anticipate novel phenomena, algorithmic bias, or some combination of these. Regardless of the cause, when people are affected by such errors, they need recourse.
And even if no error has been made, people still need to understand why the verdict was what it was and, perhaps even more so, what it would take to reverse it. Thus, we understand recourse as a set of instructions for the easiest way to get a decision reversed, whether by correcting the record or by addressing existing shortcomings. Algorithmic recourse is the special case where the decision is at least partly made or informed by an algorithm. Using the method of conceptual analysis from philosophy, we articulate a precise definition of algorithmic recourse, explain what makes it valuable, and show how it can be implemented.
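To make this definition concrete, here is a minimal sketch in Python of how a recourse recommendation could be represented under the definition above: a set of candidate paths, each a sequence of concrete actions with an estimated burden, ranked so that the easiest reversal comes first. The names (RecourseAction, RecoursePath, recommend) and the burden numbers are our own illustrative assumptions, not anything specified in the paper.

```python
from dataclasses import dataclass

@dataclass
class RecourseAction:
    """One concrete step the affected person can take."""
    description: str
    burden: float  # estimated cost or effort, in arbitrary units

@dataclass
class RecoursePath:
    """A sequence of actions that, taken together, reverses the decision."""
    actions: list  # list of RecourseAction

    def total_burden(self) -> float:
        return sum(a.burden for a in self.actions)

def recommend(paths):
    """Rank candidate paths easiest-first, matching the definition of
    recourse as instructions for the *easiest* way to reverse a decision."""
    return sorted(paths, key=lambda p: p.total_burden())

# Hypothetical example: two ways to contest a flagged welfare debt.
correct_record = RecoursePath([
    RecourseAction("Submit payslips showing income was averaged incorrectly", 2.0),
])
formal_appeal = RecoursePath([
    RecourseAction("Request an internal review", 3.0),
    RecourseAction("Escalate to an administrative appeals tribunal", 8.0),
])

for path in recommend([correct_record, formal_appeal]):
    print(path.total_burden(), [a.description for a in path.actions])
```

On this picture, the recommendation handed to an affected person is not a verdict plus an apology but a ranked menu of concrete, costed routes to reversal.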
What Recourse Is and Why It Matters
Humans are an unusual species in our capacity to make long-term plans that depend constitutively on the decision-making of others. We do X in order to do Y in order to do Z, sometimes years or even decades in the future. And we do so in a way that presupposes that other people, and the groups and institutions they belong to, will play the required part when the time comes. This capacity for long-term planning is underwritten by our capacity to reasonably trust that the world will be and work as we expect far into the future. There’s little point in spending years in training (e.g., pursuing a professional, undergraduate, or postgraduate degree) if your diploma is not accepted as proof of expertise. There’s not much reason to save for a down payment on a house if the banks deny you a mortgage for inscrutable or capricious reasons. And there’s no rationale for applying for welfare benefits if the tax office is liable to falsely accuse you of fraud and leave you with no reasonable way to clear your name.
One crucial role that society and governance play is to ensure and stabilize the material, social, economic, and political conditions for the possibility of long-term planning and the trust that underpins it. In other words, it’s up to us to ensure that our world is organized so that people can make long-term plans that depend fundamentally on the decisions that others, bureaucracies, and algorithms make, have made, and have canalized in physical and digital infrastructures.
Fostering this capacity for long-term planning is especially important for marginalized individuals and communities, who typically face additional obstacles to exercising their agency because of the precarity of their situation. For instance, given that they were on welfare, many victims of the Robodebt scandal lacked the education, legal expertise, and technical know-how to explain how the algorithm had gotten things so wrong. As is often the case, the least well-off members of society did not benefit from a reckless policy choice but bore the brunt of it. Nearly 5% of all Australian households suffered from the Robodebt fiasco. Many were burdened financially for years. Some lost their homes. The toll on Australians’ mental health has been enormous, ranging from stress to suicide.
Given Robodebt’s shortcomings, it would have been better not to use it at all. Alternatively, had the system been adequately improved under pressure from independent audits, it might have been acceptable to employ it in an advisory (rather than a decision-making) role.
But even algorithms that are as good as we can make them will inevitably err, and people will inevitably want to know what they need to do to obtain a more favorable decision. This is where algorithmic recourse comes in. When people receive significant negative verdicts influenced or made by algorithms, they should always be supplied with one or more actionable paths to reversing the verdict. A path is actionable if and only if there is something the agent could reasonably be expected to do to follow it. For example, if the only way to get a decision reversed is to change your parentage or your national origin, that is not actionable. By contrast, if you can get a decision reversed by correcting an error in the record or by completing some training, that would typically be actionable.
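To illustrate how this actionability constraint can be operationalized, here is a toy sketch in the spirit of the actionable-recourse literature: a greedy search for changes to mutable features that flip a linear classifier’s decision, with immutable features (such as national origin) excluded from the search. The weights, feature names, and thresholds are invented for illustration; this is not the paper’s implementation.

```python
import numpy as np

# Toy linear decision rule: approve iff w @ x + b >= 0.
# Hypothetical features: [income, training_hours, country_of_origin_flag]
w = np.array([0.5, 0.3, -1.0])
b = -2.0

# Immutable features cannot ground an actionable path: nobody can
# reasonably be expected to change their national origin.
MUTABLE = np.array([True, True, False])

def actionable_recourse(x, w, b, mutable, step=0.1, max_steps=1000):
    """Greedily nudge mutable features until the decision flips.
    Returns the per-feature change needed, or None if no actionable
    path is found within the search budget."""
    x = x.astype(float).copy()
    delta = np.zeros_like(x)
    for _ in range(max_steps):
        if w @ x + b >= 0:                  # decision is now favorable
            return delta
        gains = np.where(mutable, w, 0.0)   # zero out immutable features
        i = int(np.argmax(np.abs(gains)))   # most influential mutable feature
        if gains[i] == 0.0:
            return None                     # nothing actionable left to change
        move = step * np.sign(gains[i])
        x[i] += move
        delta[i] += move
    return None

applicant = np.array([2.0, 1.0, 1.0])   # currently denied (score = -1.7)
change = actionable_recourse(applicant, w, b, MUTABLE)
print(change)   # roughly [3.4, 0., 0.]: the extra income that flips the verdict
```

The design choice that matters here is the mutability mask: whatever the underlying model, a recourse recommendation should only ever be built from features the person could reasonably be expected to change.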
Actionable recourse recommendations are important because they enable people to engage in the long-term planning mentioned earlier. And knowing that you operate in a world that guarantees actionable recourse is even more valuable, because it gives you the confidence to move forward with plans that would otherwise appear far too risky. If you’re sure that, should things go pear-shaped, you’ll be able to switch to a contingency plan B (or plan C or plan D), you’re in a better position to embark on a new, uncertain venture. This is why algorithmic recourse isn’t just a nice-to-have but is essential to our emerging digital society.
Looking Ahead
As artificial intelligence becomes cheaper and more scalable, governments and industries will surely accelerate its implementation. The decisions prompted or even made by these algorithms will affect people significantly, for good and ill. We should expect that the Robodebt fiasco will be neither the last nor the most egregious case in which algorithms automatically ruin people’s lives. The first line of defense against bad algorithmic decision-making remains improving accuracy and refusing to allow algorithms to make decisions when they are not up to snuff. We argue that a second line of defense is to make recourse possible, actionable, and affordable.
Neither government agencies like the Australian Taxation Office nor industry actors can be relied upon to self-regulate. This is why it’s encouraging that both the European Union (through the GDPR and other policies) and the United States (through the AI Bill of Rights), among other polities, are bolstering these lines of defense. Further work is needed to ensure that independent authorities audit algorithmic decision-making under a fiduciary mandate: one that prioritizes the protection and well-being of the citizens and non-citizen residents who are affected, or even just potentially affected, by such systems.