
Towards User-Guided Actionable Recourse

December 7, 2023

🔬 Research Summary by Jayanth Yetukuri, a final-year Ph.D. student at UCSC advised by Professor Yang Liu; his research focuses on improving the trustworthiness of Machine Learning models.

[Original paper by Jayanth Yetukuri, Ian Hardy, and Yang Liu]


Overview: Recent years have seen a proliferation of Machine Learning systems in several critical decision-making domains. Actionable recourse gives an individual adversely affected by a model’s decision the actions needed to obtain a favorable outcome. This paper focuses on letting such individuals steer the recourse generation process by capturing their individual preferences.


Introduction

Actionable Recourse is a list of actions an individual can take to obtain a desired outcome from a fixed Machine Learning model. Domains such as lending, insurance, resource allocation, and hiring are often required to suggest recourses to maintain trust in the decision system. In such scenarios, it is critical to ensure the actionability (the viability of taking a suggested action) of a recourse; otherwise, the suggestions are pointless.

Existing research focuses on providing feasible recourses, yet the literature lacks a comprehensive treatment of understanding and incorporating user preferences within the recourse generation mechanism. Efforts to elicit user preferences include recent work by De Toni et al. (2022), who provide an interactive human-in-the-loop approach in which a user continuously interacts with the system. However, learning user preferences by asking users to select one of several partial interventions is essentially a derivative of providing a diverse set of recourse candidates.

We argue that the inherent problem of feasibility can be solved more accurately by capturing and understanding Alice’s recourse preferences and adhering to her constraints. These constraints range from Hard Rules, such as being unable to bring a co-applicant, to Soft Rules, such as hesitation to reduce the loan amount, which should not be interpreted as unwillingness.

Key Insights

Motivated by the above considerations, we capture soft user preferences and hard constraints, and we identify recourse based on these local desires without affecting the success rate of identifying recourse. For example, suppose Alice prefers to incur 80% of the recourse “cost” through the loan duration and only 20% through the loan amount, meaning she prefers a recourse with only a minor reduction in the loan amount. Such a recourse lets Alice obtain the benefits of a loan on her own terms and can be computed according to her stated desire. Hence, user-preferred recourse is obtained by solving a custom optimization tailored to individual preferences.
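To make the cost-splitting idea concrete, here is a minimal sketch in Python of a preference-weighted recourse cost. The inverse-weighting scheme, variable names, and feature scales are our illustrative assumptions, not the paper’s exact formulation: a low preferred cost share simply makes changes to that feature expensive.

```python
import numpy as np

def weighted_recourse_cost(x, x_prime, pref_share, scale):
    """Cost of moving from instance x to candidate recourse x_prime.

    pref_share[i] is the fraction of total recourse cost the user is
    willing to incur on feature i (e.g., 0.8 for loan duration, 0.2 for
    loan amount). Features with a small share are penalized more,
    steering the optimizer away from changing them.
    """
    deltas = np.abs(x_prime - x) / scale      # scale-free feature changes
    penalties = 1.0 / (pref_share + 1e-8)     # reluctance -> high penalty
    return float(np.sum(penalties * deltas))

x = np.array([36.0, 10_000.0])     # [loan duration (months), loan amount ($)]
scale = np.array([12.0, 1_000.0])  # rough per-feature unit of change
pref = np.array([0.8, 0.2])        # Alice's preferred cost split

# One unit of duration change is far cheaper for Alice than one of amount:
print(weighted_recourse_cost(x, np.array([48.0, 10_000.0]), pref, scale))  # 1.25
print(weighted_recourse_cost(x, np.array([36.0, 9_000.0]), pref, scale))   # 5.0
```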

User preferences can be captured via soft constraints in three simple forms: (i) scoring continuous features, (ii) bounding feature values, and (iii) ranking categorical features.
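A simple way to represent these three preference forms together might look like the following sketch; the container and field names are illustrative choices of ours, not the paper’s API.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class UserPreferences:
    """Container for the three soft-preference forms described above."""
    # (i) scoring continuous features: higher score = more willing to change
    feature_scores: Dict[str, float] = field(default_factory=dict)
    # (ii) bounding feature values: acceptable (low, high) range per feature
    feature_bounds: Dict[str, Tuple[float, float]] = field(default_factory=dict)
    # (iii) ranking categorical features: categories ordered by preference
    category_rankings: Dict[str, List[str]] = field(default_factory=dict)

alice = UserPreferences(
    feature_scores={"loan_duration": 0.8, "loan_amount": 0.2},
    feature_bounds={"loan_amount": (8_000.0, 10_000.0)},
    category_rankings={"loan_purpose": ["car", "education", "business"]},
)
```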

These preferences can be embedded into a gradient-based recourse identification approach to design User Preferred Actionable Recourse (UP-AR). UP-AR consists of two stages: the first generates a candidate recourse by following a connected, gradient-based iterative approach; the second then improves the redundancy metric of the generated recourse for better actionability.
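The following is a highly simplified sketch of this two-stage idea for a toy logistic model, assuming preference scores scale each feature’s gradient step and redundancy is reduced by reverting unnecessary changes. This is our construction for illustration; the authors’ actual optimization and redundancy metric differ in detail.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def stage1_candidate(x, w, b, pref, step=0.1, max_iter=1000, thresh=0.5):
    """Stage 1: iteratively step along the model's gradient toward the
    favorable class, scaling each feature's movement by the user's
    willingness to change it (a connected path of small steps)."""
    x_t = x.astype(float).copy()
    for _ in range(max_iter):
        s = sigmoid(w @ x_t + b)
        if s >= thresh:                     # desired outcome reached
            break
        grad = s * (1.0 - s) * w            # gradient of the score w.r.t. x
        x_t += step * pref * grad / (np.linalg.norm(grad) + 1e-12)
    return x_t

def stage2_prune(x, x_r, w, b, thresh=0.5):
    """Stage 2: reduce redundancy by reverting each changed feature to its
    original value whenever the recourse still succeeds without it."""
    x_p = x_r.copy()
    for i in np.argsort(-np.abs(x_p - x)):  # try largest changes first
        trial = x_p.copy()
        trial[i] = x[i]
        if sigmoid(w @ trial + b) >= thresh:
            x_p = trial
    return x_p

w, b = np.array([0.9, -0.4]), -0.2          # toy fixed model
x = np.array([-1.0, 1.0])                   # denied applicant
pref = np.array([0.8, 0.2])                 # per-feature willingness to change
x_recourse = stage2_prune(x, stage1_candidate(x, w, b, pref), w, b)
print(x_recourse, sigmoid(w @ x_recourse + b))
```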

UP-AR holistically performs favorably compared to its counterparts. Critically, it respects feature constraints (which are fundamental to actionable recourse) while maintaining significantly low redundancy and sparsity, indicating that it tends to change only the necessary features. Its speed makes it tractable for real-world use, while its proximity values show that it recovers relatively low-cost recourses. These results highlight the promise of UP-AR as a performant, low-cost option for calculating recourse when user preferences are paramount; UP-AR shows consistent improvements across all performance metrics.

Between the lines

In this study, we propose capturing different forms of user preferences and an optimization function that generates actionable recourse adhering to such constraints. We further provide an approach to generate a connected recourse guided by the user. We show how UP-AR adheres to soft constraints by evaluating user satisfaction via a fractional cost ratio. We emphasize the need to capture various user preferences and to communicate with the user in a comprehensible form. This work motivates further research on how truthful reporting of preferences can improve overall user satisfaction.
