Towards User-Guided Actionable Recourse

December 7, 2023

🔬 Research Summary by Jayanth Yetukuri, a final-year Ph.D. student at UCSC advised by Professor Yang Liu. His research focuses on improving the trustworthiness of Machine Learning models.

[Original paper by Jayanth Yetukuri, Ian Hardy, and Yang Liu]


Overview: Recent years have seen a proliferation of Machine Learning systems in several critical decision-making domains. Actionable Recourse provides the actions an individual adversely affected by a model’s decision can take to obtain a favorable outcome. This paper focuses on allowing such individuals to steer the recourse generation process by capturing their individual preferences.


Introduction

Actionable Recourse is a list of actions an individual can take to obtain a desired outcome from a fixed Machine Learning model. In several domains, such as lending, insurance, resource allocation, and hiring, decision systems are expected to suggest recourses to maintain trust; in such scenarios, it is critical to ensure the actionability of a recourse (the viability of taking the suggested action), otherwise the suggestions are pointless.

Existing research focuses on providing feasible recourses, yet comprehensive literature on understanding and incorporating user preferences within the recourse generation mechanism is lacking. Efforts to elicit user preferences include recent work by De Toni et al. (2022), who provide an interactive human-in-the-loop approach in which a user continuously interacts with the system. However, learning user preferences by asking the user to select one of several partial interventions is essentially a derivative of providing a diverse set of recourse candidates.

We argue that the inherent problem of feasibility can be solved more accurately by capturing and understanding an affected individual’s (say, Alice’s) recourse preferences and adhering to her constraints. These can range from hard rules, such as being unable to bring a co-applicant, to soft rules, such as a hesitation to reduce the loan amount, which should not be interpreted as unwillingness.

Key Insights

Motivated by the above considerations, we capture soft user preferences and hard constraints and identify recourse based on local desires without affecting the success rate of identifying recourse. For example, suppose Alice prefers to incur 80% of the recourse “cost” through loan duration and only 20% through the loan amount, meaning she prefers a recourse with only a minor reduction in the loan amount. Such a recourse enables Alice to obtain the benefits of a loan on her terms and can easily be calculated according to her desire. Hence, user-preferred recourse is obtained by solving a custom optimization for individual preferences.
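As a rough illustration (not the paper’s exact formulation), the share of recourse cost borne by each feature can be computed and compared against a user’s preferred split; the feature names, cost functions, and normalization constants below are assumptions made for this sketch.

```python
# Hypothetical sketch: fractional cost ratio of a candidate recourse vs. a user's preferred split.
# All names, cost functions, and normalization constants here are illustrative assumptions.

def fractional_cost_ratio(change, per_feature_cost):
    """change: {feature: delta}; per_feature_cost: {feature: fn(delta) -> float}.
    Returns the share of total recourse cost attributable to each changed feature."""
    costs = {f: per_feature_cost[f](d) for f, d in change.items()}
    total = sum(costs.values()) or 1.0
    return {f: c / total for f, c in costs.items()}

# Alice's preferred cost split and one candidate recourse
preferred = {"loan_duration": 0.8, "loan_amount": 0.2}
candidate = {"loan_duration": +12, "loan_amount": -1000}     # e.g., months, dollars
cost_fns = {
    "loan_duration": lambda d: abs(d) / 60.0,     # assumed 60-month normalization
    "loan_amount": lambda d: abs(d) / 10000.0,    # assumed $10k normalization
}

ratio = fractional_cost_ratio(candidate, cost_fns)
# A candidate whose ratio is close to `preferred` better respects Alice's soft preferences.
```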

User preferences can be captured via soft constraints in three simple forms: (i) scoring continuous features, (ii) bounding feature values, and (iii) ranking categorical features (a possible representation is sketched below).
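One way these three forms, together with hard constraints, might be represented is sketched here; the field names and structure are illustrative assumptions rather than the paper’s interface.

```python
# Hypothetical preference structure; field names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class UserPreferences:
    # (i) scoring continuous features: higher score = more willing to change this feature
    feature_scores: Dict[str, float] = field(default_factory=dict)
    # (ii) bounding feature values: allowed (min, max) range per feature
    feature_bounds: Dict[str, Tuple[float, float]] = field(default_factory=dict)
    # (iii) ranking categorical features: acceptable values, most preferred first
    categorical_rankings: Dict[str, List[str]] = field(default_factory=dict)
    # hard constraints: features the user cannot act on at all
    immutable: List[str] = field(default_factory=list)

prefs = UserPreferences(
    feature_scores={"loan_duration": 0.8, "loan_amount": 0.2},
    feature_bounds={"loan_amount": (5000.0, 20000.0)},
    categorical_rankings={"loan_purpose": ["education", "car", "furniture"]},
    immutable=["has_coapplicant"],
)
```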

These preferences can be embedded into a gradient-based recourse identification approach to design User Preferred Actionable Recourse (UP-AR). UP-AR consists of two stages. The first stage generates a candidate recourse by following a connected, gradient-based iterative approach. The second stage then improves the redundancy metric of the generated recourse for better actionability.
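A minimal sketch of the two-stage idea follows, assuming a differentiable scoring model `f`, its gradient `grad_f`, and per-feature preference weights; the actual UP-AR update rule and redundancy procedure are defined in the paper.

```python
import numpy as np

def stage1_generate(f, grad_f, x, pref_weights, step=0.05, max_iter=500, threshold=0.5):
    """Stage 1 (sketch): take small, preference-weighted gradient steps from x until the
    model score crosses the decision threshold, producing a connected path to a candidate."""
    x = np.array(x, dtype=float)
    for _ in range(max_iter):
        if f(x) >= threshold:
            break
        # larger steps along features the user is more willing to change
        x = x + step * pref_weights * grad_f(x)
    return x

def stage2_reduce_redundancy(f, x_orig, x_rec, threshold=0.5):
    """Stage 2 (sketch): revert each changed feature to its original value whenever the
    favorable outcome still holds, so the final recourse changes as few features as possible."""
    x = np.array(x_rec, dtype=float)
    for i in range(len(x)):
        trial = x.copy()
        trial[i] = x_orig[i]
        if f(trial) >= threshold:
            x = trial
    return x
```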

UP-AR holistically compares favorably with its counterparts. Critically, it respects feature constraints (which are fundamental to actionable recourse) while maintaining significantly lower redundancy and sparsity, indicating that it tends to change only the features that are necessary. Its speed makes it tractable for real-world use, while its proximity values show that it recovers relatively low-cost recourse. These results highlight the promise of UP-AR as a performant, low-cost option for calculating recourse when user preferences are paramount, with consistent improvements across all performance metrics.

Between the lines

In this study, we propose capturing different forms of user preferences and an optimization function that generates actionable recourse adhering to such constraints. We further provide an approach to generate a connected recourse guided by the user, and we show how UP-AR adheres to soft constraints by evaluating user satisfaction via a fractional cost ratio. We emphasize the need to capture various user preferences and to communicate with the user in a comprehensible form. This work motivates further research on how truthful reporting of preferences can improve overall user satisfaction.

