
Understanding the Effect of Counterfactual Explanations on Trust and Reliance on AI for Human-AI Collaborative Clinical Decision Making

September 21, 2023

🔬 Research Summary by Min Lee, an Assistant Professor in Computer Science at Singapore Management University, where he creates and evaluates interactive, human-centered AI systems for societal problems (e.g., health).

[Original paper by Min Hun Lee and Chong Jun Chew]


Overview: Although advanced artificial intelligence (AI) and machine learning (ML) models are increasingly being explored to assist decision-making tasks in domains such as health and bail decisions, users may place too much trust in them, even agreeing with ‘wrong’ AI outputs. This paper examines the effect of counterfactual explanations on users’ trust in and reliance on AI during a clinical decision-making task.


Introduction

Advanced artificial intelligence (AI) and machine learning (ML) models are increasingly being considered to increase the efficiency and reduce the cost of decision-making tasks across various organizations and domains (e.g., health, bail decisions, child welfare services). However, users may place too much trust in an AI/ML system and agree even with ‘wrong’ AI outputs, so that the human-AI team performs worse than either humans or AI/ML models alone.

Key Insights

What did we do? 

In this work, we contribute an empirical study that analyzes the effect of AI explanations on users’ trust and reliance on AI during clinical decision-making. Specifically, we focus on the task of assessing post-stroke survivors’ quality of motion. We conducted a within-subject experiment with seven therapists and ten laypersons to compare the effect of counterfactual explanations against feature importance explanations, one of the most widely used types of AI explanation.

  • Feature importance explanations: describe the contribution/importance of each input feature to the AI output (in this study, kinematic variables such as joint angles and the distances between joints)
  • Counterfactual explanations: describe how the inputs would need to be modified to change the AI output (e.g., how does a patient’s incorrect/abnormal motion need to change to be assessed as normal motion?); a minimal code sketch contrasting the two explanation styles follows this list
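
To make the contrast concrete, below is a minimal, illustrative Python sketch (not the authors’ implementation): a toy ‘quality of motion’ classifier over synthetic kinematic features, a feature importance explanation read from the model, and a naive one-feature counterfactual search. The feature names, the synthetic data, and the search procedure are assumptions for illustration only.

# Illustrative sketch only: toy classifier over hypothetical kinematic features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["elbow_angle", "shoulder_angle", "wrist_to_hip_distance"]  # hypothetical

# Synthetic data: label 1 = 'normal' motion, 0 = 'abnormal' motion.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Feature importance explanation: how much each input feature contributes
# to the model's predictions overall.
for name, importance in zip(feature_names, clf.feature_importances_):
    print(f"{name}: importance {importance:.2f}")

# Counterfactual explanation (naive search): for a motion predicted 'abnormal',
# how much would a single feature need to increase to flip the prediction?
x = np.array([[-1.0, 0.2, -0.5]])
print("original prediction:", clf.predict(x)[0])  # expected: 0 ('abnormal')
for i, name in enumerate(feature_names):
    for delta in np.linspace(0.1, 3.0, 30):
        x_cf = x.copy()
        x_cf[0, i] += delta
        if clf.predict(x_cf)[0] == 1:
            print(f"counterfactual: increasing {name} by {delta:.1f} flips the prediction to 'normal'")
            break

In the study itself, feature importance showed which kinematic variables drove the AI’s assessment, while counterfactual explanations showed how a patient’s motion would need to change; the sketch mirrors that contrast at a toy scale.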

One potential reason for overreliance on AI is that humans rarely engage in analytical thinking about AI outputs. This work hypothesizes that reviewing counterfactual explanations prompts users to think critically about how changing the AI’s inputs would change its output, improving their analytical review of AI outputs and reducing overreliance on AI.

What did we learn?

  • When ‘right’ AI outputs were presented, human+AI performance with both feature importance and counterfactual explanations was higher than that of humans alone
  • When ‘wrong’ AI outputs were presented, human+AI performance with both feature importance and counterfactual explanations was lower than that of humans alone
  • Counterfactual explanations reduced overreliance on ‘wrong’ AI outputs by 21% compared to feature importance explanations (a sketch of how overreliance can be quantified follows this list)
  • Domain experts (i.e., therapists) showed less performance degradation and less overreliance on ‘wrong’ AI outputs than laypersons while using both feature importance and counterfactual explanations
  • Both experts and laypersons reported higher subjective usability scores (‘usefulness,’ ‘less effort & frustration,’ ‘trust,’ and ‘usage intent’) for feature importance explanations than for counterfactual explanations
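
For concreteness, here is a minimal sketch of one common way to operationalize overreliance in this kind of study, consistent with the summary’s framing of overreliance as agreement with ‘wrong’ AI outputs. The Trial structure, the helper function, and the example numbers are hypothetical and are not taken from the paper.

# Illustrative sketch only: overreliance as the share of 'wrong'-AI trials
# in which the user still agreed with the AI. Numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Trial:
    ai_correct: bool   # was the AI output 'right'?
    user_agreed: bool  # did the user follow the AI output?

def overreliance_rate(trials: list[Trial]) -> float:
    """Share of 'wrong'-AI trials in which the user still agreed with the AI."""
    wrong = [t for t in trials if not t.ai_correct]
    if not wrong:
        return 0.0
    return sum(t.user_agreed for t in wrong) / len(wrong)

# Hypothetical comparison of two explanation conditions on 'wrong'-AI trials.
feature_importance_trials = [Trial(False, True), Trial(False, True), Trial(False, True), Trial(False, False)]
counterfactual_trials = [Trial(False, True), Trial(False, True), Trial(False, False), Trial(False, False)]
print(overreliance_rate(feature_importance_trials))  # 0.75
print(overreliance_rate(counterfactual_trials))      # 0.5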

Between the lines

Implications: Our work shows that providing AI explanations does not necessarily lead to improved human-AI collaborative decision-making. This work provides new insights into:

1) the potential of counterfactual explanations to improve analytical review of AI outputs and reduce overreliance on ‘wrong’ AI outputs, at the cost of increased cognitive burden;

2) a gap between users’ perceived benefits of an AI system and its actual trustworthiness/usefulness (e.g., improving performance by relying on ‘right’ outputs).

Please check our paper for the details of this work (link). If you are interested in further discussing this work or collaborating in this space, please contact Min Lee (link).

Citation Format: Min Hun Lee and Chong Jun Chew. 2023. Understanding the Effect of Counterfactual Explanations on Trust and Reliance on AI for Human-AI Collaborative Clinical Decision Making. Proc. ACM Hum.-Comput. Interact. 7, CSCW2, Article 369 (October 2023), 22 pages. https://doi.org/10.1145/3610218

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
