
Understanding the Effect of Counterfactual Explanations on Trust and Reliance on AI for Human-AI Collaborative Clinical Decision Making

September 21, 2023

🔬 Research Summary by Min Lee, an Assistant Professor in Computer Science at Singapore Management University, where he creates and evaluates interactive, human-centered AI systems for societal problems (e.g., health).

[Original paper by Min Hun Lee and Chong Jun Chew]


Overview: Although advanced artificial intelligence (AI) and machine learning (ML) models are increasingly being explored to assist in various decision-making tasks (e.g., health, bail decisions), users may place too much trust in them, even when AI outputs are ‘wrong.’ This paper explores the effect of counterfactual explanations on users’ trust in and reliance on AI during a clinical decision-making task.


Introduction

Advanced artificial intelligence (AI) and machine learning (ML) models are increasingly being considered as a way to increase efficiency and reduce the cost of decision-making tasks across various organizations and domains (e.g., health, bail decisions, child welfare services). However, users might place too much trust in an AI/ML system and agree even with ‘wrong’ AI outputs, leading to worse performance than humans or AI/ML models would achieve alone.

Key Insights

What did we do? 

In this work, we contribute an empirical study that analyzes the effect of AI explanations on users’ trust in and reliance on AI during clinical decision-making. Specifically, we focus on the task of assessing post-stroke survivors’ quality of motion. We conducted a within-subject experiment with seven therapists and ten laypersons to compare counterfactual explanations with feature importance explanations, one of the most widely used types of AI explanation.

  • Feature importance: describes the contribution/importance of each input feature to an AI output (e.g., kinematic variables such as joint angles and distances between joints, in the context of this study)
  • Counterfactual explanations: describe how the inputs could be modified to change an AI output in a certain way (e.g., how would a patient’s incorrect/abnormal motion need to change to be classified as a normal motion?); a minimal sketch of both explanation types appears below
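
To make the contrast concrete, here is a minimal, hypothetical sketch in Python. It is not the authors’ implementation: the feature names, synthetic data, and greedy single-feature search are illustrative assumptions. It trains a toy classifier on made-up “kinematic” features, computes permutation-based feature importance with scikit-learn, and then searches for a simple counterfactual.

```python
# Minimal sketch (NOT the paper's implementation): contrasts feature
# importance with a counterfactual explanation on toy kinematic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
# Hypothetical kinematic variables, loosely inspired by the study's setting.
feature_names = ["elbow_angle", "shoulder_angle", "wrist_to_shoulder_dist"]

# Toy data: class 1 ("normal motion") when a weighted sum of the first
# two kinematic variables is positive.
X = rng.normal(size=(300, 3))
y = (X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

# Feature importance: how much each input feature contributes to the
# model's predictions (computed here via permutation importance).
imp = permutation_importance(clf, X, y, n_repeats=20, random_state=0)
for name, score in zip(feature_names, imp.importances_mean):
    print(f"importance of {name}: {score:.3f}")

# Counterfactual explanation: the smallest single-feature change that
# flips an "abnormal" (class 0) prediction to "normal" (class 1).
x = X[y == 0][0]
for j, name in enumerate(feature_names):
    for delta in np.linspace(0.1, 3.0, 30):
        x_cf = x.copy()
        x_cf[j] += delta
        if clf.predict(x_cf.reshape(1, -1))[0] == 1:
            print(f"counterfactual: increase {name} by {delta:.2f}")
            break
    else:
        continue  # no flip found for this feature; try the next one
    break  # report only the first counterfactual found
```

A real counterfactual method would optimize for minimal, plausible changes across multiple features; the naive search above is only meant to show the question a counterfactual answers: what change to the input would flip the output?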

One potential reason for overreliance on AI might be that humans rarely engage in analytical thinking about AI outputs. This work hypothesizes that reviewing counterfactual explanations prompts users to think critically about how changing an AI’s inputs would change its output, improving their analytical review of AI outputs and reducing overreliance on AI.

What did we learn?

  • When ‘right’ AI outputs were presented, human+AI performance with both feature importance and counterfactual explanations was higher than that of humans alone
  • When ‘wrong’ AI outputs were presented, human+AI performance with both feature importance and counterfactual explanations was lower than that of humans alone
  • Counterfactual explanations reduced overreliance on ‘wrong’ AI outputs by 21% compared to feature importance explanations
  • Domain experts (i.e., therapists) showed less performance degradation and overreliance on ‘wrong’ AI outputs than laypersons with both explanation types
  • Both experts and laypersons gave feature importance explanations higher subjective usability ratings (‘usefulness,’ ‘less effort & frustration,’ ‘trust,’ and ‘usage intent’) than counterfactual explanations.

Between the lines

Implications: Our work shows that providing AI explanations does not necessarily improve human-AI collaborative decision-making. This work provides new insights into:

1) the potential of counterfactual explanations to improve analytical review of AI outputs and reduce overreliance on ‘wrong’ AI outputs, at the cost of additional cognitive burden; and

2) a gap between users’ perceived benefits of an AI system and its actual trustworthiness/usefulness (e.g., improving performance while relying on ‘right’ outcomes).

Please see our paper (link) for the details of this work. If you are interested in discussing this work further or collaborating in this space, please contact Min Lee (link).

Citation Format: Min Hun Lee and Chong Jun Chew. 2023. Understanding the Effect of Counterfactual Explanations on Trust and Reliance on AI for Human-AI Collaborative Clinical Decision Making. Proc. ACM Hum.-Comput. Interact. 7, CSCW2, Article 369 (October 2023), 22 pages. https://doi.org/10.1145/3610218
