
Humans, AI, and Context: Understanding End-Users’ Trust in a Real-World Computer Vision Application

June 16, 2023

🔬 Research Summary by Sunnie S. Y. Kim, a PhD student in computer science at Princeton University working on AI transparency and explainability to help people better understand and interact with AI systems.

[Original paper by Sunnie S. Y. Kim, Elizabeth Anne Watkins, Olga Russakovsky, Ruth Fong, and Andrés Monroy-Hernández]


Overview: Trust is a key factor in human-AI interaction. This paper provides a holistic and nuanced understanding of trust in AI by describing multiple aspects of trust, and the factors that influence each, through a qualitative case study of a real-world AI application.


Introduction

Appropriate trust is crucial for safe and effective interactions with AI systems. However, there is a lack of empirical studies investigating which factors influence trust in AI, and how they do so, in real-world contexts. Most research investigates how a single factor affects trust in controlled lab settings. While such studies provide valuable insights into the relationship between trust and the factor of interest, their designs do not capture the contextual aspects of trust or allow new trust-influencing factors to be discovered. In this paper, we address these two gaps and deepen the understanding of trust in AI through a qualitative case study of a real-world AI application.

Concretely, we interviewed 20 end-users of a popular, AI-based app for bird identification. We inquired about their trust in the app from many angles, asking questions about their context of app use, their perceptions of and experience with the app, and their intention to use it in hypothetical, high-stakes scenarios. We then analyzed the collected data with the widely accepted trust model of Mayer et al. In the next section, we describe participants’ trust in AI in three parts: (1) trustworthiness perception and trust attitude, (2) AI output acceptance, and (3) AI adoption.

Key insights

(1) Trustworthiness perception and trust attitude

Overall, participants assessed the app to be trustworthy and trusted it. We drew this conclusion based on participants’ responses regarding the app’s ability, integrity, and benevolence—the three factors of perceived trustworthiness in Mayer et al.’s trust model. Participants assessed that the app possesses all three, based on their positive prior experience with it, its popularity, and the domain’s and the developers’ good reputation.

(2) AI output acceptance

However, we observed a more complex picture of trust when we examined participants’ app output acceptance decisions. Participants did not accept the app’s outputs as true in every usage instance. To the extent possible, they carefully assessed the outputs, using their knowledge about the domain and engaging in verification behaviors. When unable to verify, some participants disregarded the outputs, even though they described the app as trustworthy. 

(3) AI adoption

Finally, we examined participants’ AI adoption decision-making by asking whether they would use the app in hypothetical, high-stakes scenarios with health-related and financial outcomes. We found that while participants always used the app in their actual use setting, they made different decisions for the high-stakes scenarios based on various factors: the app’s ability, familiarity, and ease of use (AI-related factors); their ability to assess the app’s outputs and use the app (human-related factors); and finally, task difficulty, perceived risks and benefits of the situation, and other situational characteristics (context-related factors).

Trust in AI is multifaceted and influenced by many factors

In short, we found that end-users’ trust relationship with AI is complex. Overall, participants found the app trustworthy and trusted it. Still, they carefully assessed the correctness of individual outputs and decided against app adoption in certain high-stakes scenarios. This discrepancy illustrates that trust is a multifaceted construct that must be approached holistically. To get a full and accurate picture of trust, it is crucial to examine both general aspects, such as trustworthiness perceptions and trust attitudes, and instance-specific aspects, such as AI output acceptance and adoption decisions. 

We also highlight that many factors influence trust in AI. In the table below, we organize the factors we identified based on whether they relate to the human trustor, the AI trustee, or the context. Human-related factors include domain knowledge and other factors influenced by it, such as the ability to assess the AI’s outputs, the ability to assess the AI’s ability, and the ability to use the AI. AI-related factors include internal factors such as ability, integrity, and benevolence; external factors such as popularity; and user-dependent factors such as familiarity and ease of use. Context-related factors include task difficulty, perceived risks and benefits of the situation, other situational characteristics, and the reputation of the domain and the developers. While this is not a complete set of the factors that can influence trust in AI, only those we observed in our case study, we hope it helps researchers and practitioners anticipate what can influence trust in AI in their context of interest.

  • Human-related: Domain knowledge; Ability to assess the AI’s outputs; Ability to assess the AI’s ability; Ability to use the AI

  • AI-related: Ability; Integrity; Benevolence; Popularity; Familiarity; Ease of use

  • Context-related: Task difficulty; Perceived risks and benefits; Situational characteristics; Domain’s reputation; Developer’s reputation

Between the lines

Our qualitative case study revealed a comprehensive picture of real end-users’ trust in AI, adding nuance to existing understandings. Yet, much remains to be explored. More research is needed on how trust is initially developed and changes over time, and how trust in AI varies across stakeholders and user groups. In doing so, we urge the field to move from studying one or a few factors in lab settings with hypothetical end-users to studying multiple factors in real-world settings with actual end-users. This shift is necessary for understanding the interactions between factors and the contextual influences on trust. We hope our paper, especially the way in which we delineated trust from its antecedents, context, and products, and the trust-influencing factors we identified, aids future research on other types of AI applications.


