Montreal AI Ethics Institute

Democratizing AI ethics literacy


Humans, AI, and Context: Understanding End-Users’ Trust in a Real-World Computer Vision Application

June 16, 2023

🔬 Research Summary by Sunnie S. Y. Kim, a PhD student in computer science at Princeton University working on AI transparency and explainability to help people better understand and interact with AI systems.

[Original paper by Sunnie S. Y. Kim, Elizabeth Anne Watkins, Olga Russakovsky, Ruth Fong, and Andrés Monroy-Hernández]


Overview: Trust is a key factor in human-AI interaction. This paper provides a holistic and nuanced understanding of trust in AI by describing multiple aspects of trust and what factors influenced each in a qualitative case study of a real-world AI application.


Introduction

Appropriate trust is crucial for safe and effective interactions with AI systems. However, there are few empirical studies of which factors influence trust in AI, and how, in real-world contexts. Most research investigates how a single factor affects trust in controlled lab settings. While such studies provide valuable insights into the relationship between trust and the factor of interest, their design does not allow for capturing the contextual aspects of trust or discovering new trust-influencing factors. In this paper, we address these two gaps and deepen the understanding of trust in AI through a qualitative case study of a real-world AI application.

Concretely, we interviewed 20 end-users of a popular, AI-based bird-identification app. We inquired about their trust in the app from many angles, asking about their context of app use, their perception of and experience with the app, and their intention to use it in hypothetical, high-stakes scenarios. We then analyzed the collected data with the widely accepted trust model of Mayer et al. In the next section, we describe participants’ trust in AI in three parts: (1) trustworthiness perception and trust attitude, (2) AI output acceptance, and (3) AI adoption.

Key insights

(1) Trustworthiness perception and trust attitude

Overall, participants assessed the app to be trustworthy and trusted it. We drew this conclusion based on participants’ responses regarding the app’s ability, integrity, and benevolence—the three factors of perceived trustworthiness in Mayer et al.’s trust model. Participants assessed that the app possesses all three, based on their positive prior experience with it, its popularity, and the domain’s and the developers’ good reputation.

(2) AI output acceptance

However, we observed a more complex picture of trust when we examined participants’ app output acceptance decisions. Participants did not accept the app’s outputs as true in every usage instance. To the extent possible, they carefully assessed the outputs, using their knowledge about the domain and engaging in verification behaviors. When unable to verify, some participants disregarded the outputs, even though they described the app as trustworthy. 

(3) AI adoption

Finally, we examined participants’ AI adoption decision-making by asking whether they would use the app in hypothetical, high-stakes scenarios with health-related and financial outcomes. We found that while participants always used the app in their actual use setting, they made different decisions for the high-stakes scenarios based on various factors: the app’s ability, familiarity, and ease of use (AI-related factors); their ability to assess the app’s outputs and use the app (human-related factors); and finally, task difficulty, perceived risks and benefits of the situation, and other situational characteristics (context-related factors).

Trust in AI is multifaceted and influenced by many factors

In short, we found that end-users’ trust relationship with AI is complex. Overall, participants found the app trustworthy and trusted it. Still, they carefully assessed the correctness of individual outputs and decided against app adoption in certain high-stakes scenarios. This discrepancy illustrates that trust is a multifaceted construct that must be approached holistically. To get a full and accurate picture of trust, it is crucial to examine both general aspects, such as trustworthiness perceptions and trust attitudes, and instance-specific aspects, such as AI output acceptance and adoption decisions. 

We also highlight that many factors influence trust in AI. In the table below, we organize the factors we identified by whether they relate to the human trustor, the AI trustee, or the context. Human-related factors include domain knowledge and the abilities it underpins: the ability to assess the AI’s outputs, the ability to assess the AI’s ability, and the ability to use the AI. AI-related factors include internal factors such as ability, integrity, and benevolence; external factors such as popularity; and user-dependent factors such as familiarity and ease of use. Context-related factors include task difficulty, the perceived risks and benefits of the situation, other situational characteristics, and the reputation of the domain and the developers. While this is not a complete set of the factors that can influence trust in AI, but rather what we observed in our case study, we hope it helps researchers and practitioners anticipate what can influence trust in AI in their context of interest.

Human-related: Domain knowledge; ability to assess the AI’s outputs; ability to assess the AI’s ability; ability to use the AI

AI-related: Ability, integrity, and benevolence (internal); popularity (external); familiarity and ease of use (user-dependent)

Context-related: Task difficulty; perceived risks and benefits; situational characteristics; domain’s reputation; developer’s reputation
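For practitioners who want to audit trust-influencing factors in their own context, the taxonomy above can be sketched as a simple checklist data structure. This is our own illustration, not code from the paper; the names and function are hypothetical, while the groups and factors come from the study.

```python
# Illustrative encoding of the trust-influencing factors identified in the
# case study, grouped by whether they relate to the human trustor, the AI
# trustee, or the context of use. (Structure and names are ours.)
TRUST_FACTORS = {
    "human": [
        "domain knowledge",
        "ability to assess the AI's outputs",
        "ability to assess the AI's ability",
        "ability to use the AI",
    ],
    "ai": [
        "ability", "integrity", "benevolence",  # internal factors
        "popularity",                           # external factor
        "familiarity", "ease of use",           # user-dependent factors
    ],
    "context": [
        "task difficulty",
        "perceived risks and benefits",
        "situational characteristics",
        "domain's reputation",
        "developer's reputation",
    ],
}

def checklist(group: str) -> list[str]:
    """Return the factors in one group for use as an audit checklist."""
    return TRUST_FACTORS[group]

print(len(checklist("ai")))  # prints 6
```

A researcher studying a new AI application could walk through each group, asking which factors apply in that context and which new ones emerge.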

Between the lines

Our qualitative case study revealed a comprehensive picture of real end-users’ trust in AI, adding nuance to existing understandings. Yet much remains to be explored. More research is needed on how trust initially develops and changes over time, and on how trust in AI varies across stakeholders and user groups. To that end, we urge the field to move from studying one or a few factors in lab settings with hypothetical end-users to studying multiple factors in real-world settings with actual end-users. This shift is necessary for understanding the interactions between factors and the contextual influences on trust. We hope our paper, especially the way we delineated trust from its antecedents, context, and products, and the trust-influencing factors we identified, aids future research on other types of AI applications.
