Montreal AI Ethics Institute

Democratizing AI ethics literacy


Who to Trust, How and Why: Untangling AI Ethics Principles, Trustworthiness and Trust

January 21, 2024

🔬 Research Summary by Andreas Duenser and David M. Douglas.

Andreas Duenser is a Principal Research Scientist at CSIRO – Data61, Hobart, Australia, and is interested in the convergence of psychology and emerging technology systems to develop a deeper understanding of human behavior and cognition in a technology context and to drive innovation and adoption.

David M. Douglas is a Research Scientist at CSIRO, Brisbane, Australia, and his current research interests include responsible innovation, AI ethics, and ethics of technology.

[Original paper by Andreas Duenser and David M. Douglas]


Overview: This work presents an overview of how trust in AI and AI trustworthiness are discussed in the context of AI ethics principles. These concepts must be more clearly distinguished, and we need more empirical evidence on what contributes to people’s trusting behaviors. AI systems should also be recognized as socio-technical systems, where the people involved in designing, developing, deploying, and using the system are as important as the system itself in determining whether it is trustworthy.


Introduction

What do we really mean when we say that we ‘trust’ an AI? Trust in AI involves relying on the system itself to act as we expect and trusting the AI’s developers not to exploit the confidence we have placed in them by using their system. AI ethics principles such as explainability and transparency are often assumed to promote user trust, yet empirical evidence of how such features actually affect users’ perceptions of a system’s trustworthiness remains scarce and inconclusive. As others have argued, without recognizing these nuances, ‘trust in AI’ and ‘trustworthy AI’ risk becoming nebulous terms for any desirable feature of AI systems.

A common theme among AI ethics principles is the need for AI systems to be trustworthy. Trustworthiness is a property of the AI system, while trust is an attitude that users must grant. Users must properly calibrate their trust so that they neither distrust a trustworthy AI nor trust an untrustworthy one. Focusing on the trustworthiness of AI systems without considering the factors that influence how users develop trust in them will not achieve the goals of trustworthy AI.

Key Insights

We Need Clearer Terminology and Empirical Evidence for Trust in AI

Our brief survey of the literature mentioning trust in the context of AI ethics principles revealed the need for a more nuanced discussion of the topic, especially trust in AI and trustworthy AI. While trustworthiness is a property of the AI system, trust is an attitude that may or may not be granted by users of such technology. An automatic correspondence between user trust and AI trustworthiness is not guaranteed. To get a more comprehensive picture of the elements that may impact a person’s perception of a system’s trustworthiness (and related trust evaluation and behavior), we argue that the trustor’s attitudes towards or perceptions of the stakeholder ecosystem play a crucial, often overlooked, role.

Trust in Technology

Trust in technology differs from trust in humans. Technology cannot betray trust in the same way humans can, as humans can decide whether to uphold or betray the trust someone else has placed in them. Moral considerations and our beliefs about the abilities, benevolence, and integrity of others influence our willingness to trust and how we act when others trust us. In contrast, trust in technology depends on our perceptions of its functionality, reliability, and user-friendliness. We can think of ‘trust’ in a technology as both a reliance on the technology itself and trust in the supplier of the technology. This is called the ‘duality of trust’ in technology.

The ‘trust’ we place in technology and its suppliers may exist in multiple dimensions, depending on which factors influence our decisions to trust someone and the technology they are responsible for. Dispositional trust is based on our general willingness to depend on using technology for a particular purpose. Rational trust is the deliberate calculation of the costs and benefits of using a technology based on our perceptions of it. Affinitive (or emotional) trust may arise from our perceptions of a technology’s user-friendliness and reliability. Finally, procedural trust depends on whether an effective control system (such as laws and regulations) limits the risks we take in trusting a technology and its supplier.

For AI, one of the control systems that may encourage procedural trust is AI ethics principles. These sets of principles direct AI developers to consider particular ways that their systems may affect others, such as the potential for privacy infringements and the need for transparency in how these systems make decisions. However, empirical evidence is needed to better understand how specific AI ethics principles, combinations of principles, and the potential trade-offs or conflicts between them affect people’s trust. Certain principles may hold more significance for individuals based on their unique circumstances or experiences. Therefore, we argue that a more nuanced discussion and empirical evaluation are necessary to prevent trust and trustworthiness from becoming ambiguous, all-encompassing terms representing any positive aspects of AI systems.

Trust in AI as Socio-technical Systems

It is also important to recognize that AI technologies are not isolated from the social contexts in which they are developed and used. The people involved in developing and using an AI system, as well as those affected by how that system is used, are stakeholders in it. AI is a socio-technical system that is made up of the AI technology itself, the stakeholders associated with it, and the institutions that influence these stakeholders.

Viewing AI technologies as socio-technical systems underscores the importance of understanding how the dynamics between various stakeholders can influence how those stakeholders perceive and evaluate the trustworthiness of AI systems. It is essential to examine how relationships and dynamics among stakeholders, their adherence to AI ethics principles, and their perceived trustworthiness can affect people’s trust in AI systems, developers, and providers. This, in turn, can influence the acceptance and adoption of AI technologies.

Recognizing the significance of the broader stakeholder ecosystem emphasizes the importance of AI ethics principles that guide relationships between stakeholders. Principles such as accountability, contestability, and the theme of professional responsibility are just as crucial for fostering trustworthiness as other principles like safety, transparency, and explainability. The trustworthiness of AI systems should encompass features of the AI itself and the trustworthiness of associated stakeholders, including developers and those incorporating AI into their professional practice.

Between the lines

To bridge the gap between theoretical AI ethics principles and their practical application, we need to unravel and enhance our understanding of how these principles (which may inform the development of AI systems) align with individual perceptions of trustworthiness. Understanding how trustworthiness matters to people and the circumstances under which trustworthiness impacts trust, i.e., if and when trust is granted, is crucial. AI ethics encompasses multiple dimensions outlined by various principles, and trust is not universal but rather specific to the context. Depending on the situation, different dimensions may contribute to an individual’s evaluation of trust and their behaviors when interacting with AI. More nuanced evaluation, discussion, and language use might help avoid oversimplifying trust and trustworthiness as generic terms for the ‘good’ in AI systems. This, together with consideration of the broader socio-technical stakeholder ecosystem, is a crucial component in designing trustworthy AI systems that are trusted.



  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.