Montreal AI Ethics Institute

Who to Trust, How and Why: Untangling AI Ethics Principles, Trustworthiness and Trust

January 21, 2024

🔬 Research Summary by Andreas Duenser and David M. Douglas.

Andreas Duenser is a Principal Research Scientist at CSIRO – Data61, Hobart, Australia, and is interested in the convergence of psychology and emerging technology systems to develop a deeper understanding of human behavior and cognition in a technology context and to drive innovation and adoption.

David M. Douglas is a Research Scientist at CSIRO, Brisbane, Australia, and his current research interests include responsible innovation, AI ethics, and ethics of technology.

[Original paper by Andreas Duenser and David M. Douglas]


Overview: This work presents an overview of how trust in AI and AI trustworthiness are discussed in the context of AI ethics principles. These concepts must be more clearly distinguished, and we need more empirical evidence on what contributes to people’s trusting behaviors. AI systems should also be recognized as socio-technical systems, where the people involved in designing, developing, deploying, and using the system are as important as the system for determining whether it is trustworthy.


Introduction

What do we really mean when we say that we ‘trust’ an AI? Trust in AI involves both relying on the system itself to act as we expect and trusting the AI’s developers not to exploit the confidence we have placed in them by using their system. AI ethics principles such as explainability and transparency are often assumed to promote user trust, yet empirical evidence of how such features actually affect users’ perceptions of a system’s trustworthiness remains sparse and inconclusive. As others have argued, without recognizing these nuances, ‘trust in AI’ and ‘trustworthy AI’ risk becoming nebulous terms for any desirable feature of AI systems.

A common theme among AI ethics principles is the need for AI systems to be trustworthy. Trustworthiness is a property of the AI system, while trust is an attitude that users may or may not grant. Users must properly calibrate their trust so that they neither distrust a trustworthy AI nor trust an untrustworthy one. Focusing on the trustworthiness of AI systems without considering the factors that influence how users develop trust in them will not achieve the goals of trustworthy AI.

Key Insights

We Need Clearer Terminology and Empirical Evidence for Trust in AI

Our brief survey of the literature mentioning trust in the context of AI ethics principles revealed the need for a more nuanced discussion of the topic, particularly of the distinction between trust in AI and trustworthy AI. While trustworthiness is a property of the AI system, trust is an attitude that users of the technology may or may not grant; user trust does not automatically follow from AI trustworthiness. To build a more comprehensive picture of what shapes a person’s perception of a system’s trustworthiness (and the trust evaluations and behaviors that follow), we argue that the trustor’s attitudes towards, and perceptions of, the wider stakeholder ecosystem play a crucial and often overlooked role.

Trust in Technology

Trust in technology differs from trust in humans. Technology cannot betray trust in the same way humans can, as humans can decide whether to uphold or betray the trust someone else has placed in them. Moral considerations and our beliefs about the abilities, benevolence, and integrity of others influence our willingness to trust and how we act when others trust us. In contrast, trust in technology depends on our perceptions of its functionality, reliability, and user-friendliness. We can think of ‘trust’ in a technology as both a reliance on the technology itself and trust in the supplier of the technology. This is called the ‘duality of trust’ in technology.

The ‘trust’ we place in technology and its suppliers may exist in multiple dimensions, depending on which factors influence our decisions to trust someone and the technology they are responsible for. Dispositional trust is based on our general willingness to rely on technology for a particular purpose. Rational trust is the deliberate calculation of the costs and benefits of using a technology based on our perceptions of it. Affinitive (or emotional) trust may arise from our perceptions of a technology’s user-friendliness and reliability. Finally, procedural trust depends on whether an effective control system (such as laws and regulations) limits the risks we take in trusting a technology and its supplier.

For AI, one control system that may encourage procedural trust is a set of AI ethics principles. These principles direct AI developers to consider particular ways that their systems may affect others, such as the potential for privacy infringements and the need for transparency in how these systems make decisions. However, empirical evidence is needed to better understand how specific AI ethics principles, combinations of principles, and potential trade-offs or conflicts between them affect people’s trust. Certain principles may hold more significance for individuals based on their unique circumstances or experiences. Therefore, we argue that a more nuanced discussion and empirical evaluation are necessary to prevent trust and trustworthiness from becoming ambiguous, all-encompassing terms for any positive aspect of AI systems.

Trust in AI as Socio-technical Systems

It is also important to recognize that AI technologies are not isolated from the social contexts in which they are developed and used. The people involved in developing and using an AI system, as well as those affected by how that system is used, are stakeholders in it. AI is a socio-technical system that is made up of the AI technology itself, the stakeholders associated with it, and the institutions that influence these stakeholders.

Viewing AI technologies as socio-technical systems underscores the importance of understanding how the dynamics between various stakeholders can shape how those stakeholders perceive and evaluate the trustworthiness of AI systems. It is essential to examine how relationships among stakeholders, their adherence to AI ethics principles, and their perceived trustworthiness can affect people’s trust in AI systems, developers, and providers. This, in turn, can influence the acceptance and adoption of AI technologies.

Recognizing the significance of the broader stakeholder ecosystem emphasizes the importance of AI ethics principles that guide relationships between stakeholders. Principles such as accountability, contestability, and the theme of professional responsibility are just as crucial for fostering trustworthiness as other principles like safety, transparency, and explainability. The trustworthiness of AI systems should encompass features of the AI itself and the trustworthiness of associated stakeholders, including developers and those incorporating AI into their professional practice.

Between the lines

To bridge the gap between theoretical AI ethics principles and their practical application, we need to untangle and deepen our understanding of how these principles (which may inform the development of AI systems) align with individual perceptions of trustworthiness. It is crucial to understand how trustworthiness matters to people and the circumstances under which trustworthiness translates into trust, i.e., if and when trust is granted. AI ethics encompasses multiple dimensions outlined by various principles, and trust is not universal but specific to the context: depending on the situation, different dimensions may contribute to an individual’s evaluation of trust and their behavior when interacting with AI. More nuanced evaluation, discussion, and language use might help avoid oversimplifying trust and trustworthiness as generic terms for the ‘good’ in AI systems. This, together with consideration of the broader socio-technical stakeholder ecosystem, is an essential component in designing trustworthy AI systems that are also trusted.

