Montreal AI Ethics Institute


Can we trust robots?

February 11, 2022

šŸ”¬ Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Mark Coeckelbergh]


Overview: Can robots be trusted? Machines now form an increasingly significant part of our social and political existence, proving to be more than mere tools. To manage this encroachment, a deeper understanding of trust as a concept is the first step.


Introduction

AI is encroaching on more and more environments where we have to trust the technology, such as autonomous vehicles, healthcare and the military. Hence, knowing what trust involves, how to approach such a powerful phenomenon, and how we can trust at all can prove pivotal in managing this encroachment. The management style will differ from culture to culture, but, ultimately, we must know why we trust the technology set before us. The first step is knowing what trust can involve.

Key Insights

What does trust involve?

The account details how trust usually involves a trustor and a trustee, which creates an ethical dimension. That is to say, when trust is established between one person giving the trust and another receiving it, the concept brings some added ethical ā€˜baggage’ along with it. From my readings, Simpson (2012) observes that this is a common theme within trust: once trust is invoked, it pushes all other considerations to one side.

Applied to technology, the intertwining of trust and reliance surfaces. When thinking about trust in terms of reliance, we expect the technology to do what we want it to do. However, the author treats trust as a far more profound phenomenon than a mere deeper form of reliance. With this in mind, the author considers two different approaches to trust.

Two different approaches to trust

The contractarian-individualist approach

Under this view, individuals exist first and then establish trust relationships. Within these relationships, trust is a moral language: when I trust someone, I invoke a normative sense of responsibility in the other to do what I have trusted them to do.

The author then contrasts this with the following:

The social-phenomenological view

Through this approach, trust is already present in the very social fabric of society in the form of a ā€œbasic confidenceā€ (p. 55), which humans are born into and embrace. Given how we are thrust into this confidence, we at times exist in a ā€œmode of ā€˜trust assessmentā€™ā€ (p. 57), reflecting on whether or not to trust another. This raises the question: how do we know when we can trust someone at all?

How can we trust someone at all?

The author details three conditions which we must presuppose about a person in order to trust them:

– The ability to use language, especially given the moral language trust entails.

– Both trustor and trustee must be free.

– Social relations are required to facilitate trust.

These conditions bring up some interesting questions. Can AI be classed as being capable of using moral language? Is AI free? Can AI count as being part of the social tissue? Answers to these questions vary, especially by culture.

Cultural differences

The author acknowledges that the norms proposed rely heavily on the context in which someone grows up. This includes different views on the uses and functionality of AI, the value placed on individual freedom, how integrated the robot becomes in the social fabric, and more. Building on this, it could be said that trusting robots will be practiced more readily in some cultures than in others. For example, my reading includes Meinert's observation that in Uganda the norm for trusting relationships is to start from a position of distrust, a ā€˜guilty until proven innocent’ approach. On the other hand, as detailed by Kitano, Japanese culture is far more open to integrating technology into its kinship structures, making it easier to conceive of trusting AI. Consequently, whether to trust robots ultimately comes down to judging how much trust we deem sufficient.

Between the lines

It goes without saying that AI is more than a mere tool. Hence, there is a real risk of humans getting carried away with how much confidence they place in seemingly supra-human technology. For this reason, the author classes trust in robots that is based on surface qualities, such as appearance, as ā€œquasi-trustā€ (p. 59), a classification I fully agree with. Fully qualified trust requires a deeper examination of the AI at hand and a reflection on the power that trust possesses. Trust carries a certain amount of ethical ā€˜baggage’, which I think can help ensure trust is not so easily given to AI. As I mentioned in my ā€œWelcome to AIā€ talk, AI is a sword to be wielded, but only with proper training. This includes knowing why we trust the technology in front of us.

