
Can we trust robots?

February 11, 2022

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Mark Coeckelbergh]


Overview: Can robots be trusted? Machines form an increasingly significant part of our social and political existence, proving to be more than mere tools. To manage this encroachment, the first step is a deeper understanding of trust as a concept.


Introduction

AI is encroaching on more and more environments in which we have to trust the technology, such as autonomous vehicles, healthcare and the military. Hence, knowing what trust involves, how to approach such a powerful phenomenon, and how we can come to trust at all can prove pivotal in managing this encroachment. The management style will differ from culture to culture, but ultimately we must know why we trust the technology set before us. To do so, the first step is knowing what trust can involve.

Key Insights

What does trust involve?

The account details how trust usually involves a trustor and a trustee, which creates an ethical dimension. That is to say, when trust is established between one person giving the trust and another receiving it, the concept brings along some added ethical ‘baggage’. From my readings, Simpson (2012) observes that this is a common theme within trust: once invoked, trust pushes all other considerations to one side.

Applied to technology, the intertwining of trust and reliance surfaces. When we think about trust in terms of reliance, we expect the technology to do what we want it to do. However, the author treats trust as a phenomenon far more profound than a mere deepened form of reliance, and so considers two different approaches to it.

Two different approaches to trust

The contractarian-individualist approach

Under this view, individuals exist first and then establish trust relationships. Within these relationships, trust functions as a moral language: when I trust someone, I place a normative responsibility on them to do what I have trusted them to do.

The author then contrasts this with the following:

The social-phenomenological view

Through this approach, trust is already present in the very social fabric of society in the form of a “basic confidence” (p. 55), which humans are born into and embrace. Because we are thrust into this confidence, we at times enter a “mode of ‘trust assessment’” (p. 57), reflecting on whether or not to trust another. In this way, how do we know when we can trust someone at all?

How can we trust someone at all?

The author details three conditions that we must presuppose about a person in order to trust them:

• The ability to use language, especially given the moral language that trust entails.

• Freedom, on the part of both the trustor and the trustee.

• Social relations that can facilitate trust.

These conditions raise some interesting questions. Can AI be classed as capable of using moral language? Is AI free? Can AI count as part of the social fabric? Answers to these questions vary, especially across cultures.

Cultural differences

The author acknowledges that the proposed norms rely heavily on the context in which someone grows up. This includes different views on the uses and functionality of AI, the value placed on individual freedom, how integrated the robot becomes in the social fabric, and more. Building on this, trusting robots will be more readily practiced in some cultures than in others. For example, my reading has included Meinert’s account of how, in Uganda, the norm for trusting relationships is to start from a position of distrust, a ‘guilty until proven innocent’ approach. On the other hand, as detailed by Kitano, Japanese culture is far more open to integrating technology into its kinship structures, making it easier to conceive of trusting AI. Consequently, pondering whether to trust robots ultimately boils down to judging how much trust we deem sufficient.

Between the lines

It goes without saying that AI is more than a mere tool. Hence, there is certainly a risk of humans getting carried away with how much confidence they place in seemingly supra-human technology. For this reason, the author classes trust in robots that rests on qualities such as appearance as “quasi-trust” (p. 59), a classification I fully agree with. Fully qualified trust, in this way, requires a deeper examination of the AI at hand and a reflection on the power that trust possesses. Trust carries a certain amount of ethical ‘baggage’, which I think can help us ensure trust is not so easily given to AI. As I mentioned in my “Welcome to AI” talk, AI is a sword to be wielded, but only with proper training. This includes knowing why we trust the technology in front of us.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
