Montreal AI Ethics Institute

Can we trust robots?

February 11, 2022

šŸ”¬ Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Mark Coeckelbergh]


Overview: Can robots be trusted? Machines are becoming an ever more significant part of our social and political existence, proving to be more than mere tools. To manage this encroachment, a deeper understanding of trust as a concept is the first step.


Introduction

AI is encroaching on more and more environments in which we have to trust the technology, such as autonomous vehicles, healthcare and the military. Hence, knowing what trust involves, how we should approach such a powerful phenomenon, and how we can trust at all can prove pivotal in managing this encroachment. The management style will differ from culture to culture, but, ultimately, we must know why we trust the technology set before us. The first step is knowing what trust can involve.

Key Insights

What does trust involve?

The account details how trust usually involves a trustor and a trustee, which creates an ethical dimension. That is to say, when trust is established between one person giving the trust and another receiving it, the relationship brings along some added ethical ā€˜baggage’. From my readings, Simpson (2012) observes that this is a common theme within trust: once trust is invoked, it relegates all other considerations to one side.

Applied to technology, the intertwining of trust and reliance surfaces. When we think about trust in terms of reliance, we expect the technology to do what we want it to do. However, the author treats trust as a far more profound phenomenon than mere reliance. In this light, he considers two different approaches to trust.

Two different approaches to trust

The contractarian-individualist approach

Under this view, individuals exist first and then establish trust relationships. Within these relationships, trust functions as a moral language: when I trust someone, I place a normative responsibility on them to do what I have trusted them to do.

The author then contrasts this with the following:

The social-phenomenological view

Through this approach, trust is already present in the very social fabric of society in the form of a ā€œbasic confidenceā€ (p. 55), into which humans are born and which they embrace. Given how humans are thrust into this confidence, we at times exist in a ā€œmode of ā€˜trust assessmentā€™ā€ (p. 57), reflecting on whether or not to trust another. This raises the question: how do we know when we can trust someone at all?

How can we trust someone at all?

The author details three conditions which we must presuppose about a person in order to trust them:

– The ability to use language, especially given the moral language trust entails.

– Both trustor and trustee must be free.

– Social relations are required to facilitate trust.

These conditions raise some interesting questions. Can AI be classed as capable of using moral language? Is AI free? Can AI count as part of the social fabric? Answers to these questions vary, especially by culture.

Cultural differences

The author acknowledges that the norms proposed rely heavily on the context in which someone grows up. This includes different views on the uses and functionality of AI, the value placed on individual freedom, how integrated the robot becomes in the social fabric, and more.

Building on this, it could be said that trusting robots will be practised more readily in some cultures than in others. For example, my reading has included how Meinert details that within Uganda, the norm for trusting relationships is to start from a position of distrust, a ā€˜guilty until proven innocent’ approach. On the other hand, as detailed by Kitano, Japanese culture is far more open to integrating technology into its kinship structures, making it easier to conceive of trusting AI. Consequently, pondering whether to trust robots ultimately boils down to judging how much trust we deem sufficient.

Between the lines

It goes without saying that AI is more than a mere tool. Hence, there is certainly a risk of humans getting carried away with how much confidence they place in seemingly supra-human technology. For this reason, the author classes trust in robots that is based on qualities such as appearance as ā€œquasi-trustā€ (p. 59), a classification I fully agree with. Fully qualified trust requires a deeper examination of the AI at hand and a reflection on the power that trust possesses. Trust carries a certain amount of ethical ā€˜baggage’, which I think can help us ensure trust is not given to AI too easily. As I mentioned in my ā€œWelcome to AIā€ talk, AI is a sword to be wielded, but only with proper training. This includes knowing why we trust the technology in front of us.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
