
Reliabilism and the Testimony of Robots (Research Summary)

January 18, 2021

Summary contributed by our researcher Dr. Marianna Ganapini (@MariannaBergama), who is also an Assistant Professor of Philosophy at Union College.

[Authors of original paper + link at the bottom]


Overview: In this paper, the author Billy Wheeler asks whether we should treat knowledge gained from robots as testimonial knowledge or as instrument-based knowledge. In other words, should we consider robots able to offer testimony, or are they simply instruments similar to calculators or thermostats? Seeing robots as a source of testimony could shape the epistemic and social relations we have with them. The author's main suggestion is that some robots can be seen as capable of testimony because they share a human-like characteristic: the ability to be a source of epistemic trust.


Should we see sophisticated robots as a human-like source of information, or are they simply instruments similar to watches or smoke detectors? Are robots testifiers, or do they purely offer instrumental knowledge? In this paper, the author Billy Wheeler raises these questions, arguing that the topic is key given the growing role social robots play in our lives. These 'social' robots interact with humans and help with a number of different tasks (as already discussed in MAIEI newsletter no. 33), but little is known about humans' epistemic relation to them.

It is common for philosophical discussions of 'knowledge' to identify knowledge with justified true belief. The basic idea is that we want to distinguish knowledge from lucky guesses: knowledge requires a degree of support or justification that beliefs that merely happen to be true do not have. One way to capture this idea, known as reliabilism, is to say that a belief constitutes knowledge only if it is formed via some reliable method. Common reliable belief-forming processes are perception and reasoning. Absent relevant defeaters, testimony is also seen as a way of gaining knowledge (and thus reliably true beliefs) from other people. Finally, using well-functioning instruments (e.g. calculators, watches) is also a way of gaining knowledge.
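Put schematically, the reliabilist condition sketched above looks roughly as follows (this is our own shorthand, not notation from the paper): for a subject S and a proposition p,

\[ K_S(p) \iff p \;\wedge\; B_S(p) \;\wedge\; \mathrm{Reliable}\big(\mathrm{proc}_S(p)\big) \]

where \(K_S(p)\) means S knows that p, \(B_S(p)\) means S believes that p, and \(\mathrm{proc}_S(p)\) is the process (perception, reasoning, testimony, or an instrument) by which S formed the belief.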

What, then, is the difference between testimony-based and instrument-based knowledge? When we receive information from people, we trust them, and this kind of trust is substantially different from the 'trust' we extend to instruments. Of course, if I deem it well-functioning, I can 'trust' my watch to be reliable and thus a source of knowledge about the time. However, the author points out that testimony is based on social trust and "its purpose is to forge bonds between individuals for the sake of forming epistemic communities". At least according to some views of testimony (called 'non-reductionist'), these interpersonal bonds make it the case that, to form a reliable or justified belief, I don't need any additional positive reason to trust someone else's word. Unless I have reason to doubt what you said or your sincerity, I will believe what you tell me. That also means that if what you said turns out to be false, I can blame you or hold you responsible. Conversely, we always need positive reasons in order to gain knowledge from a watch (that is, we need to assume that it is not defective, that it is reliable, and so on). Likewise, we do not, strictly speaking, blame an instrument for getting things wrong.
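The asymmetry can be glossed in the same schematic style (again our own rendering of the non-reductionist view, not the paper's notation): a testimonial belief is justified by default, absent defeaters, whereas an instrument-based belief is justified only given positive reasons for the instrument's reliability:

\[ J_{\mathrm{testimony}}(p) \iff \neg\,\mathrm{Defeater}(p) \qquad\text{vs.}\qquad J_{\mathrm{instrument}}(p) \iff \mathrm{PositiveReasons}(p) \]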

Time to go back to the main question of the paper: should we grant robots the status of testifiers? As the author points out, to answer this question we might first need to figure out, or reach a consensus on, the moral and agential status of these robots. This seems to be a necessary step for deciding whether we could rationally grant normative and social trust to robots (which is, in turn, a necessary condition for seeing them as testifiers). However, the author also points to another way to tackle the issue: observing how humans actually interact with these robots. If humans come to see robots as capable of establishing relationships with them, while also considering them "honest" or "deceitful" (and there seems to be empirical evidence pointing in that direction), then we might conclude that robots are, or will be, de facto part of the bond-forming process that is key to delivering testimony. In other words, because of how we treat them, it is likely that we will eventually come to see robots as trustworthy and able to share their knowledge via testimony.


Original paper by Billy Wheeler: https://www.pdcnet.org/techne/content/techne_2020_0024_0003_0332_0356

