Montreal AI Ethics Institute


Democratizing AI ethics literacy

Reliabilism and the Testimony of Robots (Research Summary)

January 18, 2021

Summary contributed by our researcher Dr. Marianna Ganapini (@MariannaBergama), who is also an Assistant Professor of Philosophy at Union College.

[Author of the original paper and link at the bottom]


Overview: In this paper, the author Billy Wheeler asks whether the knowledge we gain from robots is testimonial or merely instrument-based. In other words, should we consider robots capable of offering testimony, or are they simply instruments similar to calculators or thermostats? Seeing robots as a source of testimony could shape the epistemic and social relations we have with them. The author’s main suggestion is that some robots can be seen as capable of testimony because they share a human-like characteristic: they can be a source of epistemic trust.


Should we see sophisticated robots as a human-like source of information, or are they simply instruments similar to watches or smoke detectors? Are robots testifiers, or do they offer purely instrumental knowledge? In this paper, the author Billy Wheeler raises these questions, arguing that the topic is crucial given the growing role social robots play in our lives. These ā€˜social’ robots interact with humans and help with a number of different tasks (as already discussed in MAIEI newsletter no. 33), but little is known about humans’ epistemic relation to them.

It is common in philosophical discussions of knowledge to identify knowledge with justified true belief. The basic idea is to distinguish knowledge from lucky guesses: knowledge requires a degree of support or justification that beliefs which merely happen to be true by chance lack. One way to capture this idea is to say that a belief constitutes knowledge only if it is formed via some reliable method. Common reliable belief-forming processes are perception and reasoning. Absent relevant defeaters, testimony is also seen as a way of gaining knowledge (and thus reliably true beliefs) from other people. Finally, using well-functioning instruments (e.g. calculators, watches) is also a way of gaining knowledge.

What, then, is the difference between testimony-based and instrument-based knowledge? When we receive information from people, we trust them, and this kind of trust is substantially different from the ā€˜trust’ we extend to instruments. Of course, if I deem it well-functioning, I can ā€˜trust’ my watch to be reliable and thus a source of knowledge about the time. However, the author points out that testimony rests on social trust, and ā€œits purpose is to forge bonds between individuals for the sake of forming epistemic communitiesā€. At least on some views of testimony (called ā€˜non-reductionist’), these interpersonal bonds mean that, to form a reliable or justified belief, I need no additional positive reason to trust someone else’s word: unless I have reason to doubt what you said or your sincerity, I will believe what you tell me. It also means that if what you said turns out to be false, I can blame you or hold you responsible. Conversely, we always need positive reasons to gain knowledge from a watch (we need to assume that it is not defective, that it is reliable, and so on), and we do not strictly blame an instrument for getting things wrong.

Time to go back to the main question of the paper: should we grant robots the status of testifiers? As the author points out, to answer this question we might first need to figure out, or reach a consensus on, the moral and agential status of these robots. This seems to be a necessary step for deciding whether we could rationally grant robots normative and social trust (which is, in turn, a necessary condition for seeing them as testifiers). However, the author also points to another way to tackle the issue: observing how humans actually interact with these robots. If humans come to see robots as capable of establishing relationships with them, while also regarding them as ā€œhonestā€ or ā€œdeceitfulā€ (and there seems to be empirical evidence pointing in that direction), then we might conclude that robots are de facto (or will be) part of the bond-forming process that is key for delivering testimony. In other words, because of how we treat them, it is likely that we will eventually come to see robots as trustworthy and able to share their knowledge via testimony.


Original paper by Billy Wheeler: https://www.pdcnet.org/techne/content/techne_2020_0024_0003_0332_0356

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
