Summary contributed by our researcher Dr. Marianna Ganapini (@MariannaBergama), who is also an Assistant Professor of Philosophy at Union College.
[Authors of original paper + link at the bottom]
Overview: In this paper, the author Billy Wheeler asks whether the knowledge gained from robots should be treated as testimonial or as instrument-based knowledge. In other words, should we consider robots capable of offering testimony, or are they simply instruments similar to calculators or thermostats? Seeing robots as a source of testimony could shape the epistemic and social relations we have with them. The author's main suggestion is that some robots can be seen as capable of testimony because they share a key human-like characteristic: the ability to be a source of epistemic trust.
Should we see sophisticated robots as a human-like source of information, or are they simply instruments similar to watches or smoke detectors? Are robots testifiers, or do they offer purely instrumental knowledge? In this paper, the author Billy Wheeler raises these questions, arguing that this topic is key in view of the growing role social robots play in our lives. These ‘social’ robots interact with humans, helping with a number of different tasks (as already discussed in MAIEI newsletter n.33), but little is known about humans’ epistemic relationship with them.
It is common in philosophical discussions of ‘knowledge’ to identify knowledge with justified true belief. The basic idea is that we want to distinguish knowledge from lucky guesses: knowledge requires a degree of support or justification that beliefs which merely happen to be true by chance do not have. One way to capture this idea is to say that a belief constitutes knowledge only if it is formed via some reliable method. Common reliable belief-forming processes are perception and reasoning. Absent relevant defeaters, testimony is also seen as a way of gaining knowledge (and thus reliably true beliefs) from other people. Finally, using well-functioning instruments (e.g. calculators, watches) is also a way of gaining knowledge.
What, then, is the difference between testimony-based and instrument-based knowledge? When we receive information from people, we trust them, and this kind of trust is substantially different from the ‘trust’ we extend to instruments. Of course, if I deem it well-functioning, I can ‘trust’ my watch to be reliable and thus a source of knowledge about the time. However, the author points out that testimony is based on social trust and “its purpose is to forge bonds between individuals for the sake of forming epistemic communities”. At least according to some views of testimony (called ‘non-reductionist’), these interpersonal bonds make it the case that, to form a reliable or justified belief, I don’t need any additional positive reason to trust someone else’s word. Unless I have reason to doubt what you said or your sincerity, I will believe what you tell me. That also means that if what you said turns out to be false, I can blame you or hold you responsible. Conversely, we always need positive reasons in order to gain knowledge from a watch (that is, we need to assume that it is not defective, that it is reliable, and so on). Similarly, we do not strictly blame an instrument for getting things wrong.
Time to return to the main question of the paper: should we grant robots the status of testifiers? As the author points out, to answer this question we might first need to figure out, or reach a consensus on, the moral and agential status of these robots. This seems to be a necessary step for deciding whether we could rationally grant normative and social trust to robots (which is a necessary condition for seeing them as testifiers). However, the author also points to another way to tackle this issue: observing how humans interact with these robots. If humans come to see robots as capable of establishing relationships with them while also considering them “honest” or “deceitful” (and there seems to be empirical evidence pointing in that direction), then we might conclude that robots are (or will be) de facto part of the bond-forming process that is key to delivering testimony. In other words, because of how we treat them, it is likely that we will eventually come to see robots as trustworthy and able to share their knowledge via testimony.
Original paper by Billy Wheeler: https://www.pdcnet.org/techne/content/techne_2020_0024_0003_0332_0356