Top-level summary: This paper by Abeba Birhane and Jelle van Dijk highlights a quintessential dilemma that newcomers to the field of AI ethics often encounter. They stumble upon articles touting the importance of treating robots like humans and granting them human-like rights. What such articles often ignore is the degree to which fully sentient machines are a pipe dream in the near and medium term and, by current estimates from reputable technical researchers in the field, a fantasy even in the long run. So why then the focus on them? The paper digs into where this focus stems from and why it is deeply problematic. The premise is that, given present levels of technology and their impacts on humans today, debating robot rights ignores, or diverts resources and attention away from, concerns about how AI systems disproportionately impact the marginalized, focusing instead on the problems of an imaginary future scenario. Through numerous examples, the authors illustrate how today's machine learning systems rest on a great deal of human input: they are essentially human-machine systems, with a class of workers operating in the shadows to enable the wonders of automated technology. Given their pervasive impacts and their problems with bias and fairness, which entrench existing stereotypes and create further disadvantages for the most vulnerable, these systems need scrutiny and analysis before they become an invisible part of our everyday social fabric. Robots today, even in social contexts where they might appear warm and be cherished by humans, such as care robots, hold no more significance than other objects one relishes, like a nice espresso machine. Attributing agency and autonomy to such systems beyond their capabilities, and thus asking us to think about the rights they deserve, puts the cart well before the horse.
True AI ethics should concern itself with mitigating the real harms that real humans are experiencing now, and meaningfully balancing that work against efforts devoted to potential problems in a distant future.
The case ethicists make for granting rights to robots rests on the notion of biological chauvinism: if robots display the same level of agency and autonomy as humans, denying them rights would not only be unethical but would also set back the cause of groups who were historically denied rights. On this view, branding robots as slaves and implying that they don't deserve rights has fatal flaws: it uses a term, slave, whose connotations have caused significant harm to real people in the past, and it dehumanizes robots. The counter is that dehumanization of robots is not possible, because robots are not human to begin with.
While it may become possible to build a sentient robot in the distant future, and in that case there would be no reason not to grant it rights, until then real, present problems are being ignored in favor of imaginary future ones. The relationship between machines and humans is tightly intertwined, but it is not symmetrical, and hence we must not confound the "being" part of human beings with the characteristics of present technological artifacts.
Technologists assume that because there is a dualism to a human being, the mind and the body, it maps neatly onto a robot: the software is the mind, and the robot's chassis is the physical body. This leads them to believe that a sentient robot in our image can be constructed, and that it is merely a very complex configuration we haven't completely figured out yet. A more representative view is to see robots, at present, as objects that inhabit our physical and social spaces.
Objects in our environment take on meaning based on the purpose they serve for us: a park bench means one thing to a skateboarder and another to a casual park visitor. Similarly, our social interactions are always situated within a larger ecosystem, and that ecosystem needs to be taken into account when thinking about interactions between humans and objects. In other words, things are what they are because of the way they configure our social practices and because technology extends the biological body. Our conception of human beings, then, is that we are, and have always been, fully embedded and enmeshed with our designed surroundings, and that we are critically dependent on this embeddedness for sustaining ourselves.
Because of this deep embedding, instead of seeing the objects around us merely as machines or on the other end as ‘intelligent others’, we must realize that they are very much a part of ourselves because of the important role they play in defining both our physical and social existence.
Some argue that robots take on a greater meaning in social contexts, care robots for instance, and that people might grow attached to them. Yet that attachment is quite similar to the attachment one develops to other artifacts, like a nice espresso machine or a treasured object handed down for generations. They have meaning to the person, but that doesn't mean the robot, as present technology, needs to be granted rights.
A comparison to slaves and other disenfranchised groups is often made when robots are denied rights for being seen as "less" than others, but one mustn't forget that those groups were mistreated precisely because they were perceived as instruments and means to an end. Comparing these groups to robots dehumanizes actual human beings. It may be called anthropocentric to deny rights to robots, but that is exactly what needs to be done: to center the welfare of humans rather than inanimate machines.
An instructive analogue that drives home the point is the Milgram obedience experiment, in which subjects who believed they had inflicted harm on the actors, who were part of the experiment, remained traumatized even after being told that the screams they heard were staged. From an outside perspective, we may say that no harm was done because they were just actors, but to the subject of the experiment the experience was real, not an illusion, and it had real consequences. In our discussion, the robot is the actor: if we treat it poorly, that reflects on our interactions with artifacts in general rather than on whether robots are "deserving" of rights. Taking care of artifacts can be thought of as a way of rendering respect to their human creators and the effort they expended to make them.
Discussion of robot rights for an imaginary future that may or may not arrive takes away focus, and perhaps resources, from the harms being done to real humans today by AI systems built with bias and fairness issues. Invasion of privacy and bias against the disadvantaged are just a few of the already existing harms being leveled on humans as intelligent systems percolate into the everyday fabric of social and economic life.
From a for-profit perspective, such systems are positioned and deployed with the aim of boosting the bottom line, without necessarily considering the harms that emerge as a consequence. In pro-social contexts, they are treated as a quick fix for inherently messy and complex problems.
The most profound technologies are those that disappear into the background and shape our existence in subtle ways. We already see this with intelligent systems pervading many aspects of our lives. The threat, then, comes less from a system like Sophia, a rudimentary chatbot hidden behind a facade of flashy machinery, than from something like a Roomba, which touches our daily lives far more directly and could be used as a tool to surveil our homes. Taking ethical concerns seriously means considering the impact of weaving automated technology into daily life and how the marginalized are disproportionately harmed.
In the currently dominant paradigm of supervised machine learning, systems are not truly autonomous: a huge amount of human input goes into enabling them to function, so what we actually have are human-machine systems rather than purely machinic ones. The more impressive a system seems, the more likely it is that a ton of human labor went into making it possible. Sometimes systems that started off with a different purpose, such as reCAPTCHA's spam prevention, are refitted to train ML systems. Building AI systems today doesn't just require highly skilled labor; it must be supplemented by the mundane, poorly compensated work of labeling data, work that grows harder as, for example, image recognition systems become more powerful and demand the labeling of ever more complex images. This places the humans doing this low-skilled work squarely in the category of the dehumanized: they are used as a means to an end without adequate respect, compensation, or dignity.
An illustrative example of conflict between robots and human welfare arose when a wheelchair user found herself unable to access the sidewalk because it was blocked by a robot. As she noted, without designing for the needs of humans, especially those with special needs, we will be forced into debilitating compromises in our shared physical and social spaces. Ultimately, realizing the goals of AI ethics requires repositioning our focus on humans and their welfare, especially when conflicts arise between the "needs" of automated systems and those of humans.
Original paper by Abeba Birhane and Jelle van Dijk: https://arxiv.org/abs/2001.05046