Research summary by Rosalie Waelen, a Ph.D. candidate in the Ethics of AI at the University of Twente in the Netherlands. Her research is part of the EU Horizon 2020 ITN project PROTECT under the Marie Skłodowska-Curie grant agreement No 813497.
[Original paper by Rosalie Waelen]
Overview: Facial recognition technology does not always live up to its promises; numerous examples show that it often fails to recognize people's identity or characteristics. In this article, I argue that such misrecognition by FRT can harm people's self-respect and self-esteem.
Introduction
What happens when facial recognition technology (hereafter FRT) fails to adequately recognize our identity or characteristics? We are misidentified and misunderstood, which can be inconvenient and sometimes even lead to discriminatory treatment. In recent years we have become painfully aware of this through numerous examples of discrimination by FRT. But if we follow the work of philosophers Axel Honneth and Charles Taylor on the topic of recognition, we find that misrecognition by FRT may have another problematic consequence: it can harm our self-development and sense of self-worth.
Key Insights
The philosophy of recognition
In philosophy, the concept of recognition refers to the social acknowledgement of certain identities. Honneth and Taylor, alongside other philosophers, argue that such social recognition can be obtained on three different levels: 1) we can be recognized in the private sphere, by having people close to us who care about our needs; 2) we can be recognized in the legal sphere, by having the same rights as our fellow human beings; and 3) we can be recognized on a societal level, by receiving acknowledgement for our societal roles and contributions.
A lack of recognition harms people's sense of self-worth. According to Honneth and Taylor, we need social recognition in order to develop self-confidence, self-respect and self-esteem. Moreover, they argue that a just society is one in which everyone receives due recognition.
Facial recognition's failures
FRT can fail to recognize us in at least three distinct ways. First of all, FRT systems aimed at identifying persons (e.g. as "Rosalie Waelen" or "client number 123") can misidentify a person. This can have serious consequences, such as innocent people being arrested by the police.
Secondly, FRT aimed at categorizing persons (e.g. as "female", "White", "worried") can misrecognize them by attributing the wrong characteristics to a person. Infamous examples of this form of misrecognition are the cases in which images of Black people were mistakenly categorized as "gorillas" or "primates".
A third way in which FRT can misrecognize us is by categorizing or profiling us in ways that do not resonate with our own sense of who we are: our "subjective identity". For example, when an FRT system is only programmed to categorize people's gender as "male" or "female", it cannot do justice to those who identify as non-binary. The sketch below illustrates how these failure modes can arise.
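To make the difference between these failure modes concrete, here is a minimal Python sketch of the two modes of FRT discussed above: identification (matching a face to a specific identity) and categorization (assigning attribute labels). Everything in it is an illustrative assumption, including the identities, the embeddings, the similarity threshold, and the hard-coded label set; it is a toy model, not the method of any real FRT system or of the original paper.

```python
# A minimal, hypothetical sketch of the two FRT modes discussed above.
# All identities, embeddings, thresholds, and labels are invented for
# illustration; nothing here is taken from a real system or from the paper.
from __future__ import annotations

import numpy as np

rng = np.random.default_rng(0)

# --- Mode 1: identification (face -> specific identity) ---
# A toy "gallery" of enrolled faces, with random vectors standing in for
# the face embeddings a real model would produce.
gallery = {
    "client_123": rng.normal(size=8),
    "client_456": rng.normal(size=8),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, threshold: float = 0.7) -> str | None:
    """Return the closest enrolled identity if its similarity clears the threshold.

    Misidentification (the first failure mode) occurs when the *wrong*
    identity happens to be the closest match above the threshold.
    """
    best_id, best_sim = max(
        ((name, cosine(probe, emb)) for name, emb in gallery.items()),
        key=lambda pair: pair[1],
    )
    return best_id if best_sim >= threshold else None

# --- Mode 2: categorization (face -> attribute labels) ---
# The label set is a design decision hard-coded into the system: anyone whose
# identity falls outside it cannot be represented correctly, no matter how
# accurate the underlying model is (the third failure mode).
GENDER_LABELS = ("male", "female")

def categorize_gender(probe: np.ndarray) -> str:
    score = float(probe.sum())  # stand-in for a real classifier's output
    return GENDER_LABELS[0] if score >= 0 else GENDER_LABELS[1]

probe = rng.normal(size=8)
print(identify(probe))           # may confidently return the wrong identity
print(categorize_gender(probe))  # can never output anything but two labels
```

The point of the second half of the sketch is that a binary label set makes the third kind of misrecognition structural: it is built into the design, not a matter of model accuracy.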
Following the philosophy of recognition, I understand these failures of FRT as misrecognition in the philosophical, normative sense. This is especially so because FRT has been found to systematically misrecognize specific groups: darker-skinned persons and women, as well as those who do not fit normalized labels (take the example of non-binary persons). Hence, by constantly being misidentified and miscategorized, these groups receive the message that they are not equally important members of society. In other words, they are not recognized as having equal rights or equally valuable roles in society. As a result, their development of self-respect and self-esteem may be compromised.
Can we be misrecognized by a technology?
Recognition is usually discussed as a social relation between individual persons or groups. But in this article, I suggest that technology can recognize and misrecognize us in the same ways as humans can. Of course, a technology cannot intentionally recognize individuals, or receive social recognition itself. But we can experience our own relation to technology as being similar to our relation to other social actors. As a result, we can suffer psychological harm when a technology fails to recognize our needs, rights or social contributions.
Between the lines
This article makes use of social philosophy (namely, the philosophy of recognition) to uncover ethical issues that are usually left out of discussions about the ethics of FRT or AI in general. This approach shows that social and political analyses and critiques can inform and improve the ethical analysis of new technologies. FRT's ethical problems cannot be solved by improving the technology alone; they might also require a change in societal norms and practices.