✍️ Column by Rosalie Waelen, a philosopher and AI ethicist, doing her Ph.D. at the University of Twente in the Netherlands.
Overview: The increasing development and use of computer vision applications give rise to various ethical and societal issues. In this blog, I discuss computer vision’s impact on privacy, identity, and human agency, as a part of my column on the ethics of computer vision.
Not everyone is familiar with the term ‘computer vision.’ However, it is safe to assume that, at least in the Global North, everybody has encountered or used a computer vision application at some point. Think about Snapchat filters, automated passport controls, or cashier-free stores (such as Amazon Go stores). All of these technologies make use of computer vision. As with any form of AI, it is important to carefully reflect on how this technology might impact our lives and societies.
Concerns raised in the existing literature on the ethics of computer vision (see the previous blog) are, for example, that computer vision applications violate people’s privacy, that errors in these systems can lead to discrimination, and that the systems lack transparency. In this blog, I discuss these and other concerns related to computer vision in more detail.
Privacy is both a legal and a moral right. This means that a technology can be considered privacy-invasive even if it complies with established privacy and data protection legislation. Privacy is probably the most frequently raised concern in relation to smart camera applications. In fact, the concept of privacy has a history intimately tied to camera technologies. Warren and Brandeis’ famous 1890 paper – which established the idea of a right to privacy – was in part a response to the emergence of images of persons in the press.
We can distinguish three different types of privacy-related concerns in the context of computer vision applications. First, computer vision applications challenge people’s ability to control their personal information. While this problem applies to many modern technologies, computer vision makes informational control especially difficult: it is one’s very appearance that is turned into data. As a result, it can be even harder for laypeople to grasp exactly what information is at stake.
Another privacy-related problem has to do with anonymity. Computer vision applications – facial recognition in particular – can identify people in public spaces, thereby undermining their anonymity. Anonymity is not only valuable to those who have something to hide. It creates a safe space in which people can be who they want to be and develop themselves. In that sense, anonymity promotes freedom and autonomy.
Finally, a lack of privacy often goes hand-in-hand with a sense of discomfort. Constantly being “watched” by smart camera applications can make people uncomfortable. This discomfort can, in turn, lead to so-called chilling effects: people refrain from certain behavior, even lawful behavior, out of fear of being observed. Such behavior change becomes especially concerning when it means that people refrain from exercising their rights – such as the right to protest.
The concept of privacy is closely related to the concept of identity. Having informational control allows one to control how their identity is perceived, and having anonymity gives one the space to be and become their authentic self. However, it is important to think about computer vision’s impact on identity independently from the concept of privacy.
Computer vision applications, particularly facial recognition systems, can take away people’s ability to communicate their identity to the outside world. A term for this ability is testimonial agency. For example, if a system categorizes me as ‘female’ and ‘European,’ I do not get a chance to contest these labels or to emphasize elements of my identity that I find more important.
As Crawford and Paglen (2021) also point out, the labels that computer vision systems assign to human persons are not neutral but political. Labels express a certain vision of identity, and biased labeling reflects the biases and forms of discrimination deeply embedded in our societies. Furthermore, misrecognizing people’s unique identity through irrelevant or false labels can harm their development of self-respect and self-esteem (as I have argued elsewhere).
Computer vision applications’ increasing presence and use affect human agency in various ways. As mentioned, labels given to people based on their appearance can affect their ability to shape how others perceive them.
Computer vision can also impact people’s ability to know and understand their environment, both positively and negatively. On the one hand, computer vision systems are very complex. Consequently, it can be difficult for laypeople to understand how these systems reach conclusions and what information they can retrieve based on a person’s appearance. This is why there is so much talk about the need for transparency in AI. On the other hand, computer vision can promote people’s understanding of their environment. Think, for instance, of Google Lens – this application can help you translate text written in a foreign language or identify the insects or plants you find in your garden.
Finally, computer vision can harm an ability that philosophers like to call moral autonomy. Moral autonomy implies acting out of moral duty rather than merely in accordance with moral duty. In other words, it means doing the right thing not because you are forced to, but because you know it is right. Computer vision can impact this ability because it is a uniquely efficient tool for law enforcement. When technology surveils citizens’ every move and enforces compliance, people may behave well only because they are being watched, rather than developing into virtuous citizens who act morally because they know it is right.
Computer vision applications can seriously impact the privacy, identity formation, and human agency of individuals and groups. For example, computer vision can change how we see ourselves and communicate our identity, fundamentally alter our sense of privacy, or diminish our ability to follow our own (moral) judgment. What these issues have in common is that each matters because it bears on people’s autonomy. However, these issues do not cover all ethically relevant ways in which computer vision can impact our lives and societies. In the next blogs, I will zoom in on computer vision’s potential for social control and on the sustainability of computer vision.