✍️ Column by Rosalie Waelen, a philosopher and applied ethicist who recently completed her Ph.D. at the University of Twente (The Netherlands) and is now working as a Senior Researcher at the Sustainable AI Lab of the University of Bonn (Germany).
Overview: What should we do about computer vision’s potential ethical and societal implications? This column discusses whether computer vision requires special treatment in AI governance, how the EU’s AI Act tackles computer vision’s potential implications, why AI ethics is still needed after the AI Act, and which implications of computer vision deserve more attention in public and political debate.
Is computer vision special?
Computer vision is a subfield of AI. Therefore, the ethics of computer vision – the topic of this column – can be seen as a subfield of AI ethics. However, one might be skeptical of the need for a separate discussion of the ethics of computer vision. If it is a subfield of AI, aren’t all of computer vision’s implications already covered by general AI ethics discussions and initiatives?
Computer vision is unique in its focus on visual data, such as images and videos. Analyzing visual data requires a different, and arguably greater, degree of interpretation than analyzing textual data. Such interpretation comes with human biases and embeds particular worldviews. Also special about computer vision is the fact that it often turns things into data that people did not previously conceive of as ‘data’ or ‘information’ – one’s appearance, for instance.
Given these special features, the question arises: does computer vision need its own regulatory framework? On the one hand, regulating computer vision comes with some unique challenges. People’s appearance and facial features are highly accessible data sources, which makes data protection and privacy regulation very different from, say, the protection of confidential medical records. On the other hand, many problematic implications of computer vision are similar to the ethical issues associated with other forms of AI and digital technologies. From the perspective of the challenges to be addressed, separate legislation therefore seems redundant. Furthermore, in practice computer vision is often combined with other forms of AI or data analytics, and visual data is often combined with other data types. This makes it difficult to separate computer vision from AI in general, and arguably pointless to develop distinct rules for computer vision.
What the EU’s AI Act means for computer vision
In June 2023, the European Parliament passed the AI Act – a regulatory framework for AI that the European Commission first proposed in April 2021. The AI Act takes a risk-based approach to AI governance: AI systems are categorized under different risk levels, and certain measures are prescribed depending on the risk level. Systems posing an unacceptable risk – systems that threaten people – are banned completely in the EU. High-risk systems, which can negatively affect safety or fundamental human rights, must be assessed before being put on the market and throughout their life cycle. Limited-risk systems need to meet minimal transparency requirements.
Although the AI Act does not explicitly or exclusively legislate computer vision, some parts of the regulation are nevertheless particularly relevant to computer vision tools. The AI Act distinguishes three categories of unacceptable-risk systems: cognitive behavioral manipulation, social scoring, and real-time and remote biometric identification. The latter two are particularly relevant to computer vision. Biometric identification, of course, refers above all to facial recognition, and developments in China have shown that computer vision is a highly effective technology for social scoring, too. Qualifying as high risk under the AI Act is the use of computer vision in products that fall under special safety requirements – such as self-driving cars, toys, or medical devices. Also qualifying as high risk is the use of computer vision for identification purposes, workplace surveillance, border control, and law enforcement. Limited-risk systems include deepfakes and image-generation tools.
AI ethics after the AI Act
Now that the AI Act has been adopted, one could question how urgent and necessary AI ethics debates remain in Europe. With legislation in place that tackles many of the pressing concerns regarding computer vision, why should we continue to discuss the ethics of computer vision? Indeed, to the extent that ethical assessments are a first step toward developing legislation, one could argue that there is less need for AI ethics debates within the EU now that the European Parliament has passed the AI Act. However, there are still reasons to believe that AI ethics debates, including debates about computer vision ethics, remain important.
First of all, although ethics is intimately related to law and policymaking, AI ethics cannot be equated with AI governance. Ethical assessments of AI are valuable not only as input to legislation but also because they provide us with more knowledge and understanding of how modern technologies affect our lives and societies. For instance, knowledge and understanding of AI’s impact on identity and self-development can strengthen people’s autonomy and self-ownership.
A second argument for AI ethics’ ongoing importance and relevance is that not all ethical issues can or should be addressed by legislation. The AI Act puts in place restrictions and procedures for the development and use of AI, but not all ethical or societal issues can be addressed in such ways. For example, the extent to which biased facial recognition systems harm the self-esteem and self-respect of those misrecognized by the system will differ among individuals. Because of these individual differences, developing general rules to address the issue is challenging. Moreover, while the potential psychological harm done by facial recognition systems is an issue that should be taken seriously, citizens might consider it too restrictive (or paternalistic) to limit the implementation of facial recognition systems for this reason.
Next steps
Discussing the ethics of computer vision remains important, even with dedicated regulation in place, because AI ethics can create awareness and understanding of AI’s implications among users and because not all ethical issues can or should be addressed by legislation. In addition to these arguments for the ongoing importance of AI ethics debates, there is a third reason why the ethics of computer vision still needs our attention: Policymakers could and should do more.
I argued above that not all ethical and societal implications can be addressed by legislation, and that not all issues should be addressed in this way, because AI governance would otherwise become too restrictive. Nevertheless, these issues deserve more attention from policymakers, even if legislation is not the right instrument.
In this series of columns on the ethics of computer vision, I discussed the impact of computer vision on human autonomy, its use for surveillance and social control, and its environmental costs. Both the impact on human autonomy and the use of computer vision for social control are addressed, at least in part, by the AI Act’s protection of fundamental human rights and its ban on using AI for social scoring. The AI Act also restricts the use of computer vision for surveillance purposes. However, the AI Act restricts and supervises only the development and use of specific AI applications. Similarly, many AI ethics debates have focused on assessing specific AI systems. What is needed, both in AI ethics and in AI governance, is a broader perspective – one that critiques and addresses systemic problems. Systemic problems related to AI include the environmental cost of AI and the social and economic power of the AI industry. To really improve AI, from an ethics standpoint, policymakers need to put these systemic issues (higher) on their agendas.