✍️ Column by Rosalie Waelen, a philosopher and AI ethicist pursuing her Ph.D. at the University of Twente in the Netherlands.
Overview: Computer vision technology is inescapably connected to surveillance. As a surveillance tool, computer vision can help governments and companies exercise social control. Computer vision’s potential for surveillance and social control raises a number of concerns – this blog discusses why.
Computer vision is watching you
Computer vision technology can serve many purposes – in healthcare, research, or business intelligence. But one of computer vision’s most significant capabilities is the automation of surveillance. While surveillance can take many forms, the camera is the most emblematic representative of the watching eye. Computer vision can automate camera surveillance: it removes the need for humans to monitor CCTV footage and extends the scope of knowledge that can be retrieved from videos and images. In other words, computer vision creates ‘unblinking eyes’ (Macnish, 2012) that see more than a human eye can.
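To make this concrete, here is a minimal sketch of what automated monitoring can look like in practice. It assumes OpenCV’s stock HOG pedestrian detector and a hypothetical video file named cctv_sample.mp4; the point is only that a script, rather than a human operator, decides which frames deserve attention.

```python
import cv2  # OpenCV, assumed to be installed (pip install opencv-python)

# A hypothetical recording; a live CCTV feed would be an RTSP URL instead.
cap = cv2.VideoCapture("cctv_sample.mp4")

# OpenCV ships a pre-trained HOG + linear SVM pedestrian detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame_index = 0
while True:
    ok, frame = cap.read()
    if not ok:  # end of the video stream
        break
    # Look for people in the frame; returns bounding boxes and confidence scores.
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(boxes) > 0:
        # Instead of a human operator watching the screen, the script flags
        # frames of interest for later review or further analysis.
        print(f"frame {frame_index}: {len(boxes)} person(s) detected")
    frame_index += 1

cap.release()
```

Real-world systems go far beyond this sketch, but even this toy example illustrates the ‘unblinking eye’: the loop never tires, never looks away, and can be pointed at as many camera feeds as there is computing power to process.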
As a result of automated video surveillance, computer vision could, in principle, realize an infallible surveillance system. So far, however, computer vision has proven to be very much fallible. Facial recognition in particular, the most widely discussed computer vision application, has received a lot of criticism for being prone to error due to algorithmic bias. The inaccuracy of facial recognition is a major concern, as it is likely to lead to discrimination.
The question this blog addresses is: What is it about the prospect of an infallible surveillance system, powered and automated by artificial intelligence, that scares so many of us?
Why worry about surveillance?
State surveillance involves the use of surveillance tools to enforce the law. A simple example is the use of speed cameras to make citizens comply with traffic rules. Surveillance tools not only help to punish violations of the law (e.g., in the form of a fine), but also help to prevent them (for instance, by predicting the likelihood of a crime). We can distinguish two concerns regarding state surveillance: worries about overenforcement and worries about abuse of power.
An infallible computer vision system makes for a hyper-efficient surveillance tool for law enforcement. As the use of such surveillance tools increases, eventually, no infringement of the law will go unnoticed. As a result, even seemingly innocent acts, such as jaywalking across an empty street, will have repercussions. This possibility is often referred to as ‘overenforcement’ – which is not to be mistaken for the abuse of power.
While our intuition might tell us that overenforcement is problematic, it is difficult to explain why. After all, in liberal democracies, we all implicitly “agree” to the rule of law through a social contract. The question that automated surveillance raises is thus the following: even if we collectively agree that all citizens should abide by a set of laws that ensures a safe and free society, do we still want citizens to retain some degree of freedom to break those laws and, consequently, to harm the safety and freedom of others? This question is about paternalism and proportionality. As surveillance practices become more efficient, societies must reconsider where and how to draw the line between freedom and state control.
A second concern about state surveillance is the abuse of power. Surveillance systems help governments acquire knowledge about citizens, and knowledge, as we all know, implies power. Hence, by supporting surveillance practices, computer vision can support the exercise of social control by authoritarian regimes. It is especially for this reason that the use of computer vision for surveillance in China has been widely criticized in Western media. But even in democratic societies, where governments are expected not to abuse the power of surveillance tools, AI-powered surveillance still seems to cause discomfort. Could the mere potential for surveillance to be abused for social control pose a democratic problem? And how should democracies handle this problem?
Public-private partnerships
State surveillance is, of course, not the only form of surveillance. Surveillance can also refer to store owners trying to spot shoplifters, employers making sure their employees do not waste company time, or even dog owners keeping an eye on their furry friend at home. Perhaps just as notorious as state surveillance is surveillance by big tech companies.
Surveillance capitalism is a term coined by Shoshana Zuboff to describe a now-widespread business model built on the datafication and commodification of people’s behavior. Put differently: big tech companies have a financial interest in surveilling people’s use of smart devices and internet platforms. The more these companies know about users, the more power they have to control their consumer choices, lifestyle, and political views.
Governments worldwide depend heavily on private companies for AI-powered surveillance solutions. And while several American companies (such as IBM) paused sales of facial recognition products to police departments once it became common knowledge that these products can support discriminatory practices, Chinese and Russian computer vision companies (e.g., SenseTime and NTechLab) continue to grow their markets. Because of these public-private partnerships, we should worry not only about the power of states but also about the social power of tech companies.
Summary
In the previous blog of this series, I discussed the negative impact that computer vision applications can have on individuals’ privacy, identity formation, and agency. In the context of surveillance, the most frequently heard concerns involve violations of the right to privacy and other human rights. In this blog, on the other hand, I focused on the wider societal concerns that computer vision raises. More precisely, I discussed how computer vision can improve surveillance practices and how such surveillance practices grant social power to governments and big tech companies.