Montreal AI Ethics Institute
Computer vision, surveillance, and social control

July 26, 2023

✍️ Column by Rosalie Waelen, a philosopher and AI ethicist pursuing her Ph.D. at the University of Twente in the Netherlands.


Overview: Computer vision technology is inescapably connected to surveillance. As a surveillance tool, computer vision can help governments and companies exercise social control. This potential for surveillance and social control raises serious concerns – this blog discusses why.


Computer vision is watching you 

Computer vision technology can serve many purposes – healthcare, research, or business intelligence. But one of its most significant capabilities is the automation of surveillance practices. While surveillance can take many forms, the camera is the most emblematic representative of the watching eye. Computer vision can automate camera surveillance: it replaces the need for humans to monitor CCTV footage and extends the scope of knowledge that can be retrieved from videos and images. In other words, computer vision creates ‘unblinking eyes’ (Macnish, 2012) that see more than a human eye can.

Through automated video surveillance, computer vision could, in principle, realize an infallible surveillance system. So far, however, computer vision has proven to be very much fallible. Facial recognition in particular, the most widely discussed computer vision application, has received a lot of criticism for being prone to error due to algorithmic biases. The inaccuracy of facial recognition is a major concern, as it is likely to lead to discrimination.

The question this blog addresses is: what is it about the prospect of an infallible surveillance system, powered and automated by artificial intelligence, that scares many of us?

Why worry about surveillance? 

State surveillance implies the use of surveillance tools to enforce the law. A simple example of state surveillance is using speed cameras to make citizens comply with traffic rules. Surveillance tools not only help to punish law violations (e.g., in the form of a fine), but they also help to prevent them (for instance, by predicting the likelihood of a crime). We can distinguish two concerns regarding state surveillance: worries about overenforcement and worries about abuse of power.

An infallible computer vision system makes for a hyper-efficient surveillance tool for law enforcement. As the use of such surveillance tools increases, eventually, no infringement of the law will go unnoticed. As a result, even seemingly innocent acts, such as jaywalking across an empty street, will have repercussions. This possibility is often referred to as ‘overenforcement’ – which is not to be mistaken for the abuse of power. 

While our intuition might tell us that overenforcement is problematic, it is difficult to explain why. After all, in liberal democracies, we all implicitly “agree” to the rules of law through a social contract. The question that automated surveillance raises is thus the following: Even if we collectively agree that we want all citizens to abide by a certain set of laws that ensure a safe and free society, would we still want those citizens to have some degree of freedom to break the law and, consequently, harm the safety and freedom of other citizens? This question is about paternalism and proportionality. As surveillance practices become more efficient, societies must reconsider where and how to draw the line between freedom and state control. 

A second concern about state surveillance is the abuse of power. Surveillance systems help governments acquire knowledge about citizens. And knowledge, as we all know, implies power. Hence, by supporting surveillance practices, computer vision can support the exercise of social control by authoritarian regimes. It is especially for this reason that the use of computer vision for surveillance in China has been widely criticized in Western media. But even in democratic societies, where governments are expected not to abuse the power of surveillance tools, AI-powered surveillance practices still cause discomfort. Could the mere potential for abuse of surveillance as a means of social control pose a democratic problem? And how should democracies handle it?

Public-private partnerships

State surveillance is, of course, not the only form of surveillance. Surveillance can also refer to store owners trying to spot shoplifters, employers making sure their employees do not waste company time, or even dog owners keeping an eye on their furry friend at home. Perhaps just as notorious as state surveillance is surveillance by big tech companies.

Surveillance capitalism is a term coined by Shoshana Zuboff to describe a now-widespread business model that involves the datafication and commodification of people’s behavior. Put differently: big tech companies have a financial interest in surveilling people’s use of smart devices and internet platforms. The more these companies know about users, the more power they have to control their consumer choices, lifestyle, and political views.

Governments worldwide depend heavily on private companies for AI-powered surveillance solutions. And while many American companies (such as IBM) decided to pause the sale of facial recognition products to police departments once it became common knowledge that these products can support discriminatory practices, Chinese and Russian computer vision companies (e.g., SenseTime and NTechLab) continue to grow their markets. Because of these public-private partnerships, we should be worried not only about the power of states but also about the social power of tech companies.

Summary

In the previous blog of this series, I discussed the negative impact that computer vision applications can have on individuals’ privacy, identity formation, and agency. In the context of surveillance, often-heard concerns are about violating the right to privacy or other human rights. In this blog, on the other hand, I focused on the wider societal concerns that computer vision raises. More precisely, I discussed how computer vision can improve surveillance practices and how such surveillance practices grant social power to governments and big tech companies.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
