Montreal AI Ethics Institute

Democratizing AI ethics literacy


The state of the debate on the ethics of computer vision

May 31, 2023

✍️ Column by Rosalie Waelen, a philosopher and AI ethicist pursuing her Ph.D. at the University of Twente in the Netherlands.


Overview: In this blog, I present an overview of the literature on the ethics of CV. To do so, I conducted a targeted review of the literature on AI ethics, focusing on general introductions to the field on the one hand, and on discussions of CV or specific CV applications on the other. To find focused articles on the ethics of CV, I used combinations of the following search terms on Google Scholar: ‘Computer Vision,’ ‘Face recognition,’ ‘Facial recognition,’ ‘Video analytics,’ ‘Ethics,’ ‘Ethical,’ and ‘Implications.’ This search may not have covered all work available on the topic, but it suffices to offer insight into the state of the debate on the ethical implications of CV.


Introduction

While computer vision is crucial in AI research, it is not as central in AI ethics. In this blog, I offer a brief (non-systematic) review of the existing literature on the ethical implications of computer vision.

In 2012, the computer vision model AlexNet won the ImageNet competition, putting neural networks on the map as the leading approach to developing artificial intelligence. Computer vision research has been central to AI research ever since. Computer vision (hereafter ‘CV’) is the subfield of AI concerned with automating visual data analysis so that computers can interpret, describe, and respond to images and videos. CV can be used in many different ways, serving purposes ranging from surveillance and business intelligence to healthcare and personal convenience.

The ethics of CV in AI ethics overviews

In general discussions of AI ethics, CV and its various applications do not play a dominant role. The Stanford Encyclopedia of Philosophy entry on ‘Ethics of Artificial Intelligence and Robotics’ (Müller, 2020) does not mention CV at all and only refers to face recognition once. The same goes for the Internet Encyclopedia of Philosophy entry on ‘Ethics of Artificial Intelligence’ (Gordon & Nyholm, 2021).

In The Oxford Handbook of Ethics of AI (Dubber, Pasquale & Das, 2020), there is not a single chapter dedicated to CV or CV applications. However, CV applications are, of course, among the examples discussed in some of the chapters. In Coeckelbergh’s book AI Ethics (2020), computer vision is briefly mentioned as one of many AI techniques. Facial recognition repeatedly pops up in examples – regarding surveillance and privacy, data and biases, Walmart’s analysis of customers, and Facebook’s photo tagging. But Coeckelbergh discusses neither the technology nor the example use cases in much detail. 

Crawford’s book The Atlas of AI: Power, Politics, and the Planetary Cost of Artificial Intelligence (2021) deals with CV in more detail. For instance, Crawford discusses how energy-hungry the development of CV models is, the problematic assumption that images are apolitical and can be given a single label, and the unscientific basis of CV-based emotion recognition. However, CV is again only brought up as an example for certain issues and not given a central place in the outline of the discussion on AI ethics.

Micro-ethical work on CV 

CV might not have a central place in the ethics of AI, but that does not mean the ethics of CV has been entirely ignored. As noted, CV is often mentioned as an example in discussions about AI ethics – particularly in relation to surveillance and privacy, or biased data and discrimination. Furthermore, within the broader literature, some articles focus specifically on CV or on specific CV applications.

I found two sources that cover the ethics of CV in general. The first is a Master’s thesis by Lauronen (2017), in which six ethical themes in computer vision are identified based on a literature review: espionage, identity theft, malicious attacks, copyright infringement, discrimination, and misinformation. The second is a conference paper by Skirpan and Yeh (2017) about designing a moral compass for CV research. Skirpan and Yeh categorize five risks in CV: privacy violations, discrimination, security breaches, spoofing and adversarial inputs, and psychological harm.

Other articles focus on specific applications or use cases of CV. Coupland and colleagues (2009) discuss how ethical considerations could or should be part of developing CV applications, based on a case study of a CV system for person tracking, occupancy, and fall detection. Huffer et al. (2019) examine the ethics of CV in the context of human remains trafficking. Dufresne-Camaro and colleagues (2020) survey CV research for the global South and the risks related to the uses of CV there. And on the webpage excavating.ai, Crawford discusses in detail the problems surrounding ImageNet – the primary database used in CV research.

Finally, in a report for the American Civil Liberties Union, Stanley (2019) discusses the dangers of AI cameras and video analytics. Those dangers include chilling effects, the new types of data smart cameras can gather, the unscientific basis of certain forms of analytics, discriminatory effects, and the possibility of over-enforcement and abuse of the technology.

The ethics of facial recognition 

The CV topic that has received the most attention is facial recognition. In 2004, Brey discussed the ethics of using facial recognition in public spaces, highlighting the problem of error, the problem of function creep, and privacy concerns (Brey, 2004). Buolamwini and Gebru (2018) discuss algorithmic fairness in the context of facial analysis systems that classify a person’s gender based on an image of their face. Blank and colleagues (2019) analyze the ethics of facial recognition in light of human rights, error rates, and bias. Moraes and colleagues (2020) discuss the use of facial recognition in Brazil and identify as risks the lack of a legal basis, inaccuracy, the normalization of surveillance, and a lack of transparency. Last but not least, Waelen (2022) – that’s me – considers the psychological harm of misrecognition by facial recognition systems.

Conclusion

While CV plays a crucial role in AI research, it is not as central in AI ethics. Of course, many general concerns in AI ethics and related disciplines might apply to the CV context – think about surveillance and privacy, bias and transparency, or automation and unemployment. And, as the overview in this blog has shown, some literature is available on the ethics of CV or specific CV applications. To strengthen the debate on this topic, I will write a series of blogs on the ethics of computer vision for the Montreal AI Ethics Institute.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.



© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.