
The state of the debate on the ethics of computer vision

May 31, 2023

✍️ Column by Rosalie Waelen, a philosopher and AI ethicist pursuing her Ph.D. at the University of Twente in the Netherlands.


Overview: In this blog, I present an overview of the literature on the ethics of computer vision (CV). To do so, I conducted a targeted review of the literature on AI ethics, focusing on general introductions to the field, on the one hand, and discussions of CV or specific CV applications, on the other. To find articles focused on the ethics of CV, I used combinations of the following search terms on Google Scholar: ‘Computer Vision,’ ‘Face recognition,’ ‘Facial recognition,’ ‘Video analytics,’ ‘Ethics,’ ‘Ethical,’ and ‘Implications.’ This search may not have covered all work available on the topic, but it suffices to offer insight into the state of the debate on the ethical implications of CV.


Introduction

While computer vision is crucial in AI research, it is not as central in AI ethics. In this blog, I offer a brief (non-systematic) review of the existing literature on the ethical implications of computer vision.

In 2012, the computer vision model AlexNet won the ImageNet competition and put neural networks on the map as the leading approach to developing artificial intelligence. Since then, computer vision research has been central to AI research in general. Computer vision (hereafter ‘CV’) is the subfield of AI concerned with automating the analysis of visual data so that computers can interpret, describe, and respond to images and videos. CV can be used in many different ways, serving purposes such as surveillance, business intelligence, healthcare, or personal convenience.

The ethics of CV in AI ethics overviews

In general discussions of AI ethics, CV and its various applications do not play a dominant role. The Stanford Encyclopedia of Philosophy entry on ‘Ethics of Artificial Intelligence and Robotics’ (Müller, 2020) does not mention CV at all and refers to face recognition only once. The same goes for the Internet Encyclopedia of Philosophy entry on ‘Ethics of Artificial Intelligence’ (Gordon & Nyholm, 2021).

In The Oxford Handbook of Ethics of AI (Dubber, Pasquale & Das, 2020), there is not a single chapter dedicated to CV or CV applications. However, CV applications are, of course, among the examples discussed in some of the chapters. In Coeckelbergh’s book AI Ethics (2020), computer vision is briefly mentioned as one of many AI techniques. Facial recognition repeatedly pops up in examples – regarding surveillance and privacy, data and biases, Walmart’s analysis of customers, and Facebook’s photo tagging. But Coeckelbergh discusses neither the technology nor the example use cases in much detail. 

Crawford’s book Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (2021) deals with CV in more detail. For instance, Crawford discusses how energy-hungry the development of CV models is, the problematic assumption that images are apolitical and can be given a single label, and the unscientific basis of CV-based emotion recognition. However, CV is again only brought up as an example of certain issues and is not given a central place in the outline of the discussion on AI ethics.

Micro-ethical work on CV 

CV might not have a central place in the ethics of AI, but that does not mean that the ethics of CV has been entirely ignored. As noted above, CV is often mentioned as an example case in discussions about AI ethics – particularly in relation to surveillance and privacy or biased data and discrimination. Furthermore, within the broader literature, there are some articles that focus specifically on CV or specific CV applications.

I found two sources that cover the ethics of CV in general. The first is a Master’s thesis by Lauronen (2017), in which six ethical themes in computer vision are identified based on a literature review: espionage, identity theft, malicious attacks, copyright infringement, discrimination, and misinformation. The second is a conference paper by Skirpan and Yeh (2017) about designing a moral compass for CV research. Skirpan and Yeh categorize five risks in CV: privacy violations, discrimination, security breaches, spoofing and adversarial inputs, and psychological harm.

Other articles address specific applications or use cases of CV. Coupland and colleagues (2009) discuss how ethical considerations could or should be part of developing CV applications, based on a case study of a CV system for person tracking, occupancy, and fall detection. Huffer et al. (2019) examine the ethics of CV in the context of human remains trafficking. Dufresne-Camaro and colleagues (2020) wrote a paper on CV research for the global South and the risks related to the uses of CV there. And on the webpage excavating.ai, Crawford and Paglen discuss in detail the problems surrounding ImageNet – the primary database used in CV research.

Finally, in a report for the American Civil Liberties Union, Stanley (2019) discusses the dangers of AI cameras and video analytics. Those dangers include chilling effects, the new types of data smart cameras can gather, the unscientific basis of certain forms of analytics, discriminatory effects, and the possibility of over-enforcement and abuse of the technology.

The ethics of facial recognition 

The topic in CV that has received the most attention is facial recognition. As early as 2004, Brey discussed the ethics of using facial recognition in public spaces, highlighting the problem of error, the problem of function creep, and privacy (Brey, 2004). Buolamwini and Gebru (2018) discuss algorithmic fairness in the context of facial analysis systems that classify a person’s gender based on an image of their face. Blank and colleagues (2019) analyze the ethics of facial recognition in light of human rights, error rates, and bias. Moraes and colleagues (2020) discuss the use of facial recognition in Brazil and identify as risks the lack of a legal basis, inaccuracy, the normalization of surveillance, and a lack of transparency. Last but not least, Waelen (2022) – that’s me – considers the psychological harm of misrecognition by facial recognition systems.

Conclusion

While CV plays a crucial role in AI research, it is not as central in AI ethics. Of course, many general concerns in AI ethics and related disciplines might apply to the CV context – think about surveillance and privacy, bias and transparency, or automation and unemployment. And, as the overview in this blog has shown, some literature is available on the ethics of CV or specific CV applications. To strengthen the debate on this topic, I will write a series of blogs on the ethics of computer vision for the Montreal AI Ethics Institute.

