Montreal AI Ethics Institute

Computer Vision’s implications for human autonomy

June 16, 2023

✍️ Column by Rosalie Waelen, a philosopher and AI ethicist pursuing her Ph.D. at the University of Twente in the Netherlands.


Overview: The increasing development and use of computer vision applications give rise to various ethical and societal issues. In this blog, I discuss computer vision’s impact on privacy, identity, and human agency, as part of my column on the ethics of computer vision.


Introduction

Not everyone is familiar with the term ‘computer vision.’ However, it is safe to assume that, at least in the Global North, everybody has encountered or used a computer vision application at some point. Think of Snapchat filters, automated passport controls, or cash desk-free stores (such as Amazon Go stores). All of these technologies make use of computer vision. As with any form of AI, it is important to carefully reflect on how this technology might impact our lives and societies.

Concerns raised in the existing literature on the ethics of computer vision (see the previous blog) are, for example, that computer vision applications violate people’s privacy, that errors in these systems can lead to discrimination, and that the systems are opaque. In this blog, I discuss these and other concerns related to computer vision in more detail.

Privacy

Privacy is both a legal and a moral right. This means that a technology can be considered privacy-invasive even if it complies with established privacy and data protection legislation. Privacy is probably the most frequently raised concern in relation to smart camera applications. In fact, the concept of privacy has a history intimately tied to camera technologies. Warren and Brandeis’ famous 1890 paper – which established the idea of a right to privacy – was in part a response to the emergence of images of persons in the press.

We can distinguish three different types of privacy-related concerns in the context of computer vision applications. First, computer vision applications challenge people’s ability to control their personal information. While this problem applies to many modern technologies, computer vision makes informational control especially difficult, because it is one’s appearance that is turned into data. This makes it even harder for laypeople to grasp exactly what information is at stake.

Another privacy-related problem has to do with anonymity. Computer vision applications, and facial recognition in particular, make it increasingly difficult to remain anonymous in public spaces. Anonymity is not only valuable to those who have something to hide: it creates a safe space where people can be who they want to be and develop themselves. In this way, anonymity promotes freedom and autonomy.

Finally, a lack of privacy often goes hand-in-hand with a sense of discomfort. Constantly being “watched” by smart camera applications can make some people uncomfortable. This discomfort can, in turn, lead to so-called chilling effects: people refrain from acting in certain ways because they feel observed. Such behavior change becomes especially concerning when it means that people refrain from exercising their rights – such as the right to protest.

Identity

The concept of privacy is closely related to the concept of identity. Having informational control allows people to shape how their identity is perceived, and having anonymity gives them the space to be and become their authentic selves. Nonetheless, it is important to think about computer vision’s impact on identity independently of the concept of privacy.

Computer vision applications, particularly facial recognition systems, undermine people’s ability to communicate their identity to the outside world – an ability also known as testimonial agency. For example, if a system categorizes me as ‘female’ and ‘European,’ I do not get a chance to contest these labels or to emphasize elements of my identity that I find more important.

As Crawford and Paglen (2021) also point out, the labels that computer vision systems assign to human persons are not neutral but political. Labels express a certain vision of identity, and biased labeling reflects the forms of discrimination deeply embedded in our societies. Furthermore, misrecognizing people’s unique identity through irrelevant or false labels can harm their development of self-respect and self-esteem (as I have argued elsewhere).

Human agency

The increasing presence and use of computer vision applications affect human agency in various ways. As mentioned, labels given to people based on their appearance can affect their ability to shape how others perceive them.

Computer vision can also impact people’s ability to know and understand their environment, both positively and negatively. On the one hand, computer vision systems are very complex. Consequently, it can be difficult for laypeople to understand how these systems reach conclusions and what information they can retrieve based on a person’s appearance. This is why there is so much talk about the need for transparency in AI. On the other hand, computer vision can promote people’s understanding of their environment. Think, for instance, of Google Lens – this application can help you translate text written in a foreign language or identify the insects or plants you find in your garden.

Finally, computer vision can harm an ability that philosophers like to call moral autonomy. Moral autonomy implies acting out of moral duty rather than merely in accordance with moral duty. In other words, it means doing the right thing not because you are forced to, but because you know it is right. Computer vision can affect this ability because it is a uniquely efficient tool for law enforcement. When technology surveils citizens’ every move, it pressures them into compliance; people are then no longer encouraged to become virtuous citizens who act morally because they know it is right, rather than because technology forces them to.

Summary

Computer vision applications can seriously impact the privacy, identity formation, and human agency of individuals and groups. For example, computer vision can change how we see ourselves and communicate our identity, fundamentally alter our sense of privacy, or diminish our ability to follow our own (moral) judgment. What these issues have in common is that they concern abilities and conditions that support people’s autonomy. However, they do not cover all ethically relevant ways in which computer vision can impact our lives and societies. In the next blogs, I will zoom in on computer vision’s potential for social control and on the sustainability of computer vision.


© Montreal AI Ethics Institute 2024. This work is licensed under a Creative Commons Attribution 4.0 International License.