Montreal AI Ethics Institute

The struggle for recognition in the age of facial recognition technology

May 15, 2022

🔬 Research summary by Rosalie Waelen, a Ph.D. candidate in the Ethics of AI at the University of Twente in the Netherlands. Her research is part of the EU Horizon 2020 ITN project PROTECT under the Marie Skłodowska-Curie grant agreement No 813497.

[Original paper by Rosalie Waelen]


Overview: Facial recognition technology does not always live up to its promises – numerous examples show that it often fails to recognize people’s identity or characteristics. In this article, I argue that such misrecognition by FRT can harm people’s self-respect and self-esteem.


Introduction

What happens when facial recognition technology (hereafter FRT) fails to adequately recognize our identity or characteristics? We are misidentified and misunderstood, which can be inconvenient and sometimes even lead to discriminatory treatment. In recent years we have become painfully aware of this through numerous examples of discrimination by FRT. But if we follow the work of philosophers Axel Honneth and Charles Taylor on the topic of recognition, we find that misrecognition by FRT may have another problematic consequence: it can harm our self-development and sense of self-worth.

Key Insights

The philosophy of recognition

In philosophy, the concept of recognition refers to the social acknowledgement of certain identities. Honneth and Taylor, alongside other philosophers, argue that such social recognition can be obtained on three different levels: 1) we can be recognized in the private sphere, by having people close to us who care about our needs; 2) we can be recognized in the legal sphere, by having the same rights as our fellow human beings; and 3) we can be recognized on a societal level, by receiving acknowledgement for our societal roles and contributions.

A lack of recognition harms people’s sense of self-worth. According to Honneth and Taylor, we need social recognition in order to develop self-confidence, self-respect and self-esteem. Moreover, they argue that a just society is one in which everyone receives due recognition.

Facial recognition’s failures

FRT can fail to recognize us in at least three distinct ways. First, FRT systems aimed at identifying persons (e.g. as ‘Rosalie Waelen’ or ‘client number 123’) can misidentify a person. This can have serious consequences, such as innocent people being arrested by the police.

Second, FRT systems aimed at categorizing persons (e.g. as ‘female’, ‘White’, or ‘worried’) can misrecognize them by attributing the wrong characteristics to a person. Infamous examples of this form of misrecognition are the cases in which images of Black people were mistakenly categorized as ‘gorillas’ or ‘primates’.

A third way in which FRT can misrecognize us is by categorizing or profiling us in ways that do not resonate with our own sense of who we are – our ‘subjective identity’. For example, when an FRT system is programmed to categorize people’s gender only as ‘male’ or ‘female’, it cannot do justice to those who identify as non-binary.
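To make this structural limitation concrete, consider a minimal sketch of a classifier head over a fixed label set. It is purely illustrative and not drawn from the original paper; the label set, shapes, and weights are all hypothetical. Because the model’s output space is decided at design time, whatever it predicts is confined to that taxonomy, so a non-binary person is miscategorized by construction, no matter how well the model performs.

```python
# Illustrative sketch only: label set, shapes, and weights are hypothetical.
import numpy as np

GENDER_LABELS = ["male", "female"]  # taxonomy fixed before deployment

def classify_gender(face_embedding: np.ndarray, weights: np.ndarray) -> str:
    """Return the most probable label from the fixed taxonomy."""
    logits = weights @ face_embedding              # one score per label
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax
    # The argmax indexes into GENDER_LABELS, so the output can never
    # fall outside the predefined taxonomy.
    return GENDER_LABELS[int(np.argmax(probs))]

rng = np.random.default_rng(0)
embedding = rng.normal(size=128)                      # stand-in face embedding
weights = rng.normal(size=(len(GENDER_LABELS), 128))  # stand-in trained weights
print(classify_gender(embedding, weights))            # always 'male' or 'female'
```

The point is not that this code contains a bug; the misrecognition lies in the output space itself, which no amount of added accuracy can fix.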

Following the philosophy of recognition, I understand these failures of FRT as misrecognition in the philosophical, normative sense. This is especially so because FRT has been found to systematically misrecognize specific groups: darker-skinned persons and women, as well as those who do not fit normalized labels (take the example of non-binary persons). By constantly being misidentified and miscategorized, these groups receive the message that they are not equally important members of society. In other words, they are not recognized as having equal rights or equally valuable roles in society. As a result, their development of self-respect and self-esteem may be compromised.
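That systematic claim is also a measurable one: audits in the spirit of the Gender Shades study compare error rates across demographic subgroups, and when errors cluster in particular groups rather than falling randomly, the misrecognition is structural. A minimal sketch of such a disaggregated audit, using entirely invented data, might look like this:

```python
# Hypothetical sketch of a disaggregated error audit; all data are invented.
from collections import defaultdict

# (subgroup, was the person correctly recognized?) from an imagined test set
results = [
    ("darker-skinned women", False), ("darker-skinned women", False),
    ("darker-skinned women", True),  ("darker-skinned women", False),
    ("lighter-skinned men", True),   ("lighter-skinned men", True),
    ("lighter-skinned men", True),   ("lighter-skinned men", False),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

# A persistent gap between subgroups is what "systematic misrecognition"
# means in measurable terms: the harm is not randomly distributed.
for group, n in totals.items():
    print(f"{group}: error rate {errors[group] / n:.0%}")
```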

Can we be misrecognized by a technology?

Recognition is usually discussed as a social relation between individual persons or groups. But in this article, I suggest that technology can recognize and misrecognize us in the same ways as humans can. Of course, technology cannot intentionally recognize individuals, or receive social recognition itself. But we can experience our relation to technology as similar to our relation to other social actors. As a result, we can suffer psychological harm when a technology fails to recognize our needs, rights or social contributions.

Between the lines

This article makes use of social philosophy (namely, the philosophy of recognition) to uncover ethical issues that are usually left out of discussions about the ethics of FRT or AI in general. This approach shows that social and political analyses and critiques can inform and improve the ethical analysis of new technologies. FRT’s ethical implications cannot be resolved by improving the technology alone; they may also require a change in societal norms and practices.
