
The struggle for recognition in the age of facial recognition technology

May 15, 2022

šŸ”¬ Research summary by Rosalie Waelen, a Ph.D. candidate in the Ethics of AI at the University of Twente in the Netherlands. Her research is part of the EU Horizon 2020 ITN project PROTECT under the Marie Skłodowska-Curie grant agreement No 813497.

[Original paper by Rosalie Waelen]


Overview: Facial recognition technology does not always live up to its promises – numerous examples show that it often fails to recognize people’s identity or characteristics. In this article, I argue that such misrecognition by FRT can harm people’s self-respect and self-esteem.


Introduction

What happens when facial recognition technology (hereafter FRT) fails to adequately recognize our identity or characteristics? We are misidentified and misunderstood, which can be inconvenient and sometimes even lead to discriminatory treatment. In recent years, we have become painfully aware of this through numerous examples of discrimination by FRT. But if we follow the work of philosophers Axel Honneth and Charles Taylor on the topic of recognition, we find that misrecognition by FRT may have another problematic consequence: it can harm our self-development and sense of self-worth.

Key Insights

The philosophy of recognition

In philosophy, the concept of recognition refers to the social acknowledgement of certain identities. Honneth and Taylor, alongside other philosophers, argue that such social recognition can be obtained on three different levels: 1) we can be recognized in the private sphere, by having people close to us who care about our needs; 2) we can be recognized in the legal sphere, by having the same rights as our fellow human beings; and 3) we can be recognized on a societal level, by receiving acknowledgement for our societal roles and contributions.

A lack of recognition harms people’s sense of self-worth. According to Honneth and Taylor, we need social recognition in order to develop self-confidence, self-respect and self-esteem. Moreover, they argue that a just society is one in which everyone receives due recognition.

Facial recognition’s failures

FRT can fail to recognize us in at least three distinct ways. First, FRT systems aimed at identifying persons (e.g. as ā€˜Rosalie Waelen’ or ā€˜client number 123’) can misidentify a person. This can have serious consequences, such as innocent people being arrested by the police.

Second, FRT aimed at categorizing persons (e.g. as ā€˜female’, ā€˜White’, ā€˜worried’) can misrecognize people by attributing the wrong characteristics to them. Infamous examples of this form of misrecognition are the cases in which images of Black people were mistakenly categorized as ā€˜gorillas’ or ā€˜primates’.

Third, FRT can misrecognize us by categorizing or profiling us in ways that do not resonate with our own sense of who we are – our ā€˜subjective identity’. For example, when an FRT system is programmed to categorize people’s gender only as ā€˜male’ or ā€˜female’, it cannot do justice to those who identify as non-binary.
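
To illustrate the structural nature of this third failure mode, here is a minimal, hypothetical sketch (in Python, not based on any real FRT system or API) of a categorizer whose label set is fixed at design time. Whatever face it sees, it can only ever answer with one of its predefined labels, so anyone whose identity falls outside that set is misrecognized by construction:

    # Hypothetical sketch: a categorizer with a closed label set.
    # 'score_gender' stands in for a trained model; the point is only
    # that the output space is fixed before any image is ever seen.

    GENDER_LABELS = ("male", "female")  # taxonomy chosen at design time

    def score_gender(face_embedding):
        """Placeholder for a real model's output logit."""
        return sum(face_embedding)

    def categorize_gender(face_embedding):
        # The result is forced into one of two labels; identities
        # outside the label set simply cannot be expressed.
        score = score_gender(face_embedding)
        return GENDER_LABELS[0] if score >= 0 else GENDER_LABELS[1]

    # A non-binary person's self-identification is not representable:
    print(categorize_gender([0.2, -0.7, 0.9]))  # always 'male' or 'female'

The categories a system can output are a design decision, not a fact about the person in front of the camera; this is part of what makes the resulting misrecognition systematic rather than incidental.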

Following the philosophy of recognition, I understand these failures of FRT as misrecognition in the philosophical, normative sense. This is especially so because FRT has been found to systematically misrecognize specific groups: darker-skinned persons and women, as well as those who do not fit normalized labels (take the example of non-binary persons). By constantly being misidentified and miscategorized, these groups receive the message that they are not equally important members of society. In other words, they are not recognized as having equal rights or equally valuable roles in society. As a result, their development of self-respect and self-esteem may be compromised.

Can we be misrecognized by a technology?

Recognition is usually discussed as a social relation between individual persons or groups. But in this article, I suggest that technology can recognize and misrecognize us in much the same ways as humans can. Of course, a technology cannot intentionally recognize individuals, or receive social recognition itself. But we can experience our relation to technology as being similar to our relation to other social actors. As a result, we can suffer psychological harm when a technology fails to recognize our needs, rights or social contributions.

Between the lines

This article makes use of social philosophy (namely, the philosophy of recognition) to uncover ethical issues that are usually left out of discussions about the ethics of FRT, or of AI in general. This approach shows that social and political analyses and critiques can inform and improve the ethical analysis of new technologies. FRT’s ethical implications cannot be resolved by improving the technology alone; they may also require a change in societal norms and practices.

