
The Struggle for AI’s Recognition: Understanding the Normative Implications of Gender Bias in AI with Honneth’s Theory of Recognition

June 10, 2022

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Rosalie Waelen & Michał Wieczorek]


Overview: Current ethical guidelines do not consider the effect that gender bias in AI has on women’s self-worth. Hence, exploring AI systems systemically as well as systematically proves crucial to exposing this truth.


Introduction

Waelen and Wieczorek adopt Axel Honneth’s theory of recognition to tackle gender bias in AI more adequately. The effect that gender bias in AI has on women’s self-worth is not currently considered by ethical guidelines. Hence, the authors’ application of Honneth’s theory to AI and gender bias aims to contribute to the debate about what recognition means in today’s digital age. To do so, we shall explore their interpretation of Honneth’s theory, how it applies to AI, and how we might move beyond this reality.

Key Insights

Honneth’s theory of recognition

Our social relations have a significant influence on our identity and personality. Not only do the interactions we have affect how we see ourselves, but so do the experiences we are denied. To explain this, Honneth presents his three relations of recognition.

Love

Recognition concerning love pertains to our physical and emotional needs being affirmed or denied by others. Love recognises the individual and their needs as valuable. While this relation is primarily situated between mother and child, Honneth shows that it resurfaces in later life in the shape of basic self-confidence: the more love shown, the more self-confidence develops.

In relation to AI, women suffer through the misrecognition of their uniqueness and particular needs. Being misrepresented in datasets, which eventually leads to biased system outcomes against women, contributes to a sense of low self-confidence and self-worth.

Rights

The authors relate rights to recognition in terms of making decisions that are valued and respected by others. Here, we recognise a person’s capacity as a moral agent, making decisions that others adhere to and listen to. Being valued by others in this way leads to a sense of self-respect.

As previously mentioned, AI systems disrespect women in this way by not allowing for their full inclusion in datasets and design considerations. This renders them helpless in shaping the future direction of the technology.

Solidarity

Recognising others through solidarity relates to people’s contributions to society and how others evaluate them, which ultimately shapes their level of self-esteem. In terms of AI, misrecognition involves under-appreciating women’s contributions to society and trivialising their role within it.

AI gender bias

With Honneth’s theory in mind, bias in AI can take three different forms (drawing on Friedman and Nissenbaum): pre-existing, technical and emergent bias.

  1. Pre-existing bias entails the system reproducing existing human biases, whether through the system’s design or the data used.
  2. Technical bias is where systems draw problematic outcomes from the training data provided.
  3. Emergent bias occurs when a system is used in a context or for a specific purpose not intended by its developers.

These biases can manifest themselves in three different ways:

Literally misrecognising women

Some AI systems are less accurate at recognising women’s voices and faces than men’s. Consequently, women’s interactions with such technology become coarser and more frustrating. Women are treated as “second-rate users” (p. 7), demonstrating a misrecognition in terms of love (their needs are not met) and solidarity (their self-esteem is damaged).
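
To make this misrecognition concrete: audits of such systems (for example, the Gender Shades study of facial-analysis tools) quantify it as a per-group accuracy gap. Below is a minimal, hypothetical sketch of that measurement; the toy data and function name are illustrative assumptions of ours, not the authors’ method.

```python
# Minimal sketch: measure a system's accuracy separately per
# demographic group and compare. All data here is illustrative.
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Return the fraction of correct predictions for each group."""
    correct, total = defaultdict(int), defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += int(pred == label)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical recognition outputs: perfect for the "m" group,
# half wrong for the "f" group.
preds  = ["A", "B", "A", "C", "B", "C", "A", "B"]
truth  = ["A", "B", "A", "C", "A", "B", "A", "B"]
gender = ["m", "m", "m", "m", "f", "f", "f", "f"]

print(accuracy_by_group(preds, truth, gender))
# -> {'m': 1.0, 'f': 0.5}; the gap is the "second-rate user" effect.
```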

Reinforcing stereotypes about women’s role and status in society

Voice assistants, which are mainly equipped with female-sounding voices, are one example of stereotype reinforcement: they lead users to associate women with a servile existence. Such underpinnings present a false narrative that women are only to adopt specific roles, while also undervaluing the contributions women have made outside those roles.

Excluding female needs, perspectives and values

Women are rarely granted a seat at the table in technology companies, meaning their views and perspectives are absent. Subsequently, a norm arises in which the male gaze, design and priorities become central to all walks of technological life. This is reflected in how female influencers are at a disadvantage in social media outreach compared to men.

With this reality in mind, the authors propose different avenues to tackle this issue:

  1. Utilising more inclusive datasets and researching how best to include different female experiences within technology (a simple version of such a check is sketched after this list).
  2. Presenting products in a less gendered form to avoid association with gendered stereotypes.
  3. Treating the problems associated with AI not only as design-specific but also as societal: analysing the power structures involved in designing an AI system is as important as exploring the system itself.
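
The first of these avenues lends itself to a concrete check. Below is a minimal, hypothetical sketch of a dataset representation audit one might run before training; the field name, threshold and toy data are illustrative assumptions rather than anything the authors prescribe.

```python
# Minimal sketch: flag groups whose share of a dataset falls below
# a chosen threshold before the data is used for training.
from collections import Counter

def representation_report(records, field="gender", floor=0.4):
    """Report each group's share and whether it falls below `floor`."""
    counts = Counter(r[field] for r in records)
    n = sum(counts.values())
    return {group: {"share": round(count / n, 2),
                    "under_represented": count / n < floor}
            for group, count in counts.items()}

# Toy dataset: 70% of records labelled "m", 30% labelled "f".
data = [{"gender": "m"}] * 70 + [{"gender": "f"}] * 30
print(representation_report(data))
# -> {'m': {'share': 0.7, 'under_represented': False},
#     'f': {'share': 0.3, 'under_represented': True}}
```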

Between the lines

A crucial insight I draw from this paper is the need to analyse AI problems systemically as well as systematically. Whether through the prioritisation of the white male experience or through biased historical data, tackling AI’s ethical problems cannot centre on system changes alone. If we are to develop AI that augments our exploration of ourselves rather than detracts from it, we must look at the circumstances in which the question arose in the first place.
