🔬 Research summary by Connor Wright, our Partnerships Manager.
[Original paper by Rosalie Waelen & Michał Wieczorek]
Overview: Ethical guidelines do not currently consider the effect gender bias in AI has on women’s self-worth. Hence, exploring AI systems systemically as well as systematically proves crucial in exposing this gap.
Axel Honneth’s theory of recognition is adopted to try to tackle gender bias in AI adequately. Waelen and Wieczorek’s application of Honneth’s theory to AI and gender bias aims to contribute to the debate about what recognition means in today’s digital age. To do so, we shall explore their interpretation of Honneth’s theory, how it applies to AI, and how we can move beyond this reality.
Honneth’s theory of recognition
Our social relations have a significant influence on our identity and personality. Not only do the interactions we have affect how we see ourselves; so do the experiences we don’t have. To explain, Honneth presents his three relations of recognition.
Recognition concerning love pertains to our physical and emotional needs being affirmed or denied by others. Love recognises the individual and their needs as valuable. While primarily situated between a mother and child, Honneth also shows that it comes to light in later life in the shape of basic self-confidence. The more love shown, the more self-confidence present.
In relation to AI, women suffer the misrecognition of their individual uniqueness and particular needs. Being misrepresented in datasets, which eventually leads to the system producing biased outcomes against women, contributes to a sense of low self-confidence and self-worth.
The authors relate rights to recognition in terms of making decisions that are valued and respected by others. Here, we recognise a person’s capacity to be a moral agent, making decisions that others listen to and adhere to. Being worthy of others’ value in this way leads to a sense of self-respect.
As previously mentioned, AI systems disrespect women in this way by not allowing for their full inclusion in datasets and design considerations. This renders them helpless in shaping the future direction of the technology.
Recognising others through solidarity relates to people’s contributions to society and how they are evaluated by others, which eventually leads to differing levels of self-esteem. In terms of AI, misrecognition would involve under-appreciating women’s contributions to society and their role within it being trivialised.
AI gender bias
With Honneth’s theory in mind, bias in AI can arise in three different ways (drawing on Friedman and Nissenbaum): pre-existing, technical and emergent bias.
- Pre-existing bias entails the system reproducing existing human biases, which enter through the system’s design or the data used.
- Technical bias arises from the technical constraints and limitations of the system itself, which lead it to draw problematic outcomes.
- Emergent biases occur when a system is used in a context or for a specific purpose not intended by the developers.
These biases can manifest themselves in three different ways:
Literally misrecognising women
Some AI systems are less accurate at recognising women’s voices and faces than men’s. Consequently, women’s interactions with such technology become coarser and more frustrating. Women are treated as “second-rate users” (p. 7), demonstrating misrecognition in terms of love (their needs are not met) and solidarity (their self-esteem is damaged).
Reinforcing stereotypes about women’s role and status in society
Examples of stereotype reinforcement can be found in voice assistants being mainly equipped with female-sounding voices. This leads users to associate women with a servile existence. Such underpinnings present a false narrative that women should adopt only specific roles, while also undervaluing past contributions made by women outside of those roles.
Excluding female needs, perspectives and values
Women are not often granted a seat at the table in technology companies, meaning their views and perspectives are absent. Subsequently, a norm arises in which the male gaze, design and priorities become central to all walks of technological life. This is reflected in how female influencers are at a disadvantage compared with men when it comes to social media outreach.
With this reality in mind, the authors propose different avenues to tackle this issue:
- Utilising more inclusive datasets and researching how best to include different female experiences within technology.
- Products could present themselves in a less gendered form to avoid association with gendered stereotypes.
- We need to treat the problems associated with AI not only as design-specific but also societal. Analysing the power structures involved when designing an AI is as important as exploring the system itself.
Between the lines
A crucial insight I draw from this paper is the need to analyse AI problems systemically as well as systematically. Whether through the prioritisation of the white male experience or through biased historical data, tackling AI’s ethical problems should not centre on system changes alone. If we are to develop AI that augments our exploration of ourselves rather than detracts from it, we must look at the circumstances in which the question arose in the first place.