Montreal AI Ethics Institute

Democratizing AI ethics literacy


Race and AI: the Diversity Dilemma

May 30, 2022

šŸ”¬ Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Stephen Cave and Kanta Dihal]


Overview: Are white and non-white AI presented as equals? The authors argue not. While more diversity helps dissipate this effect, the problem cannot be solved by more diversity alone.


Introduction

Are white AIs and non-white AIs portrayed the same? Cave and Dihal explore just that, in a commentary responding to "More than Skin Deep" by Shelley M. Park and developing their own 2020 paper "The Whiteness of AI". I'll first examine the generalisability of white AI, with Ex Machina as a key case study. I'll then outline the dilemma the authors identify, the solutions they consider, and how those solutions can make a difference. I'll conclude with my view that acknowledging AI for what it is – the labour and the extraction of materials it involves – is where we will first start to see signs of a difference being made.

Key Insights

Whiteness and Generality

White AI conforms to certain tendencies within the AI space, such as power and professionalism, with its main attribute being generalisability. This stems from the ideal that white people can be whoever they want: they are not vilified when they take on the role of a thug or thief, nor are they stereotyped in such positions. This luxury is not enjoyed by non-white people and, subsequently, non-white AI.

When assuming such roles, non-white people are stereotyped and confined to the role they find themselves in. A non-white person playing a thief is viewed as a thief, rather than as 'a non-white person playing the role of a thief'. This is a consequence of their non-generalisability: they cannot be whoever they want. Given this non-universality, non-white people come to represent their whole race in the roles they play. To illustrate, the authors turn to the movie Ex Machina.

Ex Machina

In the movie, the main machine character, Ava, is played by a white actress. "Ava is portrayed as intelligent, eloquent, creative and powerful—attributes the White racial frame associates with Whiteness" (p. 1777). She stands in contrast to another robot in engineer Nathan's house, Kyoko, an East Asian representation presented as submissive and less intelligent through her inability to speak. A third android, Jasmine, who is black, appears without a head. Of the three androids, only the white one is presented as fully human and capable of being whatever she chooses. This is the whiteness frame described above in action, and it brings its own dilemma.

The Diversity Dilemma

On the one hand, AI is presented within a white framework that reproduces harmful stereotypes of non-white AI (as in Ex Machina). On the other hand, trying to solve this problem by clearly demarcating white and non-white AI could play into racist ideas of servitude. AI's typical role is to obey the commands of a human master; establishing distinct white and non-white AI would hark back to the idea of a non-white entity serving a white person, even in cases where a white AI serves instead. Placing white AI in the servant role may help white humans feel less guilty, but it does not absolve them of guilt, as non-white AI are still being exploited.

To tackle this problem, the authors observe three solutions.

The three solutions to the dilemma

  1. Sparrow (2020) proposes abandoning racialisation and anthropomorphism altogether. However, a study by Liao and He (2020) showed how racialisation can be beneficial: it helps establish strong relationships between a human and an avatar of a similar skin tone. Anthropomorphism, for its part, proves extremely useful in building relationships of trust between humans and AI.
  2. A second solution could involve putting non-white AI in roles that break the stereotypical mould, such as Maeve in Westworld, disregarding the role assigned to her by white designers and leading an android revolution.
  3. A third comes in presenting non-white AI within powerful and intelligent positions to counteract any stereotypes further.

Nevertheless, the authors note that any long-lasting solution to the problem of whiteness requires a change in our perception of AI. We need to present AI in terms of the labour costs it involves, not just the genius ideas of Hollywood directors and Silicon Valley billionaires.

Between the lines

I agree wholeheartedly with this change in perception. A brilliant resource here is the Anatomy of an AI System by Crawford and Joler, which lays out the whole lifecycle of an Amazon Alexa in clear view, noting every point of extraction and exploitation involved. Seeing an AI for what it involves and "looking under the hood" (as advocated by Dr Maya Indira Ganesh in this podcast) will help address the issues that whiteness brings. Without doing so, we reproduce the same exploitation, just offshore.

Further resources

Liao, Y., & He, J. (2020). The racial mirroring effects on human-agent in psychotherapeutic conversation. In Proceedings of the 25th International Conference on Intelligent User Interfaces (IUI '20). ACM, New York, pp. 430–442.

Sparrow, R. (2020). Robotics has a race problem. Science, Technology, & Human Values, 45, 538–560. https://doi.org/10.1177/0162243919862862

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


  • © 2025 Montreal AI Ethics Institute.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.
  • Learn more about our open access policy here.