Montreal AI Ethics Institute

Democratizing AI ethics literacy


Combatting Anti-Blackness in the AI Community

August 10, 2021

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Devin Guillory]


Overview: Racism has the potential to establish itself in every corner of society, and the AI community is no different. Through a mix of observations and advice, the paper argues for change and points to the academic environment's potential to bring it about. While some of the steps involved carry risk, the danger of not taking them is even greater.


Introduction

How badly does the AI community need diversity? What is academia’s role in this process? Through an academic’s lens, the paper touches on how members of the AI community can help combat systemic racial injustice. Acknowledging how racism permeates every corner of our society is an important first step. However, further disparities remain unaddressed, and the AI community will suffer if nothing is done.

Key Insights

Discrepancies in resources

The AI field harbours invisible barriers to entry that affect candidates of different ethnicities unequally. These barriers fall into three categories:

  • Physical discrepancies. Disparities in resources, such as computers, are accentuated in a field that often requires large amounts of computing power to participate. Time is another valuable but unequally distributed asset.
  • Social discrepancies. Many AI jobs are now accessed through social networking and referrals. Given the gaps in physical resources, access to the networking environments required varies hugely.
  • The measures used. Standardised tests such as the SAT have been shown to disproportionately disadvantage Black students.

In the admissions process, the disparity in social and physical resources becomes even more apparent. Academia’s role, and the problems it can propagate, become even more critical given its relationship to the AI community.

Academia as a well of information

Academia and research feed directly into the AI community, so any poisoning of the field will propagate to other parts of society. Academic faculties will therefore have to buy wholeheartedly into the effort of combating these issues at their root. One way to do this is through feedback.

The importance of feedback

Any positive change will need to be grounded in information from those who have gone through the system. The experiences of those who have been discriminated against can provide crucial insights into how the system can change. Without such feedback, the same procedures and the same discrimination will continue to be present.

Given the need for change, the paper also offers views on what can be done. The first of which involves jumping into the unknown.

Taking risks

Accepting candidates whose applications look different from those of previous years can be a first step toward combating the effects of systemic racism. This could mean considering students from a wider range of institutions, or prioritising different characteristics in successful candidates. Placing less emphasis on prior experience, which often reflects access to social and physical resources rather than ability, is another way of doing this.

Reflecting on your environment

As a researcher, reflecting on which students you are mentoring can reveal the current level of diversity in your environment. Furthermore, collaborating with people outside your usual circle can also promote diversity by exposing you to different ways of thinking.

What diversity brings

Such alternative views are not the only thing diversity brings. A faculty with varied backgrounds also allows students with similar experiences to relate better to their professors. Some students may feel they can only discuss certain problems with professors of similar backgrounds, so such a presence can bring great comfort to the academic experience. However, this is not to say that underrepresented members should be valued solely for what they add to in-group students; rather, these benefits should follow as a consequence of the value diverse professors bring in their own right.

Between the lines

The potential academia possesses to influence the proliferation of discriminatory practices in the AI community is extensive. Since academia can be seen as the seed of the AI community, taking risks to effect change is, for me, a significant step. Nevertheless, any form of change will not be easy, especially when it involves self-reflection about your own environment. However, not taking these steps could drive diversity further away, a move the AI community can simply no longer afford.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
