
Combatting Anti-Blackness in the AI Community

August 10, 2021

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Devin Guillory]


Overview: Racism has the potential to establish itself in every corner of society, and the AI community is no different. Through a mix of observations and advice, the paper conveys both the need for change and the potential of the academic environment to bring it about. While some of the steps involved carry risk, the danger of not taking them is even greater.


Introduction

How badly does the AI community need diversity? What is academia’s role in this process? The paper touches on how those in the AI community can help combat systemic racial injustice, viewed through an academic’s lens. Acknowledging how racism permeates every corner of our society is an important first step. However, many disparities remain unaddressed and many changes unmade, and the AI community will suffer as a result if nothing is done.

Key Insights

Discrepancies in resources

The AI field presents invisible barriers to entry that fall differently on candidates of different ethnicities. These come in three categories:

  • Physical discrepancies. Disparities in resources, such as computers, are accentuated in a field that often requires large amounts of computing power to participate. Time is another valuable asset in short supply.
  • Social discrepancies. Many AI jobs are now accessed through social networking and referrals. With gaps in physical resources, access to the networking environments required varies hugely.
  • Discrepancies in the measures used. Standardised tests such as the SAT have been shown to disproportionately disadvantage Black students.

In the admissions process, the disparity in social and physical resources becomes even more apparent. Academia’s role, and the problems it can propagate, becomes even more critical given its relationship to the AI community.

Academia as a well of information

Academia and research are a direct feeder into the AI community, so any poisoning found in the field will propagate to other parts of society. As such, academic faculties will have to wholeheartedly buy into the effort of combating these issues at their root. One way to do this is through feedback.

The importance of feedback

Any positive change will need to be grounded in information from those who have gone through the system. The experiences of those who have been discriminated against can provide crucial insights into how the system can change. Without such feedback, the same procedures and the same discrimination will continue to be present.

Given the need for change, the paper also offers views on what can be done, the first of which involves jumping into the unknown.

Taking risks

Accepting candidates whose applications look different from those of years gone by can be a first step toward combating the effects of systemic racism. This could mean looking at institutions from which students do not usually come, or prioritising different characteristics in successful candidates. Placing less emphasis on prior experience, which is largely a consequence of facing fewer social and physical discrepancies, is one way of doing this.

Reflecting on your environment

As a researcher, reflecting on which students you are mentoring can bring up observations about the current level of diversity in your environment. Furthermore, collaborating with different people than you usually would can also help promote diversity by exposing you to different ways of thinking.

What diversity brings

Such alternative views are not the only thing diversity brings. A faculty with varied backgrounds also allows students with similar experiences to relate better to their professors. Some students may feel they can only discuss certain problems with professors of similar backgrounds, so such a presence brings great comfort to the academic experience. However, this is not to say that underrepresented faculty members should be valued solely for what they add to the institution; rather, the benefits of diversity should follow as a consequence of these professors’ value.

Between the lines

Academia possesses extensive potential to influence whether discriminatory practices proliferate in the AI community. With academia serving as the seed of the AI community, taking risks to effect change strikes me as a significant step. Nevertheless, any form of change will not be easy, especially if it involves self-reflection about your own environment. However, not taking these steps could drive diversity further away, a move the AI community can simply no longer afford.

