Research summary: Algorithmic Injustices: Towards a Relational Ethics

March 9, 2020

This paper, presented by Abeba Birhane and Fred Cummins at the Black in AI workshop at NeurIPS 2019, argues that the current research paradigm for building fair, inclusive AI systems falls short of addressing the real problems because it takes a narrow, technically focused approach. Using a relational ethics lens, the authors highlight where the field can improve: center the populations that will be disproportionately affected; prioritize understanding the underlying context over the pure predictive power of a system; recognize that algorithmic systems are tools that shape and sustain social and moral order; and, because definitions of bias, fairness, and related terms shift over time, treat design and development as an iterative process.

The paper opens with the now familiar problem of building truly ethical, safe, and inclusive AI systems, which increasingly draw on ubiquitous sensors to predict who we are and how we might behave. When these systems are deployed in socially contested domains, such as classifying “normal” behaviour (where “normal” is loosely whatever the majority does, and everything else is anomalous), they do not make value-free judgements and are not amoral in their operation. Treating the systems as purely technical invites purely technical solutions, which is where most fairness research has focused, while ignoring the context of the people and communities in which the systems are used. The paper questions the foundations of these systems, takes a deeper look at the unstated assumptions behind their design and development, and urges readers, and the research community at large, to approach the problem through relational ethics. It makes four key suggestions:

  1. Center development on those within the community who will bear a disproportionate burden or negative consequences from the use of the system.
  2. Instead of optimizing for prediction, prioritize a fundamental understanding of why the system produces certain results, which may arise from historical stereotypes captured during its design and development (see the sketch after this list).
  3. Because these systems end up creating, and then reinforcing, a social and political order, take a larger systems-based approach to designing them.
  4. Because definitions of bias, fairness, and related terms evolve over time, and what is acceptable at one moment becomes unacceptable later, the process calls for constant monitoring, evaluation, and iteration of the design so that it accurately represents the community in context.
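To make the second and fourth points concrete, here is a minimal sketch of a disaggregated evaluation in Python: rather than reporting one aggregate accuracy, error rates are broken out per group so that harms concentrated on one community are not averaged away. The function, arrays, and group labels are hypothetical illustrations, not anything specified in the paper.

```python
# Minimal sketch of a disaggregated evaluation: report the
# misclassification rate per group instead of a single aggregate
# number. All names (arrays, group labels) are hypothetical.
import numpy as np

def error_rates_by_group(y_true, y_pred, groups):
    """Return the misclassification rate for each group label."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        rates[g] = float(np.mean(y_true[mask] != y_pred[mask]))
    return rates

# Toy data: group "b" bears twice the error rate of group "a",
# which a single aggregate accuracy figure would hide.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(error_rates_by_group(y_true, y_pred, groups))
# {'a': 0.25, 'b': 0.5}
```

Seeing the gap is only a starting point; the paper's argument is that explaining why it exists, in the community's own context, matters more than tuning it away.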

At MAIEI, we’ve advocated for an interdisciplinary approach that draws on a wide cross-section of the citizen community, so that issues are captured as closely as possible to how those who experience them first-hand describe them. Placing the development of an ML system in the context of the larger social and political order is important, and we advocate for a systems design approach (see Thinking in Systems: A Primer by Donella Meadows), which brings two benefits: first, externalities that are usually ignored can be considered; second, it invites input from a wider set of people who may be affected by the system and who understand how it will sit in the larger social and political order in which it is deployed. We also particularly enjoyed the point that the development and deployment of AI systems require a constant iterative process, borrowing from cybersecurity research, where securing a system is never done and over with but demands constant monitoring and attention to keep the system safe.
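Borrowing the toy setup from the earlier sketch, here is one minimal way that constant monitoring could look in practice: periodically recompute per-group error rates on fresh data and flag a human review when the spread drifts past a threshold. The threshold, metric, and review step are assumptions made for illustration, not anything the paper or MAIEI prescribes.

```python
# Hypothetical monitoring hook: flag a review when the spread between
# the worst- and best-served groups exceeds a chosen threshold.
def gap_alert(rates_by_group, max_gap=0.10):
    """Return (needs_review, gap) for a dict of per-group error rates."""
    gap = max(rates_by_group.values()) - min(rates_by_group.values())
    return gap > max_gap, gap

# Re-run on fresh data each evaluation cycle.
needs_review, gap = gap_alert({"a": 0.25, "b": 0.50})
if needs_review:
    print(f"Error-rate gap {gap:.2f} exceeds threshold; trigger human review.")
```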

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
