
You cannot have AI ethics without ethics

September 28, 2021

šŸ”¬ Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Dave Lauer]


Overview: When AI systems fail, they are often fixed by looking for the broken part rather than by examining the system that allowed the error to occur. The paper advocates a more systemic examination of the AI process, an argument that makes more sense the more you think about it.


Introduction

Would Aristotle have bought into AI ethics? Or does AI ethics sit as a separate entity from all that has gone before it? Given AI ethics’ rise in popularity, it has often been treated as a field in its own right, with special mention of AI principles at big corporations like Facebook and Google. Nevertheless, the answer to the question ā€˜can AI ethics exist in a vacuum?’ is a resounding no. An examination of an ā€˜unethical AI’ problem needs to be systemic and aware of the incentives involved in the process, rather than just a search for the ā€˜broken part’. Thus, let’s first look at why AI ethics does not exist in a vacuum, with a comparison to medical ethics along the way.

Key Insights

AI Ethics does not exist in a vacuum

The key notion I found in this piece is that AI ethics cannot come about without an ethical environment to surround it. As in medical ethics, the AI ethics space comes into contact with a whole host of issues also touched upon by other fields. Take, for example, the issues of autonomy and moral responsibility, debated in AI ethics and throughout the past 500+ years of philosophy. Hence, without an all-encompassing ethical approach, the subfield of AI ethics quickly becomes isolated and ineffective.

In this sense, given AI ethics’ ties to an overall ethical environment, we need to examine the system as a whole when something goes wrong with an AI system. Here, systems thinking is introduced: what is key are the relationships between the parts of a process or product, not just the individual parts themselves. In other words, if an AI system fails, don’t just examine its features; examine its ecosystem.

The broken part fallacy

Tying into this last point, the ā€œbroken part fallacyā€ is introduced. A feature of how humans examine problems, the fallacy lies in seeing that a system or product has malfunctioned and looking for the broken part to fix in order to resolve the issue. Such an approach treats the problem as localised, which won’t fix it if it is in fact systemic. Looking for a broken part treats a systemic problem as too simple, given the complex, interactional nature of an ecosystem.

Hence, finding a malfunction in an AI system will not automatically fix its problem of being unethical. Instead, a thorough look at how that unethical behaviour slipped past the checks and balances is required, especially surrounding the product’s deployment into social and cultural contexts.

The importance of social and cultural sensitivity

When examining the systemic nature of an AI’s deployment, what surfaces are more abstract issues requiring change than a simple ā€˜broken part’. An excellent first step is listening to those closest to the problem rather than imposing top-down legislation. This offers a closer look at the situation from those who designed the AI product, cultivating a more trusting relationship.

The question of incentives

The next question is whether businesses can enact this kind of approach and whether they are incentivised to do so. The incentives created by law and policy are a good starting point: examining whether there is a legislative push behind specific actions that can be deemed ā€˜ethical’.

Such examinations can then expose the type of ownership within a business. To illustrate, Facebook operates on an absentee ownership model, whereby the ā€œlocus of control and locus of responsibility are differentā€. In Facebook’s case, the company controls what is allowed on its platform but does not bear legal responsibility for the content that eventually appears there. Under these conditions, an AI ethics programme coming out of Facebook would not prosper without the company also sharing in the locus of responsibility. Instead, ethical frameworks need to be part of the company’s ethos and not just something to be checked off a list. AI ethics can then be a branch of central ethical practices and frameworks instead of holding its own fort.

Between the lines

I very much share the view that AI ethics is not born in a vacuum. I liken it to conversations about bias in AI systems: if the humans programming the AI product have their own biases, we can expect some of these to turn up in the AI system. The aim is then to mitigate the harm produced when these biases take hold. Applied to our present context, I would not be surprised if a company with a flawed ethical approach created an ā€˜unethical AI’. Without self-reflection on the AI process itself, the reason why an AI produces the ā€˜unethical’ behaviour that it does will remain an even darker black box. Hence, before looking for the broken part, we should ask ourselves how it got there.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
