Montreal AI Ethics Institute
You cannot have AI ethics without ethics

September 28, 2021

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Dave Lauer]


Overview: When AI systems fail, we often look for the broken part rather than at the system that allowed the error to occur. The paper advocates a more systemic examination of the AI process, an argument that makes more sense the more one thinks about it.


Introduction

Would Aristotle have bought into AI ethics? Or does AI ethics stand apart from all that has gone before it? Given AI ethics' rise in popularity, it has often been treated as its own discipline, with special mention of AI principles at big corporations like Facebook and Google. Nevertheless, the answer to the question 'can AI ethics exist in a vacuum?' is a resounding no. An examination of an 'unethical AI' problem needs to be systemic and aware of the incentives involved in the process, rather than just a search for the 'broken part'. Thus, let's first look at why AI ethics does not exist in a vacuum, with a comparison to medical ethics along the way.

Key Insights

AI Ethics does not exist in a vacuum

The key notion I found in this piece is that AI ethics cannot come about without an ethical environment to surround it. As in medical ethics, the AI ethics space comes into contact with a whole host of issues also touched upon by other fields: take, for example, autonomy and moral responsibility, questions debated in philosophy for the past 500+ years. Hence, without an all-encompassing ethical approach, the subfield of AI ethics quickly becomes isolated and ineffective.

In this sense, given AI ethics' ties to an overall ethical environment, we need to examine the system as a whole when something goes wrong with an AI system. Here, systems thinking is introduced: the relationships between the parts of a process or product are key, not just the individual parts themselves. In other words, if an AI system fails, don't just examine its features; examine its ecosystem.

The broken part fallacy

Tying into this last point, the paper introduces the "broken part fallacy". When a system or product malfunctions, humans tend to look for the broken part, fix it, and consider the issue resolved. Such an approach treats the problem as an individual one, which won't fix it if it is in fact systemic. Given the complex, interactional nature of an ecosystem, looking for a single broken part treats a systemic problem as simpler than it is.

Hence, looking for a malfunction in an AI system will not automatically fix its problem of being unethical. Instead, a thorough look at how that unethical behaviour got past the checks and balances is required, especially around the product's deployment into social and cultural contexts.

The importance of social and cultural sensitivity

When examining the systemic nature of an AI's deployment, what requires change is often more abstract than a simple 'broken part'. Listening to those closest to the problem, rather than imposing top-down legislation, is an excellent first step. This offers those who designed the AI product a closer look at the situation, cultivating a more trusting relationship.

The question of incentives

The next question is whether businesses can enact this kind of approach, and whether they are incentivised to do so. The incentives created by law and policy are a good starting point: examine whether there is a legislative push behind the specific actions deemed 'ethical'.

Such examinations can then expose the type of ownership within a business. To illustrate, Facebook operates on an absentee ownership model, whereby the "locus of control and locus of responsibility are different". In Facebook's case, the company controls what is allowed on its platform but bears no legal responsibility for the content eventually posted there. Given this split, an AI ethics programme coming out of Facebook would not prosper unless the company also shared in the locus of responsibility. Instead, ethical frameworks need to be part of the company's ethos and not just something to be checked off a list. AI ethics can then be a branch of central ethical practices and frameworks instead of holding its own fort.

Between the lines

I very much share the view that AI ethics is not born in a vacuum. I liken it to conversations about bias in AI systems: if the humans programming the AI product have their own biases, we can expect some of these to turn up in the AI system. The aim is then to mitigate the harm produced when these biases take hold. Applied to our present context, I would not be surprised if a company with a flawed ethical approach created an 'unethical AI'. Without self-reflection on the AI process itself, the reason an AI produces the 'unethical' behaviour it does will remain an even darker black box. Hence, before looking for the broken part, we should ask ourselves how it got there.

