You cannot have AI ethics without ethics

September 28, 2021 by MAIEI

šŸ”¬ Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Dave Lauer]


Overview: When AI systems go wrong, the fix usually involves hunting for the broken part rather than examining the system that allowed the error to occur. The paper advocates a more systemic examination of the AI process, an approach that makes more sense the more you think about it.


Introduction

Would Aristotle have bought into AI ethics? Or does AI ethics sit as a separate entity from all that has gone before it? Given its rise in popularity, AI ethics has often been treated as a field of its own, with dedicated AI principles at big corporations like Facebook and Google. Nevertheless, the answer to the question ā€˜can AI ethics exist in a vacuum?’ is a resounding no. An examination of an ā€˜unethical AI’ problem needs to be systemic and aware of the incentives involved in the process, rather than just a hunt for the ā€˜broken part’. Thus, let’s first look at why AI ethics does not exist in a vacuum, with a comparison to medical ethics along the way.

Key Insights

AI Ethics does not exist in a vacuum

The key notion I found in this piece is that AI ethics cannot come about without an ethical environment surrounding it. As with medical ethics, the AI ethics space comes into contact with a whole host of issues also touched upon by other fields. Take, for example, autonomy and moral responsibility, questions debated in AI ethics today and throughout the past 500+ years of philosophy. Hence, without an all-encompassing ethical approach, the subfield of AI ethics quickly becomes isolated and ineffective.

In this sense, given AI ethics’ ties to an overall ethical environment, we need to examine the system as a whole when something goes wrong with an AI system. Here, systems thinking is introduced: what matters are the relationships between the parts of a process or product, not just the individual parts themselves. In other words, if an AI system fails, don’t just examine its features; examine its ecosystem.

The broken part fallacy

Tying into this last point, the ā€œbroken part fallacyā€ is introduced. The fallacy describes how humans tend to examine problems: we see that a system or product has malfunctioned and look for the broken part to fix in order to resolve the issue. Such an approach treats the problem as an isolated defect, which won’t necessarily fix it if the problem is systemic. Looking for a broken part oversimplifies a systemic problem, given the complex, interactional nature of an ecosystem.

Hence, looking for a malfunction in an AI system will not automatically fix its problem of being unethical. Instead, a thorough look at how that unethical behaviour slipped past the checks and balances is required, especially regarding the product’s deployment into social and cultural contexts.

The importance of social and cultural sensitivity

When examining the systemic nature of an AI’s deployment, what requires change turns out to be more abstract than a simple ā€˜broken part’. Listening to those closest to the problem and avoiding top-down legislation is an excellent first step. It offers a closer look at the situation from the perspective of those who designed the AI product, cultivating a more trusting relationship.

The question of incentives

The next question is whether businesses can enact this kind of approach and whether they are incentivised to do so. The incentives created by law and policy are a good starting point: is there a legislative push behind the specific actions deemed ā€˜ethical’?

Such examinations can then expose the type of ownership within a business. To illustrate, Facebook operates on an Absentee Ownership model, whereby the ā€œlocus of control and locus of responsibility are differentā€: Facebook controls what is allowed on its platform but does not bear legal responsibility for the content eventually posted there. An AI ethics programme coming out of Facebook would not prosper without the company also sharing in the locus of responsibility. Instead, ethical frameworks need to be part of the company’s ethos and not just something to be checked off a list. AI ethics can then be a branch of central ethical practices and frameworks instead of holding its own fort.

Between the lines

I very much share the view that AI ethics is not born in a vacuum. I liken it to conversations about bias in AI systems: if the humans programming the AI product have their own biases, then we should expect some of these to turn up in the AI system. The aim is then to mitigate the harm produced when these biases take hold. Applied to our present context, I would not be surprised if a company with a flawed ethical approach created an ā€˜unethical AI’. Without self-reflection on the AI process itself, the reason why an AI produces the ā€˜unethical’ behaviour that it does will remain an even darker black box. Hence, before looking for the broken part, we should ask ourselves how it got there.

Category: Research Summaries
