Montreal AI Ethics Institute

Democratizing AI ethics literacy

Research summary: Principles alone cannot guarantee ethical AI

July 18, 2020

Summary contributed by Connor Wright, a third-year Philosophy student at the University of Exeter.

Link to original source + author at the bottom.


Mini-summary: AI Ethics has been approached from a principled angle since the dawn of the practice, drawing great inspiration from the four basic ethical principles of the medical ethics field. However, this paper argues that AI Ethics cannot be tackled in the same principled way as the medical ethics profession. The paper bases this argument on four features of the medical ethics field that its AI counterpart lacks, ranging from the misalignment of values in the field, to the lack of an established history to fall back on, to weak accountability, and more. The paper concludes by offering some ways forward for the AI Ethics field, emphasising that ethics is a process, not a destination. Translating lofty principles into actionable conventions will help reveal the true challenges facing AI Ethics, rather than treating it as something to be “solved”.


Full summary:

AI Ethics has been approached from a principled angle since the dawn of the practice, drawing great inspiration from the medical ethics field. However, this paper argues that AI Ethics cannot be tackled in the same principled way as the medical ethics profession. The paper bases this argument on features of the medical ethics field that its AI counterpart lacks, and then suggests ways forward. Accordingly, I will split this post into three sections: section 1 will show what the paper believes the AI Ethics field lacks compared to the medical field, section 2 will explain why this is the case, and section 3 will cover the paper’s proposed ways forward. I will then end with my thoughts on the discussion.

Section 1: What the AI Ethics field lacks

Firstly, practitioners in the AI Ethics field lack a common aim or ‘patient’ that can align the differing interests of the institutions involved. The field is filled with practitioners from diverse backgrounds and private companies with varying interests. Hence, a principled approach would have to unite these differing views under the maxims it proposes. However, in order to accommodate all the different viewpoints, the principles become more and more abstract. Proposals such as ‘fair’ and ‘equal’ end up being the point of agreement for all parties, which this paper highlights as hiding the “fundamental normative and political tensions embedded” in these concepts (Mittelstadt, 2019, p. 1). For example, there are deep disagreements over what equality actually means, such as whether it amounts to egalitarianism or to complete equality for all (in wage distribution, for instance). Medical ethics, by contrast, can unite around the patient and prioritise their interests, forming a focal point for the differing views within the field. This is further reinforced by medical bodies being rigorously reviewed by legally backed institutions to ensure this prioritisation takes place; no such body yet exists in the AI Ethics field. Hence, a principled approach to the field may not be the most fruitful path to undertake.

Section 2: Why is this the case?

The paper then proposes that such a principled approach is hindered by the field not having an established history. There are no previous lessons to draw on to demonstrate what “good” AI is. There is no ‘AI Hippocratic Oath’ on which behaviour can be modelled, and the unpredictability of AI means that no single method can be guaranteed to always produce a ‘good’ result. Instead, each company is almost left to forge its own practice, tailored to its own company values. As a result, each company produces its own exemplars of how ‘good’ AI is deployed, leaving little scope for principled practical advice on how to implement ethical AI.

This lack of advice emphasises the importance of accountability when deploying AI, as there is no apparent regulation to signal what counts as ‘bad’ AI. Even then, the AI Ethics field lacks an accountability framework to counterbalance this lack of regulation. With many different actors involved in processes that are hard to trace back, it would be difficult to pin responsibility on any one person. The medical ethics field, by contrast, has a fixed team of actors at any one time, making a stronger case for the presence of accountability. Thus, approaching the AI Ethics field in the same way as the medical ethics arena may in fact be like mixing oil and water.

Section 3: Ways forward

The paper concludes by offering some ways forward for the AI Ethics field. Defining clear pathways that are most likely to result in ethical AI will help foster support for a more “bottom-up” (Mittelstadt, 2019, p. 9) approach to AI deployment. Such an approach will help surface the novel problems that repeatedly face the field of AI Ethics and generate methods for tackling them, rather than waiting for similar problems to surface from the companies at the top. This may then lead to AI deployment being crafted as a licensed profession, open to both large and small corporations. Such licensing can then shift the focus away from individual AI ethics and towards organisational ethics. Individuals who corrupt the use of AI will be held accountable, as will the corporations that allowed it to happen, whose role was previously left unquestioned. In this way, a principled approach to AI Ethics as seen in medical ethics will be better able to take form.

My thoughts:

I agree with the final section of the paper, which advocates treating AI ethics as a process rather than something to be “solved” (Mittelstadt, 2019, p. 10). The lack of accountability generated by the combination of misaligned goals and an absent history needs to be addressed, and this cannot be done when lofty principles are the only point of agreement. Instead, working to close the gap between the abstract and reality, through ethical practitioners and software engineers working together, will, I believe, help create actionable change and reveal the true challenges that face the AI Ethics field.


Original paper by Brent Mittelstadt: https://arxiv.org/ftp/arxiv/papers/1906/1906.06668.pdf

© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.