Montreal AI Ethics Institute

Democratizing AI ethics literacy


Embedded ethics: a proposal for integrating ethics into the development of medical AI

June 10, 2022

šŸ”¬ Research Summary by Eryn Rigley, a PhD research student at the University of Southampton, specialising in the intersection of environmental and AI ethics, as well as defence & security AI ethics.

[Original paper by Stuart McLennan, Amelia Fiske, Daniel Tigard, Ruth Müller, Sami Haddadin & Alena Buyx]


Overview: Though AI ethics frameworks are plentiful, applicable ethics guidance for AI developers remains scarce. To translate high-level ethics guidelines into practice, the authors of this paper argue that ethics ought to be embedded into every stage of the AI lifecycle.


Introduction

On a hospital bed, in the doctor’s office, and in an operating theater, we are at our most vulnerable when we need medical attention. For this reason, the medical field is perhaps the most important field in which to get AI ethics right. Yet medical AI systems continue to be rolled out without ethical consideration or foresight of what might (inevitably) go wrong. The authors of this paper argue that the gap between AI systems and our ethical principles can be closed by embedding ethics into every stage of an AI’s development, from conception to deployment.

Key Insights

The problem

AI ethics frameworks and guidelines are plentiful and have been found to converge around a handful of high-level principles: AI ought to ā€˜do no harm’, be ā€˜transparent’, and be ā€˜robust’. However, these high-level principles solve only half the problem. Translating them into practice, for instance when navigating real ethical dilemmas, requires further work and knowledge of how to apply high-level ethics.

The authors argue it would be unfair and over-demanding to expect AI developers to be equipped to handle heavy ethics work. At the same time, ā€˜real ethicists’ have little involvement in private industry. For this reason, AI ethics remains untranslated and impracticable for AI developers.

This gap between high-level AI ethics principles and practical AI development has already resulted in real-world problems. Within the medical field, novel AI systems are often deployed without explicit ethical consideration or foresight, reducing patients to unwilling ā€˜guinea pigs’ for the system. A lack of practicable AI ethics in the medical field thereby violates patients’ dignity, safety, and privacy.

The Solution: Embedded Ethics

The authors propose ā€˜embedded ethics’ as the solution to this gap between high-level AI ethics work and practical AI development. The term describes the embedding of ethics into the entire lifecycle of AI, from design and development to deployment. In particular, the authors envisage ā€˜real ethicists’ posted at various stages of this lifecycle to work with AI developers in anticipating future ethical concerns. Alternatively, if resources are constrained, regular exchanges between ethicists and other AI development team members should take place from the beginning of an AI system’s conception.

The general rule of thumb for embedded ethics is the regular and prospective examination of ethical problems in AI, rather than sporadic and reactive engagement between AI developers and ethicists. The ethicist’s role here is twofold: first, to do the ethical heavy lifting of finding and fine-tuning ethical theories for application in AI systems; and second, to translate this high-level ethics into applicable guidance for developers.

Limitations

The authors note several limitations to the use of embedded ethics in AI and offer an adequate response to most. However, two major problems facing embedded ethicists within AI development deserve further examination.

The first problem facing ethicists is navigating competing interests within private industry, for instance between profitability or efficiency and ethical considerations. This need not be a grave problem for the ethicists, per se, since they will be expected to justify their arguments and provide reasons why their principles ought to be applied in certain cases. That is, it may fall within the ethicists’ purview to navigate these conflicting interests. 

The second problem is less straightforward to navigate. Simply placing ethicists into the lifecycle of AI will not ensure coherence and robustness in approaching ethical problems. The authors acknowledge that individual ethicists may have diverging opinions and state that this is permissible so long as any viewpoint is justified and transparent. However, ensuring that individual ethicists within the same team or industry are working toward the same principles, and not diverging or conflicting in their analyses, will still require some kind of high-level overarching principles, for instance to maximize human wellbeing. With this in mind, it is not clear how ethicists will be expected to adhere to these high-level principles whilst maintaining the freedom to do ā€˜real’ ethics work. Their role might therefore shrink to the mere translation of high-level principles into practice.

Between the lines

Embedding ethics into every stage of an AI’s lifecycle, from conception to deployment, would seem an obvious necessity. And yet AI ethics remains a mystical high-level prescription of principles, impossible for computer scientists to navigate in practice. The authors offer a convincing argument in favor of embedded AI ethics and are able to respond to most of the problems they foresee. It remains unclear, however, what role the embedded ethicist is really taking on and how much freedom they will have to do their work: will embedded ethics introduce ā€˜real ethicists’ to the industry, or are they merely ethics translators?

