Montreal AI Ethics Institute

Embedded ethics: a proposal for integrating ethics into the development of medical AI

March 30, 2022

🔬 Research Summary by Max Krueger, a consultant at Accenture with an interest in both the long and short-term implications of AI on society.

[Original paper by Stuart McLennan, Amelia Fiske, Daniel Tigard, Ruth MĂĽller, Sami Haddadin, and Alena Buyx]


Overview: High-level ethical frameworks serve an important purpose, but it is not clear how such frameworks influence the technical development of AI systems; a skills gap separates technical AI development from the implementation of high-level frameworks. This paper explores an embedded approach to AI development in which ethicists and developers work in lockstep to address ethical issues and implement technical solutions within the healthcare domain.


Introduction

There is a critical implementation gap between current medical AI ethics frameworks and medical AI development. A literature review of high-level medical AI frameworks shows that they converge on principles similar to those found in biomedical research and clinical practice, yet the research team doubts that these high-level principles translate into technical development practice. The authors therefore suggest embedding ethics into the AI development process rather than relying on developers to implement ethics frameworks in their own practice. This approach aims to identify and address ethical issues early in the development process and to foster regular exchanges between ethicists and development teams.

High-level frameworks provide guidelines for teams and organizations to follow but there remains uncertainty as to the effectiveness of these frameworks within development teams. The research team suggests embedding ethics into the development process:

“[We] use “embedded ethics” in a wide sense, namely to refer to the ongoing practice of integrating ethics into the entire development process—here ethics becomes a truly collaborative, interdisciplinary enterprise.”

Identified are four main domains of embedded ethics: aims, integration, practice, and expertise/training.

Aims

An embedded ethics approach aims to “develop AI technologies that are ethically and socially responsible, … that benefit and do not harm individuals and society.” Ethics should be integrated into the development process from kickoff to deployment and should address issues of ethical uncertainty as they arise. Ultimately, this is a collaborative process for resolving the issues that emerge. In the medical context, embedded ethics may draw on existing mechanisms such as clinical ethics advisory panels. The authors draw a clear distinction between a process that seeks to increase a project’s ethical awareness and responsiveness and one that seeks to increase its marketability; the latter raises concerns about “ethics washing.”

Integration

Integration can take many forms depending on the organization and its available resources. The most thorough form of integration includes an ethicist or team of ethicists as dedicated members of the project team, an approach demonstrated by Jeantine Lunshof at Harvard’s Wyss Institute for Biologically Inspired Engineering. An alternative is to make shared ethics resources available to all project teams, such as a centrally organized ethicist or ethics team that consults with many projects simultaneously. The key to making this arrangement successful is holding regular exchanges between the development and ethics teams rather than consulting the ethics team only when issues arise; this introduces rigor and structure into the program. Regardless of the arrangement, a pre-established working agreement should be developed to operationalize the interaction between ethicists and development teams.

Practice

The authors argue that rigorous normative analysis should be the default response to issues identified throughout the development process, including “explaining and clarifying complex ethical issues so as to allow a clearer understanding of them.” There is currently no standard approach to such analysis in AI ethics. The authors do not advocate a prescriptive approach, but they hold that two criteria should be met:

  1. Make clear and explicit the theoretical ethical positions being invoked in a given normative analysis.
  2. Explain and justify why the positions are suitable to meet the specific goals of the project.

Expertise/Training

Expertise and training are paramount to a successful embedded ethics program. Embedded ethicists can come from a variety of backgrounds, and “it is important that embedded ethicists have appropriate technology-related knowledge and skills.” Much as for a data scientist embedded in a business unit, domain knowledge is the currency through which impact is derived. Where ethicists lack domain knowledge, time should be carved out for them to gain it.

Addressing uniqueness in medical AI

The medical application of AI raises a few unique concerns. The limited explainability of neural network systems makes it difficult to provide reasons for a given output, raising concerns about clinical responsibility between practitioners and their non-human counterparts. The authors note that the very nature of medicine is changing through altered relationships “between patients and practitioners, and between practitioners and the technical and scientific communities.” Embedded ethics is a way to manage these changing relationships and ensure social and ethical values are accounted for.

There also remains a significant regulatory gap in the application of AI in medicine. Medical AI applications are often not tested as rigorously as other medical technologies, and testing is often administered after development is complete, when it is no longer practical to influence design decisions. In light of this, an embedded ethics approach can have a particularly large impact on ethical outcomes by addressing issues early in the process, saving both time and money.

Between the lines

Embedded ethics seems like a viable approach given the highly specific nature of the work. Critical to the success of such a program is an endorsement from leadership that grants ethicists the latitude to make decisions that may not explicitly benefit the bottom line. Additional scrutiny of how these teams work may be needed for an embedded ethics program to be truly impactful; this might include explicit working agreements that teams follow to ensure everyone is accountable for systematically working through ethical issues. Embedded ethics bridges the gap between high-level frameworks and technical development, and it could be very successful if applied in the right environments both inside and outside the medical community.


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.