
Embedded ethics: a proposal for integrating ethics into the development of medical AI

March 30, 2022

🔬 Research Summary by Max Krueger, a consultant at Accenture with an interest in both the long- and short-term implications of AI on society.

[Original paper by Stuart McLennan, Amelia Fiske, Daniel Tigard, Ruth Müller, Sami Haddadin, and Alena Buyx]


Overview: High-level ethical frameworks serve an important purpose, but it is not clear how they influence the technical development of AI systems. There is a skills gap between technical AI development and the implementation of high-level frameworks. This paper explores an embedded form of AI development in which ethicists and developers work in lockstep to identify ethical issues and implement technical solutions within the healthcare domain.


Introduction

There is a critical implementation gap between current medical AI ethics frameworks and medical AI development. A literature review of high-level medical AI frameworks shows that they converge on principles similar to those found in biomedical research and clinical practice, yet the authors doubt that these high-level principles carry over into technical development. Rather than relying on developers to translate ethics frameworks into their own practice, the authors propose embedding ethics directly into the AI development process. This approach aims to identify and address ethical issues early in development and to foster regular exchanges between ethicists and development teams. The authors identify four primary domains of embedded ethics: aims, integration, practice, and expertise/training.

High-level frameworks provide guidelines for teams and organizations to follow, but there remains uncertainty about how effective these frameworks are within development teams. The research team suggests embedding ethics into the development process:

“[We] use “embedded ethics” in a wide sense, namely to refer to the ongoing practice of integrating ethics into the entire development process—here ethics becomes a truly collaborative, interdisciplinary enterprise.”


Aims

An embedded ethics approach aims to “develop AI technologies that are ethically and socially responsible, that benefit and do not harm individuals and society.” Ethics should be integrated into the development process from kickoff to deployment and should address issues of ethical uncertainty. Ultimately, this is a collaborative process for working through the issues that arise. In the medical context, embedded ethics may draw on existing mechanisms such as clinical ethics advisory panels. The paper draws a clear distinction between a process that seeks to increase a project’s ethical awareness and responsiveness and one that merely seeks to increase its marketability; the latter raises concerns about “ethics washing”.

Integration

Integration can take many shapes depending on the organization and its available resources. The highest standard of integration is an ethicist, or a team of ethicists, serving as dedicated members of the project team; such an approach is exemplified by Jeantine Lunshof at Harvard’s Wyss Institute for Biologically Inspired Engineering. An alternative is to make shared ethics resources available to all project teams, for example a centrally organized ethicist or ethics team that consults with many projects simultaneously. Key to making either arrangement successful is regular exchange between the development and ethics teams, rather than consulting ethicists only when issues arise; this introduces rigor and structure into the program. Regardless of the arrangement, a pre-established working agreement should be developed to operationalize the interaction between ethicists and development teams.

Practice

The authors believe a rigorous normative analysis should be the default response to issues identified throughout the development process, including “explaining and clarifying complex ethical issues so as to allow a clearer understanding of them”. There is currently no standard approach to such analysis in AI ethics. The authors note that they do not advocate a prescriptive approach, but that certain criteria should be followed (a sketch of how these might be recorded in practice follows the list):

  1. Make clear and explicit the theoretical ethical positions being invoked in a given normative analysis.
  2. Explain and justify why the positions are suitable to meet the specific goals of the project.
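To make these criteria concrete, here is a minimal sketch in Python of how a team might log a normative analysis alongside its codebase. The record structure, field names, and example values are hypothetical illustrations, not a format proposed by the authors.

    from dataclasses import dataclass, field

    @dataclass
    class NormativeAnalysisRecord:
        """A hypothetical ethics-log entry capturing the two criteria above."""
        issue: str          # the ethical issue identified during development
        positions: list     # criterion 1: theoretical ethical positions, made explicit
        justification: str  # criterion 2: why these positions suit the project's goals
        actions: list = field(default_factory=list)  # decisions taken in response

    # Example entry (illustrative values only).
    record = NormativeAnalysisRecord(
        issue="Training data under-represents older patients",
        positions=["distributive justice", "non-maleficence"],
        justification=(
            "The system informs follow-up care, so fair performance across "
            "age groups is central to the project's clinical goals."
        ),
        actions=["re-balance the training data", "add subgroup performance tests"],
    )
    print(record.issue)

A structured record like this would give the “pre-established working agreement” something auditable to point to, though the paper leaves the concrete format open.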

Expertise/Training

Expertise and training are paramount to a successful embedded ethics program. Embedded ethicists can come from a variety of backgrounds, and “it is important that embedded ethicists have appropriate technology-related knowledge and skills”. Much as a data scientist embedded in a business unit derives impact from domain knowledge, an embedded ethicist’s influence depends on understanding the technology at hand. Where ethicists lack this domain knowledge, time should be carved out for them to acquire it.

Addressing uniqueness in medical AI 

The medical application of AI raises a few unique concerns. The limited explainability of neural network systems makes it difficult to give reasons for a given output, raising concerns about clinical responsibility among practitioners and their non-human colleagues. The authors note that the very nature of medicine is changing as AI alters relationships “between patients and practitioners, and between practitioners and the technical and scientific communities”. Embedded ethics is a way to manage these changing relationships and ensure that social and ethical values are accounted for.
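As one concrete illustration of the explainability problem, below is a minimal sketch, assuming a PyTorch classifier, of a gradient-based saliency map, a common developer-side technique for surfacing which inputs influenced an output. The untrained toy model and random “scan” are stand-ins, and the result falls well short of the clinical reasons a practitioner would need.

    import torch
    import torch.nn as nn

    # Toy stand-in for a medical imaging classifier (hypothetical, untrained).
    model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))
    model.eval()

    # A single 64x64 grayscale "scan" (random data, for illustration only).
    scan = torch.randn(1, 1, 64, 64, requires_grad=True)

    # Forward pass; take the score of the predicted class.
    logits = model(scan)
    predicted = int(logits.argmax(dim=1))
    score = logits[0, predicted]

    # Backpropagate to the input: the per-pixel gradient magnitude is a crude
    # saliency map. It hints at which pixels moved the score most, but it does
    # not explain why the model decided, which is the gap the authors highlight.
    score.backward()
    saliency = scan.grad.abs().squeeze()
    print(saliency.shape)  # torch.Size([64, 64])

Even when such attribution maps are available, translating them into an account of clinical responsibility remains an open question, which is part of what an embedded ethicist would be positioned to examine.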

There also remains a significant regulatory gap in the application of AI in medicine. Medical AI applications are often not tested as rigorously as other medical technologies, and testing is often administered after development is complete, when it is no longer practical to influence design decisions. In light of this, an embedded ethics approach can have a particularly large impact on ethical outcomes by addressing issues early in the process, saving both time and money.

Between the lines

Embedded ethics seems like a viable approach given the highly specific nature of the work. Important to the success of such a program is an endorsement from leadership that grants ethicists the latitude to make decisions that may not explicitly benefit the bottom line. Additional scrutiny of how these teams work may be needed for an embedded ethics program to be truly impactful; this might include explicit working agreements that hold teams accountable for systematically working through ethical issues. Embedded ethics bridges the gap between high-level frameworks and technical development, and could prove very successful if applied in the right environment, both inside and outside the medical community.

