Montreal AI Ethics Institute


Embedded ethics: a proposal for integrating ethics into the development of medical AI

June 10, 2022

Research Summary by Eryn Rigley, a PhD research student at the University of Southampton, specialising in the intersection of environmental and AI ethics, as well as defence & security AI ethics.

[Original paper by Stuart McLennan, Amelia Fiske, Daniel Tigard, Ruth Müller, Sami Haddadin & Alena Buyx]


Overview: Though AI ethics frameworks are plentiful, applicable ethics guidance for AI developers remains few and far between. To translate high-level ethics guidelines into practice, the authors of this paper argue that ethics ought to be embedded into every stage of the AI lifecycle.


Introduction

On a hospital bed, in the doctor's office, and in an operating theater, we are at our most vulnerable when we need medical attention. For this reason, the medical field is perhaps the most important field in which to get AI ethics right. Yet medical AI systems continue to be rolled out without ethical consideration or foresight of what might (inevitably) go wrong. The authors of this paper argue that the gap between AI systems and our ethical principles can be closed by embedding ethics into every stage of an AI's development, from conception to deployment.

Key Insights

The problem

AI ethics frameworks and guidelines are plentiful and have been found to converge around a handful of high-level principles: AI ought to 'do no harm', be 'transparent', and be 'robust'. However, these high-level principles solve only half the problem of AI's ethical quandaries. Translating them into practice, for instance when navigating real ethical dilemmas, requires further work and knowledge of how to apply high-level ethics in concrete cases.

The authors argue it would be unfair and over-demanding to expect AI developers to be equipped to handle this heavy ethics work. At the same time, 'real ethicists' have little participation in private industry. As a result, AI ethics remains untranslated and impracticable for AI developers.

This gap between high-level AI ethics principles and practical AI development has already resulted in real-world problems. Within the medical field, novel AI systems are often deployed without explicit ethical consideration or foresight, reducing patients to unwilling 'guinea pigs' for the system. A lack of practicable AI ethics in the medical field thereby violates patients' dignity, safety, and privacy.

The Solution: Embedded Ethics

The authors propose 'embedded ethics' as the solution to this gap between high-level AI ethics work and practical AI development. The term describes the embedding of ethics into the entire lifecycle of AI, from design and development to deployment. In particular, the authors envisage 'real ethicists' posted at various stages of this lifecycle to work with AI developers in anticipating future ethical concerns. Alternatively, if resources are constrained, regular exchanges between ethicists and other members of the AI development team should take place from the beginning of a system's conception.

The general rule of thumb for embedded ethics is the regular and prospective examination of ethical problems in AI, rather than sporadic and reactive engagement between AI developers and ethicists. The ethicist's role here is twofold: first, to do the ethical heavy lifting of finding and fine-tuning ethical theories for application in AI systems; and second, to translate this high-level ethics into applicable guidance for developers.

Limitations

The authors note several limitations to the use of embedded ethics in AI and offer an adequate response to most. However, there are two major problems facing embedded ethicists within AI development that merit closer examination.

The first problem facing ethicists is navigating competing interests within private industry, for instance between profitability or efficiency on the one hand and ethical considerations on the other. This need not be a grave problem for ethicists per se, since they will be expected to justify their arguments and provide reasons why their principles ought to be applied in certain cases. That is, navigating these conflicting interests may fall within the ethicists' purview.

The second problem is less clear-cut. Simply placing ethicists into the lifecycle of AI will not ensure coherence and robustness in approaching ethical problems. The authors acknowledge that individual ethicists may have diverging opinions and state that this is permissible so long as any viewpoint is justified and transparent. However, ensuring that individual ethicists within the same team or industry work toward the same principles, rather than diverging or conflicting in their analyses, will still require some kind of overarching high-level principle, for instance to maximize human wellbeing. With this in mind, it is not clear how ethicists can be expected to adhere to these high-level principles whilst maintaining the freedom to do 'real' ethics work. Their role might therefore shrink to the mere translation of high-level principles into practice.

Between the lines

Embedding ethics into every stage of an AI's lifecycle, from conception to deployment, would seem an obvious necessity. And yet AI ethics remains a mystical, high-level prescription of principles, impossible for computer scientists to navigate in practice. The authors offer a convincing argument in favor of embedded AI ethics and respond to most of the problems they foresee. It remains unclear, however, what role the embedded ethicist really takes on and how much freedom they will have to do their work: will embedded ethics truly introduce 'real ethicists' to the industry, or merely ethics translators?
