Montreal AI Ethics Institute

Democratizing AI ethics literacy

Embedded ethics: a proposal for integrating ethics into the development of medical AI

June 10, 2022

🔬 Research Summary by Eryn Rigley, a PhD research student at the University of Southampton, specialising in the intersection of environmental and AI ethics, as well as defence & security AI ethics.

[Original paper by Stuart McLennan, Amelia Fiske, Daniel Tigard, Ruth Müller, Sami Haddadin & Alena Buyx]


Overview: Though AI ethics frameworks are plentiful, applicable ethics guidance for AI developers remains few and far between. To translate high-level ethics guidelines into practice, the authors of this paper argue that ethics ought to be embedded into every stage of the AI lifecycle.


Introduction

On a hospital bed, in the doctor’s office, or in an operating theater, we are at our most vulnerable when we need medical attention. For this reason, medicine is perhaps the field where getting AI ethics right matters most. Yet medical AI systems continue to be rolled out without ethical consideration or foresight of what might (inevitably) go wrong. The authors of this paper argue that the gap between AI systems and our ethical principles can be closed by embedding ethics into every stage of an AI’s development, from conception to deployment.

Key Insights

The problem

AI ethics frameworks and guidelines are plentiful and have been found to converge on a handful of high-level principles: AI ought to ‘do no harm’ and be ‘transparent’ and ‘robust’. However, these high-level principles solve only half the problem. Translating them into practice, for instance when navigating real ethical dilemmas, requires further work and knowledge of how to apply high-level ethics in concrete cases.

The authors argue it would be unfair and over-demanding to expect AI developers to be equipped to handle heavy ethics work. At the same time, ‘real ethicists’ have little presence in private industry. As a result, AI ethics remains untranslated and impracticable for AI developers.

This gap between high-level AI ethics principles and practical AI development has already resulted in real world problems. Within the medical field, novel AI systems are often deployed without explicit ethical consideration or foresight, reducing patients to unwilling ā€˜guinea pigs’ for the system. A lack of practicable AI ethics in the medical field thereby violates patients’ dignity, safety, and privacy. 

The Solution: Embedded Ethics

The authors propose ‘embedded ethics’ as the solution to this gap between high-level AI ethics work and practical AI development. This work describes the embedding of ethics into the entire lifecycle of AI, from design and development to deployment. In particular, the authors envisage ‘real ethicists’ posted at various stages of this lifecycle to work with AI developers in anticipating future ethical concerns. Alternatively, if resources are constrained, regular exchanges between ethicists and other AI development team members should take place from the beginning of an AI system’s conception.

The general rule of thumb for embedded ethics is the regular and prospective examination of ethical problems in AI, rather than sporadic and reactive engagement between AI developers and ethicists. The ethicist’s role here is twofold: first, to do the ethical heavy lifting of finding and fine-tuning ethical theories for application to AI systems; and second, to translate this high-level ethics into applicable guidance for developers.

Limitations

The authors note several limitations to the use of embedded ethics in AI, and offer an adequate response to most. However, there are two major problems facing embedded ethicists within AI development that merit examination.

The first problem facing ethicists is navigating competing interests within private industry, for instance between profitability or efficiency and ethical considerations. This need not be a grave problem for the ethicists, per se, since they will be expected to justify their arguments and provide reasons why their principles ought to be applied in certain cases. That is, it may fall within the ethicists’ purview to navigate these conflicting interests. 

The second problem is less clear to navigate. Simply placing ethicists into the lifecycle of AI will not ensure coherence and robustness in approaching ethical problems. The authors acknowledge that individual ethicists may have diverging opinions and state this is permissible so long as any viewpoint is justified and transparent. However, ensuring that individual ethicists within the same team or industry are working toward the same principles, and not diverging or conflicting in their analysis, will still require some kind of high-level overarching principles, for instance to maximize the wellbeing of humans. With this in mind, it is not clear how ethicists will be expected to adhere to these high-level principles whilst maintaining the freedom to do ā€˜real’ ethics work. Their role might therefore shrink to mere translation of high-level principles to practice.

Between the lines

Embedding ethics into every stage of an AI’s lifecycle, from conception to deployment, would seem an obvious necessity. And yet AI ethics remains a mystical high-level prescription of principles, impossible for computer scientists to navigate in practice. The authors have offered a convincing argument in favor of embedded AI ethics, and are able to respond to most of the problems they foresee. It remains unclear, however, what role the embedded ethicist really takes on and how much freedom they will have to do their work: will embedded ethics introduce ‘real ethicists’ to the industry, or merely ethics translators?

