
Ethics and Governance of Trustworthy Medical Artificial Intelligence

December 3, 2023

🔬 Research Summary by Sofia Woo, a recent McGill University graduate who studied History and Political Science with an emphasis on scientific history and political economics.

[Original paper by Jie Zhang and Zong-Ming Zhang]


Overview: The increased use of medical AI in clinical settings has raised concerns about the ethics and trustworthiness of these tools. This paper takes a multidisciplinary approach to analyze the factors that influence the trustworthiness of medical AI at the design and application levels and to offer governance recommendations.


Introduction

Can we trust machines with our medical data, one of the most personal and private aspects of our lives? As AI is increasingly used in medical institutions to assist clinicians in diagnosing and treating patients, it has added a new dimension to bioethics. The paper analyzes this topic through a multidisciplinary approach at the design level (which refers to the technology’s reliability) and the application level (which refers to the human impact of AI use), framing the issue through the lenses of technology, law, and healthcare stakeholders. In tackling these issues, the authors focus on five subjects that affect medical AI’s trustworthiness: data quality, algorithmic bias, opacity, safety and security, and responsibility attribution. They found that while AI systems can successfully streamline treatment processes, they are only as good as the data provided to them. Because the data fed to medical AI tools is at times non-standardized, unstructured, and error-prone, the authors propose measures to make these tools more ethical and trustworthy.

Key Insights

The Importance of Data Quality

Because medical AI tools learn from the data that clinicians input, a significant factor in how trustworthy and accurate these systems are is the quality of that data. It is not just data accuracy and representativeness that matter but also data annotation. The paper highlights that inconsistencies introduced during the annotation process pose the greatest problems. For example, differences in doctors’ biases and labeling formats, along with discrepancies between hospitals’ equipment and software, have fed confusing and conflicting data into medical AI systems.
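
To make the annotation-consistency problem concrete, here is a minimal sketch (not from the paper) of one standard check, Cohen’s kappa, computed on hypothetical labels from two radiologists; the annotator names and labels are illustrative assumptions.

```python
# Minimal sketch (not from the paper): checking inter-annotator agreement
# on diagnostic labels before they are used to train a medical AI system.
# The annotators and labels below are hypothetical.
from sklearn.metrics import cohen_kappa_score

# Labels assigned to the same ten scans by two hypothetical radiologists.
radiologist_a = ["tumor", "normal", "tumor", "normal", "normal",
                 "tumor", "normal", "tumor", "tumor", "normal"]
radiologist_b = ["tumor", "normal", "normal", "normal", "normal",
                 "tumor", "normal", "tumor", "normal", "normal"]

# Cohen's kappa corrects raw percent agreement for agreement by chance;
# values well below 1.0 flag annotation inconsistency worth auditing.
kappa = cohen_kappa_score(radiologist_a, radiologist_b)
print(f"Cohen's kappa: {kappa:.2f}")
```

A kappa near 1.0 indicates near-perfect agreement; the two mismatched labels here yield a kappa of 0.60, only moderate agreement, which would warrant reviewing the labeling protocol before training.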

The Two Sides of Algorithmic Bias

There are two sources of algorithmic bias: human-induced and data-induced. Human-induced bias can be intentional or unintentional, and these biases (like data-induced ones) are often reinforced and amplified as new iterations of algorithms accumulate past data. Data-induced bias arises when the data is unrepresentative or insufficient. While there are ways to combat algorithmic bias (such as ensuring the data is diverse and has been reviewed before being fed to medical AI tools), the black-box nature of deep learning makes biases difficult to detect, which in turn makes it easier for discriminatory treatment, whether intentional or unintentional, to go unnoticed.
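
As one hedged illustration of how such bias can be surfaced (the paper itself does not prescribe a specific metric), the sketch below computes a demographic parity difference on hypothetical binary predictions for two patient groups.

```python
# Minimal sketch (not the authors' method): auditing a simple group-fairness
# metric on model outputs. The groups and predictions are hypothetical.
import numpy as np

# Binary "recommend follow-up treatment" predictions for two patient groups.
preds_group_a = np.array([1, 1, 0, 1, 1, 0, 1, 1])  # well-represented group
preds_group_b = np.array([1, 0, 0, 0, 1, 0, 0, 1])  # under-represented group

# Demographic parity difference: the gap between positive-prediction rates.
rate_a = preds_group_a.mean()  # 0.750
rate_b = preds_group_b.mean()  # 0.375
print(f"Positive rate (group A): {rate_a:.3f}")
print(f"Positive rate (group B): {rate_b:.3f}")
print(f"Demographic parity difference: {rate_a - rate_b:.3f}")
```

A gap of 0.375 between the groups’ positive rates would prompt a review of whether the data for group B was unrepresentative or insufficient, exactly the data-induced failure mode the authors describe.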

Opacity and Lacking Information

Connected to deep learning is the concept of opacity. A machine’s opaqueness creates a lack of understanding among users; in turn, this negatively affects how patients and clinicians view AI tools and can foster distrust toward clinicians who use them. The paper outlines three main reasons opacity exists: algorithms are protected as trade secrets, algorithms are technically complex, and lay people lack the background to interpret them. Explainable AI (XAI) has been proposed as a possible solution. However, there is often an inverse relationship between performance and transparency, with the highest-performing AI tools usually being the least transparent.
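
As a hedged example of what XAI can look like in practice (the paper does not endorse a particular technique), the sketch below uses permutation importance, a model-agnostic method, to explain an otherwise opaque classifier trained on synthetic data; the feature names are hypothetical.

```python
# Minimal sketch (illustrative only): a model-agnostic XAI technique,
# permutation importance, applied to an opaque model on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # three synthetic "clinical" features
# The outcome is driven almost entirely by the first feature.
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle one feature at a time and measure how much accuracy drops:
# large drops mean the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["age", "biomarker", "noise"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

Such post-hoc explanations do not remove the underlying opacity of the model itself, which is why the performance-transparency trade-off noted above persists.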

The Issue of Safety and Security

Risks involving machine errors, cybersecurity, and the constant need for adequate testing are other crucial aspects of medical AI trustworthiness. Blind spots in the data (as explained above), hacking risks, and the question of how to proceed with treatment when the machine and clinician disagree are all ethical issues that need to be addressed.

An Independent Entity? 

With the above four aspects in mind, whether AI can be an independent legal entity subject to liability remains an evolving debate. Current medical AI tools do not carry their own consciousness, but what should institutions and AI developers do if machines reach that point? The paper asserts that technology should serve to advance human society; as such, machine autonomy should not displace human subjectivity. In other words, the authors argue that measures should be taken to prevent the development of fully independent AI systems that function beyond human control.

Incorporating Bioethics Into Medical AI

Bioethics principles must be applied for medical AI to gain clinician and patient trust, and one of the biggest issues is bridging the gap between promising ethical frameworks and actual applications. The authors propose embedding ethical algorithms into AI systems. Regardless of which approach is taken, however, the paper suggests that developers and clinicians should account for the nuances of different medical cases rather than impose universal ethics guidelines.

The authors also argue that countries should add liability fees to the selling prices of AI tools, and they promote a government- or industry-led insurance and reserve system in which public and private parties pay fees into an independent pool of funds that covers legal liability for medical AI tools.

With regard to data, hospitals should better structure and standardize data annotation to produce less biased medical treatments and recommendations. Hospitals, AI developers, and the government should also work closely to ensure the technology is used safely and responsibly. If the government establishes an AI oversight committee, medical AI tools can be better regulated, especially when algorithms are updated and thus require new regulatory review.

Between the lines

Incorporating bioethics principles into medical AI algorithms is undoubtedly crucial to creating trustworthy technology that serves to advance society. While the paper highlights multiple critical topics and offers robust solutions, gaps and questions remain. More research is needed on when these AI tools could be considered conscious of their own actions and, by extension, when liability falls on humans or on the machines. Additionally, while the authors’ proposed liability fees and public-private insurance systems are interesting points to consider, they may be difficult to establish in countries such as the United States, where the healthcare system is deeply fragmented. In markets with many competing private healthcare companies, it should also be recognized that these parties do not necessarily have patients’ best interests at heart; they tend to prioritize profit over patient well-being, adding another obstacle to instituting ethical medical AI.

