🔬 Research Summary by Max Krueger, a consultant at Accenture with an interest in both the long- and short-term implications of AI on society.
[Original paper by Stuart McLennan, Amelia Fiske, Daniel Tigard, Ruth Müller, Sami Haddadin, and Alena Buyx]
Overview: High-level ethical frameworks serve an important purpose, but it is not clear how they influence the technical development of AI systems; there is a skills gap between technical AI development and the implementation of such frameworks. This paper explores an embedded ethics approach to AI development in which ethicists and developers work in lockstep to address ethical issues and implement technical solutions within the healthcare domain.
Introduction
There is a critical implementation gap between current medical AI ethics frameworks and medical AI development. A literature review of high-level medical AI frameworks shows that they are converging on principles similar to those found in biomedical research and clinical practice, but the research team doubts that these high-level principles translate readily into technical development. The authors therefore suggest embedding ethics into the AI development process rather than relying on developers to implement ethics frameworks in their own practice. This approach aims to identify and address ethical issues early in the development process and to foster regular exchanges between ethicists and development teams.
High-level frameworks provide guidelines for teams and organizations to follow, but it remains uncertain how effective these frameworks are within development teams. The research team instead suggests embedding ethics into the development process:
“[We] use ‘embedded ethics’ in a wide sense, namely to refer to the ongoing practice of integrating ethics into the entire development process—here ethics becomes a truly collaborative, interdisciplinary enterprise.”
The authors identify four main domains of embedded ethics: aims, integration, practice, and expertise/training.
Aims
An embedded ethics approach aims to “develop AI technologies that are ethically and socially responsible, … that benefit and do not harm individuals and society.” Ethics should be integrated into the development process from kickoff to deployment and should address issues of ethical uncertainty as they arise; ultimately, this is a collaborative process for working through the issues that come up. In the medical context, embedded ethics may draw on existing approaches such as clinical ethics advisory panels. The authors draw a clear distinction between a process that seeks to increase the ethical awareness and responsiveness of a project and one that merely seeks to increase its marketability; the latter raises concerns about “ethics washing”.
Integration
Integration can take many forms depending on the organization and the resources available. The highest standard would be to include an ethicist, or a team of ethicists, as dedicated members of the project team, an approach demonstrated by Jeantine Lunshof at Harvard’s Wyss Institute for Biologically Inspired Engineering. An alternative is to make shared ethics resources available to all project teams, for example a centrally organized ethicist or ethics team that consults with many projects simultaneously. The key to making this arrangement successful is holding regular exchanges between the development and ethics teams rather than consulting the ethics team only when issues arise; this introduces a level of rigor and structure into the program. Regardless of the arrangement, a pre-established working agreement should be developed to operationalize the interaction between ethicists and development teams.
Practice
The authors believe a rigorous normative analysis should be the default response to issues identified throughout the development process, including “explaining and clarifying complex ethical issues so as to allow a clearer understanding of them”. There is currently no standard approach to such analysis in AI ethics, and the authors do not advocate a prescriptive one, but they argue that certain criteria should be followed:
- Make clear and explicit the theoretical ethical positions being invoked in a given normative analysis.
- Explain and justify why the positions are suitable to meet the specific goals of the project.
Expertise/Training
Expertise and training are paramount to a successful embedded ethics program. Embedded ethicists can come from a variety of backgrounds, but “it is important that embedded ethicists have appropriate technology-related knowledge and skills”. Much as for a data scientist embedded in a business unit, domain knowledge is the currency through which an embedded ethicist has impact. Where ethicists lack this knowledge, time should be set aside for them to acquire it.
Addressing uniqueness in medical AI
The medical application of AI raises some unique concerns. The limited explainability of neural network systems makes it difficult to give reasons for a given output, raising concerns about how clinical responsibility is shared between practitioners and their non-human colleagues. The authors also note that the very nature of medicine is changing as AI alters relationships “between patients and practitioners, and between practitioners and the technical and scientific communities”. Embedded ethics is a way to manage these changing relationships and to ensure that social and ethical values are accounted for.
There also remains a significant regulatory gap in the application of AI in medicine. Medical AI applications are often not tested as rigorously as other medical technologies, and testing is often conducted after development is complete, when it is no longer practical to influence design decisions. In light of this, an embedded ethics approach can have a particularly large impact on ethical outcomes by addressing issues early in the process, saving both time and money.
Between the lines
Embedded ethics seems like a viable approach given the highly specific nature of the work. Important to the success of such a program is an endorsement from leadership that grants ethicists the latitude to make decisions that may not explicitly benefit the bottom line. Additional scrutiny of how these teams work may also be needed for an embedded ethics program to be truly impactful; this might include explicit working agreements that teams follow to ensure everyone is accountable for systematically working through ethical issues. Embedded ethics bridges the gap between high-level frameworks and technical development, and it could be very successful if applied in the right environment, both within and beyond the medical community.