Research Summary by Eryn Rigley, a PhD research student at the University of Southampton, specialising in the intersection of environmental and AI ethics, as well as defence & security AI ethics.
[Original paper by Stuart McLennan, Amelia Fiske, Daniel Tigard, Ruth Müller, Sami Haddadin & Alena Buyx]
Overview: Though AI ethics frameworks are plentiful, practical ethics guidance for AI developers remains scarce. To translate high-level ethics guidelines into practice, the authors of this paper argue that ethics ought to be embedded into every stage of the AI lifecycle.
Introduction
On a hospital bed, in the doctor’s office, and in an operating theater, we are at our most vulnerable when we need medical attention. For this reason, medicine is perhaps the most important field in which to get AI ethics right. Yet medical AI systems continue to be rolled out without ethical consideration or foresight of what might (inevitably) go wrong. The authors of this paper argue that the gap between AI systems and our ethical principles can be closed by embedding ethics into every stage of an AI system’s development, from conception to deployment.
Key Insights
The problem
AI ethics frameworks and guidelines are plentiful and have been found to converge around a handful of high-level principles: AI ought to “do no harm”, be “transparent”, and be “robust”. However, these high-level principles solve only half the problem of AI’s ethical quandaries. Translating these principles into practice, for instance when navigating real ethical dilemmas, requires further work and knowledge of how to apply high-level ethics.
The authors argue it would be unfair and over-demanding to expect AI developers to be equipped to handle heavy ethics work. At the same time, “real ethicists” have little participation in private industry. For this reason, AI ethics remains untranslated and impracticable for AI developers.
This gap between high-level AI ethics principles and practical AI development has already resulted in real-world problems. Within the medical field, novel AI systems are often deployed without explicit ethical consideration or foresight, reducing patients to unwilling “guinea pigs” for the system. A lack of practicable AI ethics in the medical field thereby violates patients’ dignity, safety, and privacy.
The Solution: Embedded Ethics
The authors propose “embedded ethics” as the solution to this gap between high-level AI ethics work and practical AI development. The approach embeds ethics into the entire lifecycle of AI, from design and development to deployment. In particular, the authors envisage “real ethicists” posted at various stages of this lifecycle to work with AI developers in anticipating future ethical concerns. Alternatively, if resources are constrained, regular exchanges between ethicists and other members of the AI development team should take place from the beginning of an AI system’s conception.
The general rule of thumb for embedded ethics is the regular and prospective examination of ethical problems in AI, rather than sporadic and reactive engagement between AI developers and ethicists. The ethicist’s role here is twofold: first, to do the ethical heavy lifting of identifying and fine-tuning ethical theories for application to AI systems; and second, to translate this high-level ethics into applicable guidance for developers.
Limitations
The authors note several limitations to the use of embedded ethics in AI and offer an adequate response to most. However, two major problems facing embedded ethicists within AI development merit closer examination.
The first problem facing ethicists is navigating competing interests within private industry, for instance between profitability or efficiency and ethical considerations. This need not be a grave problem for ethicists per se, since they will be expected to justify their arguments and provide reasons why their principles ought to be applied in certain cases. That is, navigating these conflicting interests may fall within the ethicists’ purview.
The second problem is less clear-cut. Simply placing ethicists into the AI lifecycle will not ensure coherence and robustness in approaching ethical problems. The authors acknowledge that individual ethicists may have diverging opinions and state that this is permissible so long as any viewpoint is justified and transparent. However, ensuring that individual ethicists within the same team or industry work toward the same principles, rather than diverging or conflicting in their analyses, will still require some kind of overarching high-level principles, for instance to maximize human wellbeing. With this in mind, it is not clear how ethicists will be expected to adhere to these high-level principles whilst maintaining the freedom to do “real” ethics work. Their role might therefore shrink to the mere translation of high-level principles into practice.
Between the lines
Embedding ethics into every stage of an AI system’s lifecycle, from conception to deployment, would seem an obvious necessity. And yet AI ethics remains a mystical high-level prescription of principles, impossible for computer scientists to navigate in practice. The authors offer a convincing argument in favor of embedded AI ethics and respond to most of the problems they foresee. It remains unclear, however, what role the embedded ethicist is really taking on and how much freedom they will have to do their work: will embedded ethics truly introduce “real ethicists” to industry, or are they mere ethics translators?