Summary contributed by Connor Wright, a third-year Philosophy student at the University of Exeter.
Link to original source + author at the bottom.
Mini-summary: AI Ethics has been approached from a principled angle since the dawn of the practice, drawing great inspiration from the 4 basic ethical principles of the medical ethics field. However, this paper argues that AI Ethics cannot be tackled in the same principled way as the medical profession. The paper bases this argument on 4 aspects of the medical ethics field that its AI counterpart lacks, moving from the misalignment of values in the field, to the lack of an established history to fall back on, to accountability and more. The paper concludes by offering some ways forward for the AI Ethics field, emphasising that ethics is a process, not a destination. Translating the lofty principles into actionable conventions will help reveal the true challenges facing AI Ethics, rather than treating it as something to be “solved”.
AI Ethics has been approached from a principled angle since the dawn of the practice, drawing great inspiration from the medical ethics field. However, this paper argues that AI Ethics cannot be tackled in the same principled way as the medical profession. The paper bases this argument on features of the medical ethics field that its AI counterpart lacks, and then suggests ways forward. Taking this into account, I will split this post into 3 sections. Section 1 will show what the paper believes the AI Ethics field lacks compared to medical ethics, section 2 will explain why this is the case, and section 3 will cover how it might be resolved. I will then end with my thoughts on the discussion.
Section 1: What the AI Ethics field lacks
Firstly, practitioners in the AI Ethics field lack a common aim or ‘patient’ that can align the differing interests of the institutions involved. The field is filled with practitioners of diverse backgrounds and private companies, all with varying interests. Hence, a principled approach would have to unite these differing views under the maxims it proposes. However, in order to accommodate all the different viewpoints, the principles become more and more abstract. Proposals such as ‘fair’ and ‘equal’ end up being the point of agreement for all parties, which this paper highlights as hiding the “fundamental normative and political tensions embedded” in these concepts (Mittelstadt, 2019, p. 1). For example, there are deep disagreements over what equality actually means, such as whether it amounts to egalitarianism or to strict equality of outcome for all (such as equal wage distribution). Medical ethics, by contrast, can unite around the patient, prioritising their interests and forming a focal point for the differing views within the field. This is further reinforced by medical bodies being rigorously reviewed by legally backed institutions to ensure this prioritisation takes place; no such body yet exists in the AI Ethics field. Hence, a principled approach may not be the most fruitful path for the field to undertake.
Section 2: Why is this the case?
The paper then proposes that such a principled approach is hindered by the field’s lack of an established history. There are no previous lessons to draw on to demonstrate what “good” AI is. There is no ‘AI Hippocratic Oath’ on which behaviour can be modelled, and the unpredictability of AI means that no single method can be guaranteed to always produce a ‘good’ result. Instead, each company is left to forge its own practice, tailored to its own values. As a result, each company produces its own exemplars of how ‘good’ AI is deployed, leaving little scope for principled practical advice on how to implement ethical AI.
This lack of advice then emphasises the importance of accountability when deploying AI, as there is no regulation to signify what counts as ‘bad’ AI. Yet the AI Ethics field also lacks the accountability framework needed to counterbalance this absence of regulation. With many different actors involved in processes that are hard to trace back, it is difficult to pin responsibility on any one person. The medical ethics field, by contrast, has a fixed team of actors at any one time, making a stronger case for accountability. Thus, approaching AI Ethics in the same way as the medical ethics arena may in fact be like mixing oil and water.
The paper then concludes by offering some ways forward for the AI Ethics field. Defining clear pathways that are most likely to result in ethical AI will help foster support for a more “bottom-up” (Mittelstadt, 2019, p. 9) approach to AI deployment. Such an approach would help surface the novel problems that repeatedly face the field of AI Ethics, generating methods to tackle them rather than seeing similar problems resurface from the companies at the top. This may then lead to AI deployment becoming a licensed profession, open to both large and small corporations. Such licensing could then shift the focus away from individual AI Ethics and towards organisational ethics. Individuals who corrupt the use of AI would be held accountable, as would the corporations that allowed it to happen, whose role has previously gone unquestioned. In this way, a principled approach to AI Ethics, as seen in medical ethics, would be better able to take form.
I agree with the final section of the paper, which advocates treating AI Ethics as a process rather than something to be “solved” (Mittelstadt, 2019, p. 10). The lack of accountability generated by the combination of misaligned goals and a lack of history needs to be addressed, and this cannot happen while lofty principles remain the only point of agreement. Instead, working to close the gap between the abstract and reality, through ethical practitioners and software engineers working together, will, I believe, help create actionable change and reveal the true challenges facing the AI Ethics field.
Original paper by Brent Mittelstadt: https://arxiv.org/ftp/arxiv/papers/1906/1906.06668.pdf