🔬 Research Summary by Eryn Rigley, a PhD research student at the University of Southampton, specialising in the intersection of environmental and AI ethics, as well as defence & security AI ethics.
[Original paper by Hubert Etienne]
Overview: Can we abstract human ethical decision making processes into a machine-readable and learnable language? Some AI researchers think so. Most notably, researchers at MIT recorded our fickle, contradictory, and error-prone moral compasses and suggested machines can follow along. This paper opposes this field of AI ethics and suggests ethical decision making should be left to the human drivers, not the cars.
Introduction
Autonomous systems populate a growing number of domains: not only low-risk environments, where they perform automated and repetitive tasks, but also high-risk environments – highways, operating theaters, and care homes – where they are starting to perform morally significant tasks. In response, a growing field within AI ethics focuses on ensuring autonomous systems are capable of making the "right" ethical choices. This paper opposes the growing interest in this area of research on the grounds that its methodology and results are unreliable and fail to advance ethics discourse. In addition, the paper argues that the deployment of autonomous systems which make ethically concerning decisions will harm rather than benefit society.
Key Insights
Moral Machine Experiment
The first section of the paper opposes the famous Moral Machine (MM) experiment, arguing that it fails to contribute to the development of "ethical" decision making in autonomous vehicles.
The MM experiment collected answers to "trolley-problem"-type moral dilemmas: for instance, whether to save the many over the few, whether to prioritise the young over the old, and so on. The experiment gathered 39.61 million answers from 1.3 million respondents across 233 countries and territories in only two years – a phenomenal source of global ethical decision making.
The Moral Machine experiment was originally posited as a merely descriptive ethics, describing what people believe is ethical, rather than a normative ethics, prescribing what people should do. However, the experiment went on to inspire the development of automated decision making based on "computational social choice", which is reducible to a public vote. Etienne opposes this on the grounds that the Moral Machine experiment is not methodologically sound enough to ground actual automated ethical decision making. The voters were disproportionately tech-savvy, and there is no way of ensuring that their votes accurately reflect what they truly feel is the most ethical thing to do. Moreover, these voters were not reasoning about what the right thing to do is but responding prima facie, which does not in itself progress knowledge of ethics. As Etienne states, "aggregating individual uninformed beliefs does not produce any common reasoned knowledge".
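To make concrete what "reducible to a public vote" means here, consider the minimal sketch below. It is a toy illustration in Python, with invented dilemma labels and vote data rather than the actual system later built on the MM dataset, showing how majority aggregation turns an "ethical decision" into nothing more than a tally of recorded responses:

```python
from collections import Counter

# Hypothetical recorded votes: for each dilemma, respondents chose either
# the first ("a") or the second ("b") listed outcome to spare.
votes = {
    ("spare_pedestrians", "spare_passengers"): ["a", "a", "b", "a", "b"],
    ("spare_young", "spare_old"): ["a", "a", "a", "b", "b"],
}

def decide(dilemma):
    """Return the outcome preferred by a simple majority of recorded votes."""
    tally = Counter(votes[dilemma])
    return dilemma[0] if tally["a"] >= tally["b"] else dilemma[1]

# The "ethical decision" collapses into counting prima facie responses.
print(decide(("spare_young", "spare_old")))               # -> spare_young
print(decide(("spare_pedestrians", "spare_passengers")))  # -> spare_pedestrians
```

On this picture, whatever beliefs the voters happened to hold – informed or not – are simply what the machine enacts, which is precisely the move Etienne objects to.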
Etienne in fact opposes the notion that autonomous systems ought to make ethical decisions at all. Autonomous systems are not moral agents, so if these systems apply automated ethical decision making, the chain of moral responsibility breaks down. Etienne also opposes voting-based ethical decision making in autonomous systems because humans do not make ethical decisions by vote, particularly in cases of car crashes, of prioritising one life over another, and of other dilemmas reducible to the "trolley-problem" style. Rather, we reason, debate, and are capable of holding – and endowed with the right to hold – our own opinions.
Etienne rounds off the paper with a brief argument against the MM as a whole. The experiment allowed voters to make decisions based on morally irrelevant criteria, including age, gender, and social status. In addition, the experiment distracted from other AI ethics issues which should receive more attention, including hyper-surveillance and terrorist hacking.
Instrumentalisation of ethical discourse
The second part of the paper opposes the "instrumentalisation of ethical discourse" around autonomous systems. Etienne argues that "the instrumental use of moral considerations as leverage to develop a favorable regulation for manufacturers has no solid foundations". A common argument in favor of ethical autonomous vehicles is that humans make grave ethical errors at the wheel, which the deployment of autonomous systems can avoid. Etienne counters that this does not entail that deploying autonomous vehicles is necessarily a good thing.
First, Etienne argues that the money spent on developing autonomous systems could instead alleviate starvation for many people. Second, autonomous systems may still kill some people, and those people would be different from the people killed in their absence; via Parfit's non-identity problem, autonomous systems would not save more people, per se, but different people. Third, so long as the decision making of autonomous systems undermines some ethical principles or the value of individuals, vast numbers of people will be wronged every day, whether they interact directly with autonomous systems or not.
Between the lines
Etienne provides strong arguments against using the MM to inform the ethical decision making of machines. The most compelling is the distinction between descriptive ethics and normative ethics: the MM was a powerful tool for the former, but this offers no significant motivation for its application to the latter.
The paper starts to unravel in the second section, however, as Etienne argues against what he labels the "instrumentalisation of ethical discourse" for the advancement of autonomous systems. Focus shifts away from the difficulty of abstracting human ethical decision making processes into computer-readable formats, which was the area of interesting discussion. Instead, Etienne seems to oppose outright any attempt at this area of research, on the grounds that it is itself immoral, and to oppose the advancement of autonomous vehicles over human-directed vehicles as a whole. The arguments here are somewhat less compelling: though it may be difficult to abstract the ways in which humans make ethical decisions, this does not mean autonomous systems will never be involved in ethical decision making.