
When AI Ethics Goes Astray: A Case Study of Autonomous Vehicles

June 19, 2022

Research Summary by Eryn Rigley, a PhD research student at the University of Southampton, specialising in the intersection of environmental and AI ethics, as well as defence & security AI ethics.

[Original paper by Hubert Etienne]


Overview: Can we abstract human ethical decision-making processes into a machine-readable and learnable language? Some AI researchers think so. Most notably, researchers at MIT recorded our fickle, contradictory, and error-prone moral compasses and suggested machines can follow along. This paper opposes this field of AI ethics and suggests ethical decision making should be left up to the human drivers, not the cars.


Introduction

Autonomous systems populate a growing number of domains: not only low-risk environments, where they perform automated and repetitive tasks, but also high-risk environments – highways, operating theaters, and care homes – where they are starting to perform morally significant tasks. In response, a growing field within AI ethics focuses on ensuring autonomous systems are capable of making the ‘right’ ethical choices. This paper opposes the growing interest in this area of research on the grounds that its methodology and results are unreliable and fail to advance ethics discourse. In addition, this paper argues that the deployment of autonomous systems which make ethically concerning decisions will harm rather than benefit society.

Key Insights

Moral Machine Experiment

The first section of this paper opposes the famous Moral Machine (MM) experiment and argues that the experiment fails to contribute to the development of ‘ethical’ decision making in autonomous vehicles.

The MM experiment collected answers to ‘trolley-problem’-type moral dilemmas: for instance, whether to save the many over the few, prioritise the young over the old, and so on. It gathered 39.61 million answers from 1.3 million respondents across 233 countries and territories in only two years – a phenomenal source of data on global ethical decision making.

The MM experiment was originally posited as a merely descriptive ethics, describing what people believe is ethical, rather than a normative ethics, prescribing what people should do. However, the experiment went on to inspire the development of automated decision making based on ‘computational social choice’, which is reducible to a public vote. Etienne opposes this on the grounds that the MM experiment was not methodologically sound enough to ground actual automated ethical decision making. The voters were disproportionately tech-savvy, and there is no way of ensuring that their votes accurately reflect what they truly feel is the most ethical thing to do. Moreover, these voters are not reasoning about what the right thing to do is but responding prima facie, which does not in itself advance knowledge of ethics. As Etienne states, ‘aggregating individual uninformed beliefs does not produce any common reasoned knowledge’.
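To make the ‘reducible to a public vote’ point concrete, here is a minimal, purely illustrative Python sketch of majority-vote aggregation over responses to a single dilemma. The function name, response labels, and numbers are hypothetical; they are not drawn from the MM dataset or from Etienne’s paper.

from collections import Counter

def aggregate_by_vote(responses):
    """Tally responses to one dilemma and return the majority outcome
    along with its share of the vote (illustrative only)."""
    tally = Counter(responses)
    outcome, votes = tally.most_common(1)[0]
    return outcome, votes / len(responses)

# Hypothetical answers to a single trolley-style dilemma: should the
# vehicle swerve to spare pedestrians, or hold course to spare passengers?
responses = [
    "spare_pedestrians", "spare_passengers", "spare_pedestrians",
    "spare_pedestrians", "spare_passengers",
]

choice, share = aggregate_by_vote(responses)
print(f"Majority choice: {choice} ({share:.0%} of votes)")
# Etienne's objection: this tally only describes what respondents clicked;
# it does not, by itself, establish what a vehicle ought to do.

Aggregation of this kind captures a descriptive snapshot of preferences, which is precisely why, on Etienne’s view, it cannot stand in for normative reasoning.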

Etienne in fact opposes the notion that autonomous systems ought to make ethical decisions at all. Autonomous systems are not moral agents, so if these systems apply automated ethical decision making, the chain of moral responsibility breaks down. Etienne also opposes voting-based ethical decision making in autonomous systems, since humans do not make ethical decisions by vote, particularly in cases of car crashes, prioritising one life over another, and other dilemmas that reduce to the ‘trolley-problem’ style. Rather, we reason, debate, and are capable of having, and endowed with the right to have, our own opinions.

Etienne rounds off the paper with a brief argument against the MM as a whole. The experiment allowed voters to make decisions based on morally irrelevant criteria, including age, gender, and social status. In addition, the experiment distracted from other AI ethics issues that should be receiving more attention, including hyper-surveillance and terrorist hacking.

Instrumentalisation of ethical discourse

The second part of this paper opposes the ‘instrumentalisation of ethical discourse’ in autonomous systems. Etienne argues that ‘the instrumental use of moral considerations as leverage to develop a favorable regulation for manufacturers has no solid foundations’. A common argument in favor of ethical autonomous vehicles is that humans make grave ethical errors at the wheel, which the deployment of autonomous systems can avoid. Etienne counters that this does not entail that deploying autonomous vehicles is necessarily a good thing.

First, Etienne argues that the money spent on developing autonomous systems could instead alleviate starvation for many people. Second, autonomous systems may still kill some people, and those people would be different from the people killed in their absence; via Parfit’s non-identity problem, autonomous systems would not save more people, per se, but different people. Third, so long as the decision making of autonomous systems undermines some ethical principles or the value of individuals, huge numbers of people will be wronged every day, whether they interact directly with autonomous systems or not.

Between the lines

Etienne provides strong arguments against the use of MM to inform the ethical decision making of machines, the most compelling of which is the distinction between descriptive ethics and normative ethics. MM was a powerful tool for the former, but that offers no significant motivation for its application to the latter.

The paper starts to unravel in the second section, however, as Etienne argues against what is labeled the ‘instrumentalisation of ethics discourse’ for the advancement of autonomous systems. The focus shifts away from the difficulty of abstracting human ethical decision-making processes into computer-readable formats, which was the area of interesting discussion. Instead, Etienne seems to oppose outright any attempt at this area of research, on the grounds that it is itself immoral, and to oppose the advancement of autonomous vehicles over human-directed vehicles as a whole. The arguments here are somewhat less compelling: though it may be difficult to abstract the ways in which humans make ethical decisions, this does not mean autonomous systems will never be involved in ethical decision making.

