Montreal AI Ethics Institute

Democratizing AI ethics literacy


When AI Ethics Goes Astray: A Case Study of Autonomous Vehicles

June 19, 2022

šŸ”¬ Research Summary by Eryn Rigley, a PhD research student at the University of Southampton, specialising in the intersection of environmental and AI ethics, as well as defence & security AI ethics.

[Original paper by Hubert Etienne]


Overview: Can we abstract human ethical decision making processes into a machine-readable and learnable language? Some AI researchers think so. Most notably, researchers at MIT recorded our fickle, contradictory, and error-prone moral compasses and suggested machines can follow along. This paper opposes this field of AI ethics and suggests that ethical decision making should be left to human drivers, not the cars.


Introduction

Autonomous systems populate a growing number of domains: not only low-risk environments, where they perform automated and repetitive tasks, but also high-risk environments – highways, operating theaters, and care homes – where they are starting to perform morally significant tasks. In response, a growing field within AI ethics focuses on ensuring autonomous systems are capable of making the ā€˜right’ ethical choices. This paper opposes the growing interest in this area of research on the grounds that its methodology and results are unreliable and fail to advance ethics discourse. In addition, this paper argues that the deployment of autonomous systems which make ethically concerning decisions will harm rather than benefit society.

Key Insights

Moral Machine Experiment

The first section of this paper opposes the famous Moral Machine (MM) experiment, arguing that it fails to contribute to the development of ā€˜ethical’ decision making in autonomous vehicles.

The MM experiment collected answers to ā€˜trolley problem’-style moral dilemmas – for instance, whether to save the many over the few, or to prioritise the young over the old. The experiment gathered 39.61 million answers from 1.3 million respondents across 233 countries and territories in only two years – a phenomenal source of data on global ethical decision making.

The Moral Machine experiment was originally posited as a merely descriptive ethic, describing what people believe is ethical, rather than a normative ethic, prescribing what people should do. However, the experiment went on to inspire the development of automated decision making based on ā€˜computational social choice’, which is reducible to a public vote. Etienne opposes this on the grounds that the Moral Machine experiment was not methodologically sound enough to ground actual automated ethical decision making. The voters were disproportionately tech-savvy, and there is no way of ensuring that their votes accurately reflect what they truly feel is the most ethical thing to do. Moreover, these voters were not reasoning about what the right thing to do is but responding prima facie, which does not in itself advance knowledge of ethics. As Etienne states, ā€˜aggregating individual uninformed beliefs does not produce any common reasoned knowledge’.
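To make the objection concrete, here is a minimal, hypothetical sketch (not code from the paper or the MM project) of what a vote-reducible ā€˜computational social choice’ policy amounts to: the machine's ā€˜ethical’ decision is nothing more than a tally of respondents' snap answers, with no reasoning attached.

```python
from collections import Counter

def majority_decision(votes):
    """Return the most common choice among respondents.

    This is all a vote-reducible policy does: count prima facie
    answers and pick the plurality winner. No justification for
    the choice is produced or preserved.
    """
    tally = Counter(votes)
    choice, _count = tally.most_common(1)[0]
    return choice

# Hypothetical dilemma: swerve (save pedestrians, risk passenger) vs. stay.
votes = ["swerve", "stay", "swerve", "swerve", "stay"]
print(majority_decision(votes))  # swerve
```

The sketch illustrates Etienne's point: aggregating uninformed beliefs is just counting, and the output carries no more ethical reasoning than the individual inputs did.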

Etienne in fact opposes the notion that autonomous systems ought to make ethical decisions at all. Autonomous systems are not moral agents, so if these systems apply automated ethical decision making, the chain of moral responsibility breaks down. Etienne also opposes voting-based ethical decision making in autonomous systems, since humans do not make ethical decisions by vote – particularly in cases of car crashes, prioritising one life over another, and other dilemmas reducible to the ā€˜trolley problem’ style. Rather, we reason, debate, and are capable of having – and endowed with the right to have – our own opinions.

Etienne rounds off the section with a brief argument against the MM as a whole. The experiment allowed voters to make decisions based on morally irrelevant criteria, including age, gender, and social status. In addition, the experiment distracted from other AI ethics issues which should be receiving more attention, including hyper-surveillance and terrorist hacking.

Instrumentalisation of ethical discourse

The second part of the paper opposes the ā€˜instrumentalisation of ethical discourse’ around autonomous systems. Etienne argues that ā€˜the instrumental use of moral considerations as leverage to develop a favorable regulation for manufacturers has no solid foundations’. A common argument in favor of ethical autonomous vehicles is that humans make grave ethical errors at the wheel, which the deployment of autonomous systems could avoid. Etienne counters that this does not entail that deploying autonomous vehicles is necessarily a good thing.

First, Etienne argues that the money spent developing autonomous systems could instead alleviate starvation for many people. Second, autonomous systems may still kill some people, and those people would be different from the people killed in their absence. As such, via Parfit’s non-identity problem, autonomous systems would not save more people, per se, but different people. Third, so long as the decision making of autonomous systems undermines some ethical principles or the value of individuals, the rights of huge numbers of people will be violated every day, whether they interact directly with autonomous systems or not.

Between the lines

Etienne provides strong arguments against using the MM to inform the ethical decision making of machines, the most compelling of which is the distinction between descriptive ethics and normative ethics: the MM was a powerful tool for the former, but that offers no significant motivation for applying it to the latter.

The paper starts to unravel in the second section, however, as Etienne argues against what he labels the ā€˜instrumentalisation of ethics discourse’ for the advancement of autonomous systems. The focus shifts away from the difficulty of abstracting human ethical decision making processes into computer-readable formats, which was the area of interesting discussion. Instead, Etienne seems to oppose outright any attempt in this area of research, on the grounds that it is itself immoral, as well as the advancement of autonomous vehicles over human-driven vehicles as a whole. The arguments here are somewhat less compelling. Though it may be difficult to abstract the ways in which humans make ethical decisions, this does not mean autonomous systems will never be involved in ethical decision making.



  • Ā© 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.