
When AI Ethics Goes Astray: A Case Study of Autonomous Vehicles

June 19, 2022

šŸ”¬ Research Summary by Eryn Rigley, a PhD research student at the University of Southampton, specialising in the intersection of environmental and AI ethics, as well as defence & security AI ethics.

[Original paper by Hubert Etienne]


Overview: Can we abstract human ethical decision-making processes into a machine-readable and learnable language? Some AI researchers think so. Most notably, researchers at MIT recorded our fickle, contradictory, and error-prone moral compasses and suggested machines can follow along. This paper opposes this field of AI ethics and argues that ethical decision making should be left to the human drivers, not the cars.


Introduction

Autonomous systems populate a growing number of domains, not only low-risk environments where they perform automated and repetitive tasks, but also high-risk environments – highways, operating theatres, and care homes – where they are starting to perform morally significant tasks. In response, a growing field within AI ethics focuses on ensuring autonomous systems are capable of making the ā€˜right’ ethical choices. This paper opposes the growing interest in this area of research on the grounds that its methodology and results are unreliable and fail to advance ethics discourse. In addition, the paper argues that deploying autonomous systems which make ethically concerning decisions will harm rather than benefit society.

Key Insights

Moral Machine Experiment

The first section of this paper opposes the famous Moral Machine (MM) experiment and argues that it fails to contribute to the development of ā€˜ethical’ decision making in autonomous vehicles.

The MM experiment posed ā€˜trolley-problem’-type moral dilemmas – for instance, whether to save the many over the few, or to prioritise the young over the old – and collected 39.61 million answers from 1.3 million respondents across 233 countries and territories in only two years: a phenomenal record of global ethical decision making.

The Moral Machine experiment was originally presented as a piece of descriptive ethics, describing what people believe is ethical, rather than normative ethics, prescribing what people ought to do. However, the experiment went on to inspire the development of automated decision making based on ā€˜computational social choice’, which is reducible to a public vote. Etienne opposes this on the grounds that the Moral Machine experiment was not methodologically sound enough to ground actual automated ethical decision making. The voters were disproportionately tech-savvy, and there is no way of ensuring that their votes accurately reflect what they truly feel is the most ethical thing to do. Moreover, these voters were not reasoning about what the right thing to do is but responding prima facie, which does not in itself advance knowledge of ethics. As Etienne states, ā€˜aggregating individual uninformed beliefs does not produce any common reasoned knowledge’.
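To make the ā€˜reducible to a public vote’ point concrete, here is a minimal sketch of plurality-vote aggregation. It is purely illustrative and not from Etienne's paper or the MM codebase; the vote data and the function name are hypothetical:

```python
from collections import Counter

# Hypothetical Moral Machine-style responses: each vote names which
# party an autonomous vehicle should spare in a forced dilemma.
votes = ["pedestrians", "passengers", "pedestrians",
         "pedestrians", "passengers"]

def aggregate_by_majority(votes):
    """Reduce individual moral judgments to a single policy by plurality vote.

    This is the 'public vote' reduction Etienne criticises: the output
    merely describes what most respondents chose; it is not a reasoned norm.
    """
    decision, count = Counter(votes).most_common(1)[0]
    return decision, count / len(votes)

decision, share = aggregate_by_majority(votes)
print(f"Aggregated policy: spare {decision} ({share:.0%} of votes)")
```

On Etienne's view, nothing in this aggregation step turns uninformed individual beliefs into reasoned ethical knowledge; it only tallies them.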

Etienne in fact opposes the notion that autonomous systems ought to make ethical decisions at all. Autonomous systems are not moral agents, so when these systems apply automated ethical decision making, the chain of moral responsibility breaks down. Etienne also opposes voting-based ethical decision making in autonomous systems, since humans do not make ethical decisions by vote, particularly in cases of car crashes, of prioritising one life over another, and of other dilemmas reducible to the ā€˜trolley-problem’ style. Rather, we reason, we debate, and we are capable of holding, and endowed with the right to hold, our own opinions.

Etienne rounds off the paper with a brief argument against the MM as a whole. The experiment allowed voters to make decisions based on morally irrelevant criteria, including age, gender, and social status. In addition, the experiment distracted from other AI ethics issues that deserve more attention, including hyper-surveillance and terrorist hacking.

Instrumentalisation of ethical discourse

The second part of this paper opposes the ā€˜instrumentalisation of ethical discourse’ around autonomous systems. Etienne argues that ā€˜the instrumental use of moral considerations as leverage to develop a favorable regulation for manufacturers has no solid foundations’. A common argument in favour of ethical autonomous vehicles is that humans make grave ethical errors at the wheel, which the deployment of autonomous systems could avoid. Etienne counters that this does not entail that deploying autonomous vehicles is necessarily a good thing.

First, Etienne argues that the money spent developing autonomous systems could instead alleviate starvation for many people. Second, autonomous systems may still kill some people, and those people would be different from the people who would have been killed without autonomous systems; as such, via Parfit’s non-identity problem, autonomous systems would not save more people per se, but different people. Third, so long as the decision making of autonomous systems undermines ethical principles or the value of individuals, huge numbers of people will be wronged every day, whether they interact directly with autonomous systems or not.

Between the lines

Etienne provides strong arguments against using MM to inform the ethical decision making of machines, the most compelling of which is the distinction between descriptive ethics and normative ethics: MM was a powerful tool for the former, but this offers no significant motivation for its application to the latter.

The paper starts to unravel in the second section, however, as Etienne argues against what he labels the ā€˜instrumentalisation of ethical discourse’ in the advancement of autonomous systems. The focus shifts away from the difficulty of abstracting human ethical decision-making processes into computer-readable formats, which was the area of interesting discussion. Instead, Etienne seems to oppose outright any research in this area, on the grounds that it is itself immoral, and to oppose the advancement of autonomous vehicles over human-driven vehicles altogether. The arguments here are somewhat less compelling: though it may be difficult to abstract the ways in which humans make ethical decisions, this does not mean autonomous systems will never be involved in ethical decision making.

