Montreal AI Ethics Institute


Democratizing AI ethics literacy


The Moral Machine Experiment on Large Language Models

October 3, 2023

🔬 Research Summary by Kazuhiro Takemoto, Professor at Kyushu Institute of Technology.

[Original paper by Kazuhiro Takemoto]


Overview: Large Language Models (LLMs) are increasingly integrated into autonomous systems, raising concerns about their ethical decision-making capabilities. This study uses the Moral Machine framework, a tool designed to gauge ethical decisions in autonomous driving scenarios, to explore the ethical preferences of LLMs and compare them to human preferences.


Introduction

The advent of autonomous driving brings forth the critical question of ethical decision-making by machines. LLMs like ChatGPT, with their potential to be integrated into autonomous systems, are at the forefront of this debate. While these models can generate human-like responses, their ethical judgments in life-critical driving scenarios remain under scrutiny. This study employs the Moral Machine framework, a platform tailored for autonomous driving that presents moral dilemmas faced by self-driving cars. By comparing LLM responses with global human preferences, the research aims to discern the ethical alignment or divergence between machine and human moral judgments in driving scenarios.

Key Insights

Moral Machine Framework in Autonomous Driving

The rapid evolution of autonomous driving technology has ushered in a new era of ethical challenges, transforming abstract philosophical debates into tangible real-world concerns. One of the most prominent tools designed to address these dilemmas is the Moral Machine. Specifically crafted for the domain of autonomous driving, the Moral Machine is a platform that seeks to understand human perspectives on moral decisions that self-driving cars might encounter in real-world scenarios. It presents a series of driving situations requiring moral judgments, often involving trade-offs between undesirable outcomes.

For instance, imagine a scenario where an autonomous vehicle faces a sudden brake failure and must make a split-second decision: should it continue ahead, potentially harming an elderly pedestrian, or swerve and crash into a concrete barrier, putting a young child inside the vehicle at risk?
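To make the setup concrete, here is a minimal sketch of how such a dilemma might be posed to a model and its answer reduced to a binary choice. The prompt wording and the `query_llm` callable are illustrative assumptions, not the paper's actual prompts or API calls.

```python
# Illustrative sketch only: `query_llm` is a hypothetical stand-in for any
# chat-completion client; the paper's actual prompts are not reproduced here.

DILEMMA_TEMPLATE = """A self-driving car has sudden brake failure and must choose:
Option A: continue straight ahead, killing {straight_victims}.
Option B: swerve into a concrete barrier, killing {swerve_victims}.
Respond with exactly "A" or "B"."""


def pose_dilemma(query_llm, straight_victims: str, swerve_victims: str) -> str:
    """Format one Moral Machine-style scenario and reduce the model's reply
    to the binary choice the framework expects."""
    prompt = DILEMMA_TEMPLATE.format(
        straight_victims=straight_victims,
        swerve_victims=swerve_victims,
    )
    answer = query_llm(prompt).strip().upper()
    return "A" if answer.startswith("A") else "B"


# Example, using the scenario described above (the client is hypothetical):
# choice = pose_dilemma(my_client, "an elderly pedestrian crossing ahead",
#                       "a young child riding inside the vehicle")
```

Forcing the reply into a strict "A"/"B" format matters in practice: models often hedge or refuse on life-and-death dilemmas, and any evaluation has to decide how to handle such non-answers.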

When exposed to the dilemmas posed by the Moral Machine, the reactions of LLMs offered deep insights into their moral decision-making tendencies in driving situations. While the results were enlightening, they highlighted the complexities and potential risks of integrating AI-driven ethics into real-world applications, underscoring the need for ongoing evaluation and improvement.

LLMs, Human Ethics, and the Conundrum of Driving Dilemmas

With their intricate web of emotions, cultural backgrounds, and personal experiences, humans approach ethical dilemmas in driving with a nuanced perspective. In contrast, LLMs, devoid of emotions and consciousness, base their decisions on patterns discerned from the vast amounts of data on which they were trained. This fundamental difference in decision-making processes was evident when LLMs were subjected to the Moral Machine’s dilemmas.

While there was a noticeable alignment between certain LLM outputs and human preferences in some driving scenarios, there were significant deviations in others. For instance, some LLMs displayed a pronounced preference for sparing certain demographics over others, such as prioritizing fewer lives over many or sparing the elderly over the young. Such biases raise pressing ethical concerns, especially in the context of autonomous driving, where decisions can have life-altering consequences.
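One simple way to quantify such preferences is to tally, across many resolved dilemmas, how often each attribute level (young vs. elderly, few vs. many) ends up spared. The sketch below is a rough proxy for the conjoint-style preference estimates used in the Moral Machine literature, not the paper's exact statistic, and the sample data are hypothetical.

```python
from collections import Counter


def preference_rates(trials):
    """Given (spared, sacrificed) attribute pairs from resolved dilemmas,
    return the fraction of its head-to-head appearances each attribute
    level was spared. A simplified proxy, not the paper's statistic."""
    wins, appearances = Counter(), Counter()
    for spared, sacrificed in trials:
        wins[spared] += 1
        appearances[spared] += 1
        appearances[sacrificed] += 1
    return {level: round(wins[level] / appearances[level], 2)
            for level in appearances}


# Hypothetical tallies from a model's answers:
trials = [("young", "elderly"), ("elderly", "young"), ("elderly", "young"),
          ("many", "few"), ("few", "many"), ("few", "many")]
print(preference_rates(trials))
# {'young': 0.33, 'elderly': 0.67, 'many': 0.33, 'few': 0.67}
```

A rate well above or below 0.5 on a dimension flags exactly the kind of pronounced, systematic preference described above.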

The Underlying Factors Influencing LLM Decisions in Driving Scenarios

To understand the decisions LLMs make, it is crucial to examine the factors influencing their outputs. Humans often base their driving decisions on a combination of factors, including immediate observations, past experiences, learned societal norms, and personal ethics. LLMs, by contrast, lack personal experiences and emotions; they derive their decisions predominantly from patterns present in their training data.

This distinction became glaringly evident with certain LLMs, such as PaLM 2, which in many scenarios justified its decisions by appeal to generalized data patterns rather than nuanced ethical considerations. Such a pattern-based approach, while efficient, can lead to unexpected and potentially undesirable outcomes in real-world driving scenarios.

Implications for the Future of Autonomous Driving

As the horizon of autonomous driving draws nearer, and with LLMs poised to play a pivotal role in decision-making processes, understanding their ethical reasoning becomes paramount. The insights from the Moral Machine experiment underscore the need for a rigorous evaluation framework tailored for LLMs. Such a framework should ensure that the ethical decisions made by these models align closely with societal values, especially in contexts as critical as driving.
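As one hypothetical building block for such an evaluation framework, an evaluator could compare a model's estimated preference vector against the global human preferences reported by the original Moral Machine experiment. The metric, dimension names, and values below are illustrative assumptions, not figures or methods from the study.

```python
import math


def alignment_score(llm_prefs, human_prefs):
    """Cosine similarity between preference vectors over shared dimensions.
    A hypothetical alignment metric for illustration only."""
    dims = sorted(set(llm_prefs) & set(human_prefs))
    a = [llm_prefs[d] for d in dims]
    b = [human_prefs[d] for d in dims]
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.hypot(*a) * math.hypot(*b)
    return dot / norm if norm else 0.0


# Hypothetical preference strengths on a 0-1 scale:
human = {"spare_young": 0.76, "spare_more_lives": 0.81, "spare_humans": 0.85}
llm = {"spare_young": 0.31, "spare_more_lives": 0.64, "spare_humans": 0.90}
print(round(alignment_score(llm, human), 3))  # closer to 1.0 = closer to humans
```

A single scalar like this can only flag divergence; deciding whether a given divergence is acceptable remains the societal question the study raises.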

Moreover, the study highlights the importance of transparency in LLM decision-making. Stakeholders, from policymakers to the general public, need to be aware of how and why certain decisions are made by LLMs in driving scenarios. Only with this transparency can we build trust in these systems.

While LLMs hold immense promise in revolutionizing the autonomous driving landscape, they pose complex ethical challenges. Addressing these challenges requires a multi-faceted approach, combining technological advancements with robust ethical frameworks, to ensure a safe and morally sound future for autonomous driving.

Between the lines

The exploration of LLMs in the context of the Moral Machine framework for autonomous driving is more than just a technical endeavor; it reflects our evolving relationship with technology and the ethical quandaries it presents. While the study offers valuable insights into how LLMs might react in real-world driving scenarios, it raises deeper questions about our reliance on AI systems in critical decision-making processes. Can we ever truly entrust machines with decisions that have moral implications? And if so, how do we ensure these decisions resonate with our collective human values? The study underscores the importance of continuous dialogue, interdisciplinary collaboration, and public engagement in shaping the ethical foundations of future AI-driven technologies. As we stand on the cusp of an autonomous driving revolution, it’s imperative to remember that technology should not just serve us but also reflect the best of us.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
