
Building a Credible Case for Safety: Waymo’s Approach for the Determination of Absence of Unreasonable Risk

June 24, 2023

🔬 Research summary by Dr. Francesca Favaro, who leads the Safety Best Practices team at Waymo (formerly the Google Self-Driving Car Project), where she manages the development of the company's safety case and works on external engagement with standardization bodies.

[Original paper by Francesca Favaro, Laura Fraade-Blanar, Scott Schnelle, Trent Victor, Mauricio Peña, Johan Engstrom, John Scanlon, Kris Kusano, and Dan Smith]


Overview: Autonomous driving technology holds the potential to revolutionize on-road transportation. It can break down barriers to mobility access for disabled individuals, improve the safety of all road users, and transform the way we travel from point A to point B. But how do we assess the safety of an autonomous vehicle and judge whether a company has done sufficient due diligence to field a product on public roads? A safety case helps answer those questions, and this paper unpacks the approach taken by Waymo (formerly the Google Self-Driving Car Project).


Introduction

Decades of scientific research in system safety and risk management, spanning domains from rail and automotive to aviation, spanning development and standardization, and spanning the public, private, and military sectors, have informed the current definition of safety as the absence of unreasonable risk. This definition recognizes that no activity in life can be undertaken without any risk, and it is therefore grounded in the need to establish public consensus on what counts as an acceptable level of risk.

Industry standards further define acceptable risk as risk grounded in valid societal moral concepts. As of today, no agreed-upon benchmark for evaluating the safety of autonomous vehicles exists, and experts and safety advocates have argued that the safety of such complex systems cannot be reduced to a single number. The answer brought forth in this paper is a safety case: the explicit logical argument, complemented by supporting evidence, for determining the absence of unreasonable risk. The authors provide foundational thinking for the entire industry on how a system is determined to be ready for deployment, justifying both that the set of acceptance criteria employed for the safety determination is sufficient and that their evaluation (and the associated methods) is credible.

Key Insights 

What Does “Safe” Even Mean? Defining a Case for Safety

Several industries have historically used the concept of a safety case when formally asserting and demonstrating how an adequate level of safety may be achieved, an analysis often undertaken in the wake of tragedies such as the Piper Alpha platform explosion and the Columbia space shuttle disintegration. Over the years, the definition of a safety case has been refined, culminating in a standardized formulation for autonomous products in 2020. That definition reads:

“A structured argument, supported by a body of evidence that provides a compelling, comprehensible, and valid case that a system is safe for a given application in a given environment”

UL 4600

A safety case for fully autonomous operations is thus a formal way to explain how a company determines that an AV system is safe enough to be deployed on public roads without a human driver, and it includes evidence to support that determination. It involves an explanation of the system, the methodologies used to develop it, the metrics used to validate it, and the actual results of validation tests. Building a case for safety requires great engineering rigor and scholarly review, which means including much more detail and context than what’s usually disclosed in AV companies’ safety reports.

Decomposing Absence of Unreasonable Risk

In this paper, we begin by presenting a layered approach to safety, that is, a vertical decomposition of the notion of the absence of unreasonable risk. When we consider the safety of our autonomous driving technology, the Waymo Driver, we start with the notion that the Waymo Driver is very good at preventing undesirable behaviors that still play a major role in crashes today: it does not drive distracted, or angry, or under the influence of alcohol or other substances. Yet those avoided risks must be weighed against new risks introduced by the autonomous vehicle, risks that may be uncommon for conventional, manually driven vehicles. We thus detail a systematic risk assessment process grounded in the identification and appropriate management of three layers of hazard sources: architectural (those coming from design choices about our architecture), behavioral (those coming from our driving policy and the behavior exhibited on the road), and in-service operational (those coming from operational considerations related to fleet management and interactions with the ecosystem external to our company).
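To make this vertical decomposition more concrete, here is a minimal sketch in Python of a hazard register organized by the three layers described above. It is purely illustrative: the class names, fields, and example entries are assumptions made for this summary, not the paper's or Waymo's actual representation.

```python
from dataclasses import dataclass
from enum import Enum, auto

class HazardLayer(Enum):
    ARCHITECTURAL = auto()           # risks from design choices about the system architecture
    BEHAVIORAL = auto()              # risks from the driving policy and on-road behavior
    IN_SERVICE_OPERATIONAL = auto()  # risks from fleet management and the external ecosystem

@dataclass
class HazardSource:
    description: str
    layer: HazardLayer
    mitigations: list[str]

# Example entries, one per layer, invented here purely for illustration.
hazard_register = [
    HazardSource("Sensor degradation not detected by fault monitors",
                 HazardLayer.ARCHITECTURAL,
                 ["redundant sensing", "health monitoring"]),
    HazardSource("Overly cautious merging that disrupts surrounding traffic",
                 HazardLayer.BEHAVIORAL,
                 ["driving-policy evaluation", "simulation-based testing"]),
    HazardSource("Delayed response to a vehicle stranded in a travel lane",
                 HazardLayer.IN_SERVICE_OPERATIONAL,
                 ["fleet response procedures", "remote assistance"]),
]

def by_layer(register: list[HazardSource], layer: HazardLayer) -> list[HazardSource]:
    """Group hazards so each layer can be identified, managed, and argued over separately."""
    return [h for h in register if h.layer == layer]
```

Grouping hazards by layer in this way mirrors the idea that each source of risk can be managed with its own methods and then argued over separately within the overall safety case.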

The Dynamic Nature of Safety

Safety is not a checklist, and it can continuously be improved upon. Beyond the vertical decomposition of the potential sources of risk, it is important to look at the longitudinal, time-dependent development of the safety determination lifecycle. This lifecycle is characterized by ongoing, rather than one-time, assessments of risks and readiness, for example when we begin to drive in a new city or add a new vehicle platform for operation, all within the overall framework of our safety methodologies. This framing distinguishes three phased perspectives on safety: safety as an emergent development property; safety as an acceptable prediction and/or observation; and safety as continuous confidence growth.
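As a rough illustration of these phased perspectives, the following sketch shows how a significant change, such as entering a new city or adding a vehicle platform, re-opens assessment across all three perspectives rather than treating safety as a one-time checklist. The names mirror the summary, but the structure and the function are assumptions made here, not the paper's formalism.

```python
from enum import Enum, auto

class SafetyPerspective(Enum):
    EMERGENT_DEVELOPMENT_PROPERTY = auto()         # safety built in during design and development
    ACCEPTABLE_PREDICTION_OR_OBSERVATION = auto()  # predicted before deployment, observed in service
    CONTINUOUS_CONFIDENCE_GROWTH = auto()          # confidence accumulates as evidence accrues over time

def reassess_readiness(trigger: str) -> list[SafetyPerspective]:
    """Illustrative only: any significant change (e.g. "new city" or "new vehicle platform")
    prompts a fresh assessment across all three perspectives."""
    print(f"Re-opening safety assessment due to: {trigger}")
    return list(SafetyPerspective)
```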

Establishing Credibility

A safety case can become quite an abstract and, at the same time, highly technical concept. So how can we ensure the success of this complex endeavor? In this paper, we present Waymo's Case Credibility Assessment (CCA), which helps systematically and robustly structure the argumentation, a differentiator of our thinking that we share more broadly with the AV community for the first time. The CCA rests on two pillars, the credibility of the arguments for safety and the credibility of the evidence, reinforced through an implementation credibility check. Together, these three ingredients enable us to derive a coherent structure for our claims, which we showcase through an example within the paper.
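The sketch below is one hypothetical way to picture how the CCA's ingredients might attach to an individual claim. The field names and the simple boolean checks are assumptions made for this summary, not Waymo's actual CCA.

```python
from dataclasses import dataclass, field

@dataclass
class SafetyClaim:
    statement: str                        # e.g. "behavioral hazards are reduced to an acceptable level"
    argument: str                         # the logical reasoning offered in support of the claim
    evidence: list[str]                   # artifacts cited in support (test results, analyses, ...)
    argument_credible: bool = False       # pillar 1: is the reasoning valid and complete?
    evidence_credible: bool = False       # pillar 2: is the evidence relevant and trustworthy?
    implementation_checked: bool = False  # reinforcing check: was the method carried out as claimed?

    def credible(self) -> bool:
        """A claim contributes to the safety case only when all three ingredients hold."""
        return self.argument_credible and self.evidence_credible and self.implementation_checked
```

The point of the `credible()` check is simply that argument credibility, evidence credibility, and the implementation check all have to hold before a claim counts toward the overall determination.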

Between the lines

Autonomous driving technology can unlock a new way to conceive of the entire transportation ecosystem. The promise of this technology can only be fulfilled through broad, shared education and public acceptance. Waymo has worked hard to earn the public's trust by consistently sharing information about our safety methodologies and safety performance data while encouraging greater transparency across the entire industry. This paper further demonstrates the company's commitment both to safety and to an open, in-depth dialogue with the public and policymakers. By proposing an approach that remains methodology agnostic, we also hope to foster support from the rest of the industry, which could adopt portions or all of what is being proposed toward the common goal of improving road safety and mobility for all.

