
System Safety and Artificial Intelligence

December 6, 2022

🔬 Research Summary by Roel Dobbe, an Assistant Professor working at the intersection of engineering, design and governance of data-driven and algorithmic control and decision-making systems.

[Original paper by Roel Dobbe]


Overview: The governance of Artificial Intelligence (AI) systems is increasingly important to prevent emerging forms of harm and injustice. This paper presents an integrated system perspective based on lessons from the field of system safety, inspired by the work of system safety pioneer Nancy Leveson. For decades, this field has grappled with ensuring safety in systems subject to various forms of software-based automation. This tradition offers tangible methods and tools to assess and improve safeguards for designing and governing current-day AI systems and to point out open research, policy, and advocacy questions.


Introduction

Have you ever thought about what it takes to keep an airplane in the air and ensure you can fly around the world safely and comfortably? In addition to a well-designed plane, we rely on skilled pilots, stewards, ground personnel, regular maintenance, a spectrum of standards and procedures, various safety audits, complex air traffic management, infrastructure across the airspace, and intimate political agreements across the globe.

Now think about how we portray artificial intelligence these days. Typically: as a 'thing on its own'; as a robot or chatbot that can independently execute tasks and deliver services; as the magical solution to many current-day problems.

How do we keep AI systems safe? The dominant thinking goes like this: "As long as we feed it enough data; provide technical fixes for things like explainability or fairness; think carefully about ethical choices and implications; and provide some documentation to ensure transparency, we must be safe, right?"

Unfortunately, the answer is "no – we need a lot more." But also: "we can and should learn from the history of aviation and other safety-critical domains." This paper covers lessons from system safety and sketches what we can do to prevent harm and ensure safety in current-day AI and algorithmic systems.

Key Insights

Safety in AI is complex and "sociotechnical"

System safety arose in the 1950s in aviation and aerospace when engineered systems became too complex to safeguard with traditional safety engineering methods. Thinking in terms of 'causal event chains' leading up to a possible accident and the typical quantitative risk methods could no longer account for or capture the kinds of complex situations that could lead to a failure or accident.

Central to this complexity was the realization that accidents emerge from intimate interactions between technology, human operators or users, and the surrounding context of formal rules and norms and informal culture and behavior. Safety is, therefore, inherently a "sociotechnical" property that can only be understood and controlled across a broader system of social, technical, and institutional components.

A weighty example

To make this concrete, let's consider a recent example: the two crashes of the Boeing 737 MAX in 2018 and 2019. These were quickly reported to have emerged from the redesign of the airplane with larger and more efficient engines to compete with a competitor's new airplane model.

While the crashes could be understood as resulting from faulty sensors and an ill-designed automation system, the US House of Representatives found that there were other critical factors at play that can largely be attributed to failing management and governance, namely: (1) giving in to financial pressure to compete with the rival Airbus A320neo plane; (2) faulty assumptions about critical technology and pilot performance; (3) a culture of concealment internally at Boeing and towards the authorities; (4) conflicts of interest between Boeing and the authorities; and (5) disproportionate influence of Boeing over its oversight.

Unfortunately, these factors are well known to be constitutive of accidents, in particular for systems in which software-based automation plays a central role. Their social and managerial nature underlines the necessity of a system-theoretic lens to understand and prevent accidents and harm. 

Looking at the emerging landscape of AI safety, AI ethics, and AI policy and governance, the worrying conclusion is that most efforts do not yet interpret safety in this systemic manner. There is work to do to ensure we think about safety in effective ways and prevent AI systems from going awry in similar fashion across the many domains in which they are increasingly integrated and dominant.

Leveson’s Lessons for system safety

System safety pioneer Nancy Leveson distilled the core lessons of system safety into seven important assumptions. In my paper, I interpret these for AI systems today, providing various examples and pointing to practical tools and methods to apply system safety analysis in AI system design and governance.

  1. Shift Focus from Component Reliability to System Hazard Elimination: A well-performing AI model is not sufficient, nor is it always necessary for safety. To prevent safety issues, one needs to look at how that model is embedded in a context of use and its broader impacts and implications.
  2. Shift from Event-based to Constraint-based Accident Models: To understand (possible) accidents, look at the entire sociotechnical system in which the AI operates, including the environment enabled and governed by management and authorities.
  3. Shift from a Probabilistic to a System-theoretic Safety Perspective: Rather than touting your AI model's performance, assume that your AI system will fail and show that you know what to do when it happens to prevent harm (see the sketch after this list).
  4. Shift from Siloed Design and Operation to Aligning Mental Models: Accidents are often unfairly attributed to operator error. Instead, designers are responsible for the environment in which such errors occur and need to assume that their mental model of what the AI system can and cannot do and how to operate it will differ from that of the people they design it for.
  5. Curb the Curse of Flexibility in AI Software Development: Perhaps the most pressing lesson for modern AI systems (especially those relying on deep learning) is that the most serious safety problems arise "when nobody understands what the software should do or even what it should not do" (quote from Leveson). AI models and software are a prime source of safety hazards and should be treated as such.
  6. Translate Safety Constraints to the Design and Operation of the System: AI systems tend to migrate to states of higher risk as safety defenses degenerate, often under financial or political pressure, as we saw in the Boeing case. Therefore, ongoing feedback mechanisms are needed to learn from mistakes and anticipate potential safety hazards transparently and professionally.
  7. Build an Organization and Culture Open to Understanding and Learning: Culture is critical. Without the ability to safely share possible issues or factors that could lead to safety problems and an effective follow-up by the organization, an AI system is doomed to fail.
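
To make Lessons 3 and 6 more concrete, here is a minimal sketch, not taken from the paper, of what treating the model as fallible can look like in practice: explicit safety constraints live outside the learned model, every output is checked against them, and a predefined safe fallback is used when a constraint is violated or the model errors out. All names here (SafetyConstraint, monitored_decision, the speed example) are hypothetical illustrations, not Leveson's or Dobbe's notation.

```python
# A minimal sketch of a constraint-checking wrapper around an AI model.
# Assumption: the model proposes a numeric action; the constraints and the
# fallback action are defined and owned outside the model (Lessons 3 and 6).

from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class SafetyConstraint:
    # A system-level constraint the output must satisfy, regardless of model accuracy.
    name: str
    is_satisfied: Callable[[float], bool]


def monitored_decision(
    model_predict: Callable[[dict], float],
    constraints: List[SafetyConstraint],
    observation: dict,
    fallback_action: float,
    log: Optional[Callable[[str], None]] = print,
) -> float:
    """Return the model's action only if every safety constraint holds;
    otherwise log what happened and return a predefined safe fallback."""
    try:
        action = model_predict(observation)
    except Exception as exc:  # the model itself is a hazard source (Lesson 5)
        if log:
            log(f"model failure: {exc}; using fallback")
        return fallback_action

    for c in constraints:
        if not c.is_satisfied(action):
            if log:
                log(f"constraint violated: {c.name}; using fallback")
            return fallback_action
    return action


if __name__ == "__main__":
    # Hypothetical example: a speed-setting controller that must never command
    # more than 30 km/h in a shared pedestrian zone, however confident the model is.
    constraints = [SafetyConstraint("max 30 km/h in shared zone", lambda a: a <= 30.0)]
    risky_model = lambda obs: 45.0  # stand-in for a learned policy
    print(monitored_decision(risky_model, constraints, {"zone": "shared"},
                             fallback_action=15.0))
```

The logging in this sketch also gestures at Lesson 7: every override is an event the organization should be able to see and learn from, not a silent patch.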

Between the lines

The stakes of AI systems are high, especially where they create unsafe conditions in social domains and thereby often reify historical power asymmetries. In many ways, the harms that emerge are externalized, i.e., not treated as part of the AI system design.

A system safety perspective could help empower those subjected to ill-intentioned or carelessly built AI systems by better understanding how such harms emerge across the technical, social, and institutional elements in the broader context, thereby bringing responsible actors and powerful players into view. We have done this for aviation and medicine, too: when things go wrong, there are procedures to carefully understand what happened to prevent more harm in the future.

Often, a full spectrum of actors is involved, from the users and developers of technology to supervisory bodies, auditors, regulators, and civil society. This tells us that it takes much more than a well-performing model to responsibly and sustainably safeguard complex systems subject to software-based automation. As such, the lessons and tools from system safety help us understand what is needed to prevent new forms of harm in contexts where AI is relatively new, and to inform standards, grounded in decades of experience, that the design and governance of such systems should meet.
