Oppenheimer As A Timely Warning to the AI Community

July 30, 2023

✍️ Original article by Eryn Rigley, a PhD research student at the University of Southampton specializing in the intersection of environmental and AI ethics, as well as defense and security AI ethics.


Like many others, I went to the cinema to watch Oppenheimer on the opening Friday night. And, like many others in my screening, I left quiet, absorbed, and reflective, haunted by the birth of the nuclear bomb and by Oppenheimer’s regret and mourning as he quotes, “Now I am become Death, the destroyer of worlds.” In the bathroom queue, I overheard a woman saying, “I left that movie feeling grateful that I will never have to carry that weight.” Others agreed. I stayed quiet, scared that, as an early career researcher of machine ethics, I may at some point have to carry the weight of knowing that I have, in some way, contributed to the creation of something which cannot be undone.

Lots of moral questions bubble up when watching Oppenheimer. For me, the most pressing was whether there is such a thing as amoral or morally neutral technology. Throughout the movie, Oppenheimer convinces himself that just because he and his team created the nuclear bomb, he bears no responsibility for its use. He, in fact, actively avoids the truth of how that weapon was used. The movie does not show how the nuclear bomb affected Hiroshima and Nagasaki. We never see what happened to the Japanese civilians but are instead stuck on Oppenheimer as he winces and turns away from the projections. But Oppenheimer’s attempt to close his eyes to the outcomes of his weapon neither erases what happened nor distances him from his work; it merely displays a pathetic regret and remorse. He is a martyr, absorbed and haunted by the use of his creation. However, as Kitty Oppenheimer says in the movie, being a martyr for your sins does not warrant forgiveness for them.

“The technology is not evil. It is the humans who use it for evil” is a common defense of the development of disruptive AI technology. AI can, indeed, save lives just as much as it can cause harm. And it seems intuitive that, at least today, whether AI is good or bad is determined by the people who use it. We are not at the point of general AI, sentient AI, or truly autonomous systems that can exist completely independently of human makers and users. And for that reason, some may assume that we are not yet facing an existential question over whether AI can destroy the world. However, just because AI is still locked inside our computers, and does not look and act like the evil robots of sci-fi movies, does not mean we are safe from existential threats.

We see in the movie the calculation of whether the deployment of an atomic bomb would cause an unstoppable chain reaction, in effect, destroying the world. The probability is put at “near zero.” We never find out how close to zero that is. Since AI cannot “wake up,” it might seem that the chances of AI destroying the world are also near zero. However, we know that the creation of both the nuclear bomb and AI has indeed altered the world in a way that cannot be undone. 

People lose their jobs, are misinformed, threatened, and harmed by AI every day. Yet, blinded by ego and convinced by military, political, or economic necessity, we continue to build and use AI systems. Similarly, Oppenheimer pursues the Manhattan Project, excited by power and prestige and eager to become “not just self-important, but actually important.” Ultimately, he faces the truth that he has indeed destroyed the world. The chain reaction of events after the birth of the nuclear bomb – the Cold War, the arms race, and the spread of nuclear weapons worldwide – started with Oppenheimer and cannot be undone.

I don’t believe we have created the AI equivalent of the nuclear bomb yet. But we will. And Oppenheimer is a timely warning of how fame, prestige, power, and ambition can cloud truth and justify destructive technology. Moreover, it exposes the weakness of defending disruptive technology as “amoral” or morally neutral, and of the false belief that just because I created something, I am not responsible for how it is used.
