Montreal AI Ethics Institute

Oppenheimer As A Timely Warning to the AI Community

July 30, 2023

✍️ Original article by Eryn Rigley, a PhD research student at the University of Southampton specializing in the intersection of environmental and AI ethics, as well as defense & security AI ethics.


Like many others, I went to the cinema to watch Oppenheimer on its opening Friday night. And, like many others in my screening, I left quiet, absorbed, and reflective, haunted by the birth of the nuclear bomb and by Oppenheimer’s regret and mourning as he quotes, “Now I am become Death, the destroyer of worlds.” In the bathroom queue, I overheard a woman say, “I left that movie feeling grateful that I will never have to carry that weight.” Others agreed. I stayed quiet, scared that, as an early-career researcher in machine ethics, I may one day have to carry the weight of knowing that I have, in some way, contributed to the creation of something that cannot be undone.

Many moral questions bubble up while watching Oppenheimer. For me, the most pressing was whether there is such a thing as amoral, or morally neutral, technology. Throughout the movie, Oppenheimer convinces himself that although he and his team created the nuclear bomb, he bears no responsibility for its use. He even actively avoids the truth of how the weapon was used. The movie does not show how the bomb affected Hiroshima and Nagasaki; we never see what happened to the Japanese civilians, and are instead fixed on Oppenheimer as he winces and turns away from the projections. But Oppenheimer’s attempt to close his eyes to the outcomes of his weapon neither erases what happened nor distances him from his work; it displays only a pathetic regret and remorse. He is a martyr, absorbed and haunted by the use of his creation. Yet, as Kitty Oppenheimer says in the movie, being a martyr for your sins does not warrant forgiveness for them.

“The technology is not evil. It is the humans who use it for evil” is a common defense of the development of disruptive AI technology. AI can, indeed, save lives just as much as it can cause harm. And it seems intuitive that, at least today, whether AI is good or bad is determined by the people who use it. We are not at the point of general AI, sentient AI, or truly autonomous systems that can exist completely independently of human makers and users. And for that reason, some may assume that we are not yet facing an existential question over whether AI can destroy the world. However, just because AI is still locked inside our computers, and does not look and act like the evil robots of sci-fi movies, does not mean we are safe from existential threats.

We see in the movie the calculation of whether the deployment of an atomic bomb would cause an unstoppable chain reaction, in effect, destroying the world. The probability is put at “near zero.” We never find out how close to zero that is. Since AI cannot “wake up,” it might seem that the chances of AI destroying the world are also near zero. However, we know that the creation of both the nuclear bomb and AI has indeed altered the world in a way that cannot be undone. 

People lose their jobs, are misinformed, threatened, and harmed by AI every day. Yet, blinded by ego and convinced by military, political, or economic necessity, we continue to build and use AI systems. Similarly, Oppenheimer pursues the Manhattan Project, excited by power and prestige, chasing the status of being “not just self-important, but actually important.” Ultimately, he faces the truth that he has indeed destroyed the world. The chain reaction of events after the birth of the nuclear bomb – the Cold War, the arms race, and the spread of nuclear weapons worldwide – started with Oppenheimer and cannot be undone.

I don’t believe we have created the AI equivalent of the nuclear bomb yet. But we will. And Oppenheimer is a timely warning of how fame, prestige, power, and ambition can cloud the truth and justify destructive technology. It also exposes the weakness of defending disruptive technology as “amoral” or morally neutral, and the false belief that just because I created something, I am not responsible for how it is used.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.