Montreal AI Ethics Institute


Oppenheimer As A Timely Warning to the AI Community

July 30, 2023

✍️ Original article by Eryn Rigley, a PhD research student at University of Southampton, specializing in the intersection of environmental and AI ethics, as well as defense & security AI ethics.


Like many others, I went to the cinema to watch Oppenheimer on the opening Friday night. And, like many others in my screening, I left quiet, absorbed, and reflective, haunted by the birth of the nuclear bomb and by Oppenheimer’s regret and mourning as he recites, “Now I am become Death, the destroyer of worlds.” In the bathroom queue, I overheard a woman saying, “I left that movie feeling grateful that I will never have to carry that weight.” Others agreed. I stayed quiet, scared that, as an early career researcher of machine ethics, I may at some point have to carry the weight of knowing that I have, in some way, contributed to the creation of something which cannot be undone.

Many moral questions bubble up when watching Oppenheimer. For me, the most pressing was whether there is such a thing as amoral or morally neutral technology. Throughout the movie, Oppenheimer convinces himself that just because he and his team created the nuclear bomb, he bears no responsibility for its use. He, in fact, actively avoids the truth of how that weapon was used. The movie does not show how the nuclear bomb affected Hiroshima and Nagasaki. We never see what happened to the Japanese civilians but are instead fixed on Oppenheimer as he winces and turns away from the projections. But Oppenheimer’s attempt to close his eyes to the outcomes of his weapon neither erases what happened nor distances him from the outcomes of his work; it displays only a pathetic regret and remorse. He is a martyr, absorbed and haunted by the use of his creation. However, as Kitty Oppenheimer says in the movie, being a martyr for your sins does not warrant forgiveness for them.

“The technology is not evil. It is the humans who use it for evil” is a common defense of the development of disruptive AI technology. AI can, indeed, save lives just as much as it can cause harm. And it seems intuitive that, at least today, whether AI is good or bad is determined by the people who use it. We are not at the point of general AI, sentient AI, or truly autonomous systems that can exist completely independently of human makers and users. And for that reason, some may assume that we are not yet facing an existential question over whether AI can destroy the world. However, just because AI is still locked inside our computers, and does not look and act like the evil robots of sci-fi movies, does not mean we are safe from existential threats.

We see in the movie the calculation of whether the deployment of an atomic bomb would cause an unstoppable chain reaction, in effect, destroying the world. The probability is put at “near zero.” We never find out how close to zero that is. Since AI cannot “wake up,” it might seem that the chances of AI destroying the world are also near zero. However, we know that the creation of both the nuclear bomb and AI has indeed altered the world in a way that cannot be undone. 

People lose their jobs, are misinformed, threatened, and harmed by AI every day. Yet, blinded by ego and convinced by military, political, or economic necessity, we continue to build and use AI systems. Similarly, Oppenheimer pursues the Manhattan Project, excited by power and prestige, and pursuing his status as “not just self-important, but actually important.” Ultimately, he faces the truth that he has indeed destroyed the world. The chain reaction of events after the birth of the nuclear bomb – the Cold War, the arms race, and the spread of nuclear weapons worldwide – started with Oppenheimer and cannot be undone. 

I don’t believe we have created the AI equivalent of the nuclear bomb yet. But we will. And Oppenheimer is a timely warning of how fame, prestige, power, and ambition cloud the truth and justify destructive technology. Moreover, it exposes the weakness of defending disruptive technology as “amoral” or morally neutral, and of the false belief that creating something does not make us responsible for how it is used.



© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.