
Oppenheimer As A Timely Warning to the AI Community

July 30, 2023

✍️ Original article by Eryn Rigley, a PhD research student at the University of Southampton, specializing in the intersection of environmental and AI ethics, as well as defense & security AI ethics.


Like many others, I went to the cinema to watch Oppenheimer on the opening Friday night. And, like many others in my screening, I left quiet, absorbed, and reflective, haunted by the birth of the nuclear bomb and by Oppenheimer’s regret and mourning as he quotes, “Now I am become Death, the destroyer of worlds.” In the bathroom queue, I overheard a woman saying, “I left that movie feeling grateful that I will never have to carry that weight.” Others agreed. I stayed quiet, scared that, as an early career researcher of machine ethics, I may at some point have to carry the weight of knowing that I have, in some way, contributed to the creation of something which cannot be undone.

Lots of moral questions bubble up while watching Oppenheimer. For me, the most pressing was whether there is such a thing as amoral or morally neutral technology. Throughout the movie, Oppenheimer convinces himself that merely creating the nuclear bomb does not make him responsible for its use. He, in fact, actively avoids the truth of how that weapon was used. The movie does not show how the nuclear bomb affected Hiroshima and Nagasaki. We never see what happened to the Japanese civilians but are instead stuck on Oppenheimer as he winces and turns away from the projections. But Oppenheimer’s attempt to close his eyes to the outcomes of his weapon neither erases what happened nor distances him from his work; it merely displays a pathetic regret and remorse. He is a martyr, absorbed and haunted by the use of his creation. However, as Kitty Oppenheimer says in the movie, being a martyr for your sins does not warrant forgiveness for them.

“The technology is not evil. It is the humans who use it for evil” is a common defense of the development of disruptive AI technology. AI can, indeed, save lives just as much as it can cause harm. And it seems intuitive that, at least today, whether AI is good or bad is determined by the people who use it. We are not at the point of general AI, sentient AI, or truly autonomous systems that can exist completely independently of their human makers and users. For that reason, some may assume that we are not yet facing an existential question over whether AI can destroy the world. However, the fact that AI is still locked inside our computers and does not look or act like the evil robots of sci-fi movies does not mean we are safe from existential threats.

In the movie, we see the calculation of whether detonating an atomic bomb would set off an unstoppable chain reaction, in effect destroying the world. The probability is put at “near zero.” We never find out how close to zero that is. Since AI cannot “wake up,” it might seem that the chances of AI destroying the world are also near zero. However, we know that the creation of both the nuclear bomb and AI has indeed altered the world in a way that cannot be undone.

People lose their jobs and are misinformed, threatened, and harmed by AI every day. Yet, blinded by ego and convinced by military, political, or economic necessity, we continue to build and use AI systems. Similarly, Oppenheimer pursues the Manhattan Project, excited by power and prestige and chasing status as “not just self-important, but actually important.” Ultimately, he faces the truth that he has indeed destroyed the world. The chain reaction of events after the birth of the nuclear bomb – the Cold War, the arms race, and the spread of nuclear weapons worldwide – started with Oppenheimer and cannot be undone.

I don’t believe we have created the AI equivalent of the nuclear bomb yet. But we will. And Oppenheimer is a timely warning of how fame, prestige, power, and ambition can cloud the truth and justify destructive technology. It also exposes the weakness of defending disruptive technology as “amoral” or morally neutral, and of the false perception that creating something absolves us of responsibility for how it is used.

