✍️ Original article by Eryn Rigley, a PhD research student at the University of Southampton, specializing in the intersection of environmental and AI ethics, as well as defense & security AI ethics.
Like many others, I went to the cinema to watch Oppenheimer on the opening Friday night. And, like many others in my screening, I left quiet, absorbed, and reflective, haunted by the birth of the nuclear bomb and by Oppenheimer’s regret and mourning as he quotes, “Now I am become Death, the destroyer of worlds.” In the bathroom queue, I overheard a woman saying, “I left that movie feeling grateful that I will never have to carry that weight.” Others agreed. I stayed quiet, scared that, as an early-career researcher in machine ethics, I may at some point have to carry the weight of knowing that I have, in some way, contributed to the creation of something that cannot be undone.
Lots of moral questions bubble up when watching Oppenheimer. For me, the most pressing was whether there is such a thing as amoral or morally neutral technology. Throughout the movie, Oppenheimer convinces himself that, because he and his team merely created the nuclear bomb, he bears no responsibility for its use. In fact, he actively avoids the truth of how the weapon was used. The movie does not show how the nuclear bomb affected Hiroshima and Nagasaki. We never see what happened to the Japanese civilians; instead we stay fixed on Oppenheimer as he winces and turns away from the projections. But Oppenheimer’s attempt to close his eyes to the outcomes of his weapon neither erases what happened nor distances him from his work; it displays only a pathetic regret and remorse. He is a martyr, absorbed and haunted by the use of his creation. However, as Kitty Oppenheimer says in the movie, being a martyr for your sins does not warrant forgiveness for them.
“The technology is not evil. It is the humans who use it for evil” is a common defense of the development of disruptive AI technology. AI can, indeed, save lives just as much as it can cause harm. And it seems intuitive that, at least today, whether AI is good or bad is determined by the people who use it. We are not at the point of general AI, sentient AI, or truly autonomous systems that can exist completely independently of their human makers and users. For that reason, some may assume that we are not yet facing an existential question over whether AI can destroy the world. However, just because AI is still locked inside our computers and does not look or act like the evil robots of sci-fi movies does not mean we are safe from existential threats.
In the movie we see the calculation of whether detonating an atomic bomb would set off an unstoppable chain reaction, in effect destroying the world. The probability is put at “near zero.” We never find out how close to zero that is. Since AI cannot “wake up,” it might seem that the chances of AI destroying the world are also near zero. However, we know that the creation of both the nuclear bomb and AI has already altered the world in ways that cannot be undone.
People lose their jobs to AI, and are misinformed, threatened, and harmed by it, every day. Yet, blinded by ego and convinced of military, political, or economic necessity, we continue to build and use AI systems. Similarly, Oppenheimer pursues the Manhattan Project, excited by power and prestige and by his standing as “not just self-important, but actually important.” Ultimately, he faces the truth that he has indeed destroyed the world. The chain reaction of events after the birth of the nuclear bomb – the Cold War, the arms race, and the spread of nuclear weapons worldwide – started with Oppenheimer and cannot be undone.
I don’t believe we have created the AI equivalent of the nuclear bomb yet. But we will. And Oppenheimer is a timely warning of how fame, prestige, power, and ambition can cloud the truth and justify destructive technology. It also exposes the weakness of defending disruptive technology as “amoral” or morally neutral, and of the false belief that creating something absolves its creator of responsibility for how it is used.