🔬 Original article by Azfar Adib, who is currently pursuing his PhD in Electrical and Computer Engineering at Concordia University in Montreal. He is a Senior Member of the Institute of Electrical and Electronics Engineers (IEEE).
There is a saying: "all is fair in love and war". For war in particular, the ethical questions have been puzzling for centuries. There remain some basic humane principles to uphold even during wartime, as defined by international laws such as the Geneva Conventions. Yet we see those principles violated all too often, as we are now observing during the Russian invasion of Ukraine.
At the same time, wars boost the arms business, and in this era of automation that boost extends to parts of the AI industry as well. Unmanned Aerial Vehicles (UAVs), or drones, are being widely used in the ongoing Russia-Ukraine war. While exact figures are not yet available, drone sales to the regions involved are believed to have risen sharply, and they will likely continue to rise as many Western nations increase their military expenditure. In fact, drones have been a major catalyst in several recent wars; the Azerbaijan-Armenia war of 2020 is a solid example [1].
Wars often play a driving role in advancing technology; the two world wars are prime examples. So it is not unlikely that certain AI mechanisms will advance further as a side effect of the ongoing war. That, however, is never an excuse or justification when set against the immense destruction and suffering this war is causing to so many people. Its far-reaching and long-lasting consequences for the entire world are alarming enough, to the point that we now even hear talk of nuclear threats.
So, let us talk about two people who saved the world from a nuclear catastrophe during the Cold War era.
The first was Vasili Arkhipov, a Soviet naval officer who was on board the submarine B-59 near Cuba at the height of the Cuban Missile Crisis between the USA and the USSR. On 27 October 1962, the crew of the submarine found themselves in great trouble. They were facing continuous non-lethal attacks from the US warships surrounding them, they had lost communication with their command centre, and they were trapped in deep water without power supply. However, unknown to the US forces, they carried a special weapon: a ten-kiloton nuclear torpedo that could cause more destruction than the atomic bomb dropped on Hiroshima. Fearing capture or death, and unaware of the actual situation, a senior officer on the submarine, Valentin Savitsky, decided to launch the nuclear torpedo. According to Soviet protocol, however, all three senior officers on board had to agree to deploy the weapon. Luckily for the world, another senior officer, Vasili Arkhipov, refused to sanction the launch, preferring to wait until more accurate information could be gathered. As a result, the nuclear torpedo was never fired. The submarine ultimately surfaced, where it was met by the US fleet, which (still unaware of its nuclear arsenal) allowed it to return safely to the Soviet Union. Had the torpedo been launched that day, the fate of the world would have been very different, as it would probably have triggered a nuclear war and global devastation [2].
The second was Stanislav Petrov, a lieutenant colonel in the Soviet Air Defense Forces, who was serving as duty officer at an early-warning centre near Moscow in 1983, again a time of heightened tension between the US and the USSR. On 26 September 1983, while on duty, Petrov suddenly heard alarms ringing: the radar screens warned that five ballistic missiles had been launched from the US, aimed at the USSR. According to protocol, he should have immediately reported the strike to the Soviet leadership, and given the political climate of the time, the possibility of a retaliatory launch was extremely high. Again, fortunately for the world, Petrov decided to do otherwise, even after five unambiguous alarms. After careful thought, he reported a system malfunction instead of a missile strike. His decision was based on instinct as well as reasoned assumptions, such as the newness of the system. And indeed it was so: it was later found that the alarm had been triggered because the observation satellites had mistaken sunlight reflecting off the tops of clouds for a missile launch. Petrov's extraordinary decision prevented a nuclear catastrophe [3].
These two unsung heroes, Vasili Arkhipov and Stanislav Petrov, saved the earth from destruction by making the right judgments at extremely critical moments. Now, if an AI agent (a robot or an algorithm) had been making the same decisions in their place, what might have happened?
In the first scenario, the one Arkhipov faced, could an AI agent have shown similar restraint while receiving no instructions and facing deadly threats? That is difficult to say. In the second scenario, the one Petrov faced, it is quite likely that an AI agent would simply have reported the alarms of five incoming missiles, as that would appear to be the logical response.
How, then, does AI ethics fit into warfare? It is true that automation can reduce the human casualties of war in several ways: enhanced precision in firing can reduce collateral damage, unmanned vehicles spare precious human lives, and robots have long been used to sweep for deadly mines on land and in water.
But the big question is: how far should AI be empowered in wartime decision-making? Different theories have been proposed over time. A summary of them is given below [4]:
Deontology Theory: This theory holds that the morality of an action should be judged by whether the action itself is right or wrong under a set of rules, rather than by its consequences. A deontologist view of AI would be shaped by the moral law or duty of the culture or country in which the AI operates. In the contexts of Arkhipov and Petrov described above, a deontological AI agent would simply have acted according to its pre-set moral rules, without weighing the ultimate consequences.
Utilitarianism Theory: Utilitarians, almost the opposite of deontologists, weigh the consequences of actions more than the ethics of the action itself, evaluating each option to achieve the greatest balance of good over bad. The actions ultimately taken by Arkhipov and Petrov were broadly utilitarian: they considered the far-reaching consequences and tried to ensure the greater good.
Contract Theory: Contract theory is another interesting concept. It holds that no person is naturally so strong as to be free from fear of another, and none so weak as to pose no threat, so rules of conduct arise from mutual agreement. Accordingly, it argues that AI policy must be aligned with the general desire of the populace. So whatever action an AI agent took in the scenarios of Arkhipov and Petrov, it would simply follow the general expectations of the corresponding population; considerations of moral law or ultimate consequence would not enter into it.
Virtue Theory: Virtue ethicists take a different perspective. They argue that without true consciousness, AI agents can never act in an ethical manner and should therefore never operate autonomously on the battlefield. Only humans should make critical decisions, since humans can be held ethically accountable while AI agents cannot. The "Campaign to Stop Killer Robots" is a good example of such activism, and it has gained significant momentum recently [5]. From this perspective, an AI agent should never occupy a crucial position like those of Arkhipov and Petrov.
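To make the contrast between these frameworks more concrete, here is a minimal, purely hypothetical sketch in Python of how they might be encoded as decision policies. Everything in it, including the situation fields, the rules, and the numbers, is invented for illustration and does not describe any real system.

```python
# Hypothetical sketch only: toy decision policies illustrating how the
# ethical frameworks above might be encoded. All names, rules, and
# figures below are invented for illustration.

from dataclasses import dataclass


@dataclass
class Situation:
    """A simplified snapshot of what the agent can observe."""
    rules_permit_strike: bool        # e.g. "protocol conditions for firing are met"
    warning_confidence: float        # how reliable the warning seems (0.0 to 1.0)
    estimated_harm_if_strike: float  # rough harm estimate if the agent acts
    estimated_harm_if_wait: float    # rough harm estimate if the warning is real and the agent waits


def deontological_agent(s: Situation) -> str:
    """Acts purely from the pre-set rule, ignoring consequences."""
    return "strike" if s.rules_permit_strike else "hold"


def utilitarian_agent(s: Situation) -> str:
    """Weighs expected outcomes and chooses the lesser expected harm."""
    expected_harm_if_wait = s.warning_confidence * s.estimated_harm_if_wait
    return "strike" if s.estimated_harm_if_strike < expected_harm_if_wait else "hold"


def virtue_ethics_agent(_: Situation) -> str:
    """Refuses to act autonomously and defers to a human officer."""
    return "defer to human"


if __name__ == "__main__":
    # A Petrov-like moment: the rulebook would permit acting, but the warning
    # comes from a new, possibly unreliable system, and acting would be
    # catastrophic. The numbers are arbitrary placeholders.
    petrov_moment = Situation(
        rules_permit_strike=True,
        warning_confidence=0.3,
        estimated_harm_if_strike=1_000_000.0,
        estimated_harm_if_wait=100_000.0,
    )
    print("Deontological agent:", deontological_agent(petrov_moment))  # strike
    print("Utilitarian agent:  ", utilitarian_agent(petrov_moment))    # hold
    print("Virtue-ethics agent:", virtue_ethics_agent(petrov_moment))  # defer to human
```

In this Petrov-like toy scenario, the rule-following agent acts because the protocol says it may, the utilitarian agent holds because the expected harm of acting outweighs the expected harm of waiting, and the virtue-ethics agent refuses to decide at all.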
While theories enlighten and guide us, real-life application remains the ultimate decider. So, moving ahead, the ethics of AI during warfare can be expected to keep evolving as a compelling topic.
References
[3] https://www.bbc.com/news/world-europe-24280831
[5] https://www.stopkillerrobots.org/
[6] https://mitpress.mit.edu/books/new-fire