🔬 Original article by Chloé Currie, Itai Epstein, Dane Malenfant, and Cella Wardrop from Encode Justice Canada
This is a part of our Recess series in which university students from across Canada briefly explain key concepts in AI that young people should know about: specifically, what AI does, how it works, and what it means for you. The writers are members of Encode Justice Canada, a student-led advocacy organization dedicated to including Canadian youth in the essential conversations about the future of AI.
Introduction
Chloé Currie
War has historically been thought of as direct armed conflict characterized by violence and physical aggression. Today, however, new forms of military weaponry and technology have broadened the traditional definition of war to include more indirect approaches. Artificial Intelligence (AI) has been incorporated into warfare through lethal autonomous weapons systems, small arms and light weapons, and three-dimensional (3D) printing. These current uses of AI in military weaponry facilitate conversations about the ethical dimensions of AI's role in war. The militarization and securitization of AI highlight the ever-changing nature of, and the fears associated with, technological advancements.
A History of the Use of Technology in Warfare
Itai Epstein
Before computing became mainstream, technology and scientific advancements were essential to militarization. Used to gain the upper hand in conflicts both small and large, wartime advances have played a significant role in creating and commercializing the under-appreciated technology we use today.
Technological innovation in combat began with the invention of gunpowder. Traced back to medieval China, experiments with a medicinal compound enabled a “bullet” to be shot up to a range of 10 feet. [1] By the time gunpowder reached Europe in the 13th century, its preparation had been documented by English philosopher Roger Bacon. As the method became more widely known, gunpowder was integrated into siege warfare. It was refined over time, and in the 16th century the musket was created, forever changing the field of battle by extending it to long range. [2]
Over time, the musket evolved into the modern firearm, giving way to several derivatives, each suited to specific battle tactics. During World War I, military vehicle technology advanced significantly. The British rolled the first tanks onto the battlefield, introducing armoured vehicles to combat. Shortly after, submarines and airplanes were deployed, solidifying the age of mechanized warfare. [3]
During World War II, wartime innovation extended beyond conventional weaponry. The atomic bomb, first developed by the United States (U.S.), became the defining scientific advancement of the era. Nuclear weapons ended the Second World War and triggered a worldwide arms race among nations vying for power. During this time, the first computer was also developed. Unlike the modern computers we know and use today, the first “computer” was analog, not digital, and was built to automate the deciphering of secret, encoded messages.
Following its creation, computing became the way of the future. Nations knew they needed computing supremacy to remain a dominant and relevant force. This need for power prompted a vast increase in technological funding. Implementing the benefits of computation into standard military operations became the theme of the Cold War. [4]
The Soviet Union set to work on its own atomic bomb, eventually matching the U.S. in firepower. The U.S. then announced its development of the hydrogen bomb, which became the new standard for a weapon of mass destruction. With these weapons, the stakes of another war became very high, and tensions between the two countries steadily escalated as each tried to gain the upper hand. When the Soviet Union launched the first satellite, Sputnik, the U.S. countered with its own, Explorer I. These satellites prompted the space race, which culminated in the U.S. landing the first man on the moon during the Apollo 11 mission. During this period, the U.S. also heavily increased its investments in military research. These investments gave way to the guided missile, which allowed the U.S. to pose threats at international range, and inspired the development of drones, which have proven crucial to taking down enemy firepower from afar. [5]
Throughout history, the advancement of technology has been justified as a means of gaining supremacy in war. As trench warfare disappeared, the domain of combat shifted from the physical to the digital. Technology initially used to create stronger weapons was eventually applied to vehicles, starting the age of mechanized warfare. As battlefield advancements stagnated, scientific research funding increased heavily, producing nuclear weapons and other arms of mass destruction; computer research grew in turn. These developments paved the way for the automation of military tasks. Technology-assisted warfare also generated communications and information-analysis capabilities that defied what unassisted humans could do. [6] Presently, AI is being experimented with in aggressive contexts. Future wars may be fought through computers, testing the capabilities of drone warfare, remote-controlled vehicles, and robots. Soon, humans may be out of the loop of war entirely, as AI and cybersecurity become the standards by which a nation is considered a superpower.
Current Uses of AI in Military Weapons
Dane Malenfant
The increasing development and use of lethal autonomous weapons (LAWs) have raised ethical concerns that an AI arms race could heighten current tensions between militaries and enable proliferation by malicious non-state actors. [7] Yet the first documented use of a LAW came only in 2021, during the Libyan civil war. [8] While LAWs have dominated headlines, the factory automation of weapons manufacturing, for both conventional and autonomous products, has largely gone unnoticed. Small arms and light weapons (SALWs) contributed to an estimated half of all violent deaths between 2010 and 2015, [9] and their factory manufacturing has grown more efficient with the use of AI. The prevalence of 3D printing (which can also be enhanced with AI techniques) corresponds with the development of unlicensed craft weaponry.
Relative to killer robots, SALWs are easy to obtain: as of 2017, approximately one billion firearms were in circulation, and their prevalence has been linked to rising homicide rates. [10] They are significantly implicated in empowering authoritarian governments, militias, gangs, and terrorist organizations. [11] LAWs, on the other hand, have mainly been used by militaries rather than non-state actors, patrolling areas, “divebomb[ing] at enemy radar signals,” or serving as automated sentry guns. [12] Given the sheer prevalence of SALWs compared to LAWs, the impact of conventional firearms remains considerable, though it could one day be overshadowed by advancements in autonomous weapons systems.
The process of modern factory automation can be traced to the 1910s, when Henry Ford automated his car production line. [13] During and after the Second World War, automation became a standard feature of both U.S. and Japanese factories. [14] Industrial automation can now use AI to optimize supply chains and production, machine maintenance, and workplace safety. [15] This use of AI and automation is not necessarily harmful: it has even enabled the automated disassembly of weaponry. [16] However, when these optimizations are applied to the manufacturing of SALWs, they do contribute to harm, and the effect of AI on these industries is not well understood.
Craft guns, or improvised firearms, are made from scrap weapons parts and other metals. They have caused concern because they are uncounted and unregistered with authorities, though they are usually crude. [17] Craft gun parts have also been made with 3D printers, some of which cost only $500 USD, albeit with limited capabilities. [18] The original 3D-printed guns, like the single-shot Liberator, had limited usability and broke down frequently. [19] However, the illicit creation of 3D-printed firearms and parts for crime has increased. [20] Although details on the creation of these weapons are still limited, the 3D printing process has already benefited from AI for real-time error correction. [21] Knowledge of how AI is used in commercial 3D printers is essential to understanding the changes in military weapons.
Given the prevalence of SALWs in global and local conflicts, the lack of information on the optimization of factory production, and the use of 3D printing to craft weapons, conversations surrounding AI and weaponry should include conventional weapons manufacturing.
The Ethical Implications of AI Use in Military and Security
Cella Wardrop
AI has been increasingly used in military and security operations, raising ethical concerns. Canada is a global leader in ethical AI implementation, praised for co-founding the International Panel on Artificial Intelligence in 2018 and for being the first country to outline a national strategy for artificial intelligence. [22] However, Canada’s guidelines for responsible AI are less clear when it comes to military implementations. [23]
AI Ethics Guidelines
The Canadian government’s guiding principles for the “Responsible use of artificial intelligence (AI)” outline the government’s commitment to measurable and transparent AI use by trained government employees. [24] However, Floridi and Cowls of Oxford University detail a “Unified Framework of Five Principles for AI in Society” that applies more readily to AI’s use in the military. [25] The framework highlights the importance of AI adhering to principles of beneficence, non-maleficence, autonomy, justice, and explicability. [26] These ethical AI principles aim to maintain human autonomy and decision-making capabilities while recognizing and promoting the critical role AI can play in bettering society. [27]
AI in the Military
When considering AI integration into the military, it is essential to recognize that military operations are inherently high-risk: the military has control over the lives and well-being of large populations. Thus, it is vital to understand the capabilities of AI and the roles AI should, or should not, play in a military setting. Canadian military ethics codes highlight the importance of specific interpersonal skills and values; integrity, loyalty, and courage, for example, are recurring requirements. [28] If AI is to be implemented in place of a human’s job, it is important to consider the extent to which AI can adhere to these ethical guidelines.
Some may argue that AI has the potential to fulfill these interpersonal military ethics requirements: researchers are working to create robots with a capacity for social skills, and some AI systems can mimic human behaviour when interacting with people. [29] However, AI ethics guidelines such as Floridi and Cowls’s lead us to ask whether AI should play a military role large enough to require the human values of integrity, loyalty, and courage. If military AI implementation maintains its role as a tool that supports human autonomy, is traceable, and is used for justice and benevolence, then it can adhere to these military ethics guidelines.
Furthermore, it is crucial to consider the potential uses of AI by the military and the state. Because AI can reproduce social biases and be discriminatory, unregulated military use of AI risks producing biased outcomes. [30] AI used in the military must be consciously designed to minimize the reproduction of social bias and avoid discrimination. As Dr. Joy Buolamwini, founder of the Algorithmic Justice League, points out, diversifying datasets and tech workforces would reduce the likelihood of social biases being reproduced in algorithms. [31]
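The mechanism by which an algorithm reproduces social bias can be shown with a minimal sketch. The dataset, the `naive_model` helper, and all numbers below are invented for illustration; real military or hiring systems are far more complex, but the underlying pattern is the same: a model fit to biased historical decisions re-encodes those decisions as its own rule.

```python
# Invented historical decisions: (group, approved) pairs in which
# group "B" was approved far less often than group "A" for
# otherwise identical applicants.
history = (
    [("A", True)] * 80 + [("A", False)] * 20 +
    [("B", True)] * 40 + [("B", False)] * 60
)

def selection_rate(records, group):
    """Fraction of applicants in `group` who were approved."""
    decisions = [approved for g, approved in records if g == group]
    return sum(decisions) / len(decisions)

def naive_model(records):
    """A toy 'model' that learns each group's historical approval
    rate and approves a group only if that rate exceeds 50%."""
    groups = {g for g, _ in records}
    return {g: selection_rate(records, g) > 0.5 for g in groups}

model = naive_model(history)
print(selection_rate(history, "A"))  # 0.8
print(selection_rate(history, "B"))  # 0.4
print(model["A"], model["B"])        # True False: the bias is now a rule
```

Nothing in the code mentions group identity as a criterion, yet the learned rule systematically disadvantages group "B" because the training data did. This is why diversifying and auditing datasets, as Buolamwini argues, matters before deployment.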
To address the concerns of this article, Canada should outline military-specific guidelines for the ethical implementation of AI. These guidelines should prioritize the conservation of human autonomy and responsibility for AI used in the military. Additionally, AI used in the military should not be implemented if proven to be discriminatory against marginalized groups, such as visible minorities and women.
Conclusion
Chloé Currie
The militarization and securitization of AI and other machine technologies highlight the changing approaches to warfare in the 2020s. The introduction of AI into military weapons evokes a fear reminiscent of the Cold War, particularly the Space Race, in which conflict between nations played out not through direct military action but through a competition for technological advancement. The integration of AI into military weapons raises ethical concerns about the social biases embedded in AI programs, but Canada can address these issues by prioritizing an interdisciplinary approach and seriously considering the ethics of automated militarization.
Notes
[1] Alius. (2018, January 30). A Brief History of Modern Warfare Technology: From Gunpowder to Drones. Technology Org. Retrieved March 15, 2022, from https://www.technology.org/2018/01/30/a-brief-history-of-modern-warfare-technology-from-gunpowder-to-drones/.
[2] Ibid.
[3] Ibid.
[4] Malloryk. (2020, July 31). The scientific and technological advances of World War II: The National WWII Museum: New Orleans. The National WWII Museum | New Orleans. Retrieved March 15, 2022, from https://www.nationalww2museum.org/war/articles/scientific-and-technological-advances-world-war-ii.
[5] Edwards, P.N. (1997). Why Build Computers?: The Military Role in Computer Research. In The Closed World. The MIT Press. https://doi.org/10.7551/mitpress/1871.003.0004.
[6] Ibid.
[7] Zhang, B. (2021, October 7). Public Opinion Toward Artificial Intelligence. https://doi.org/10.31219/osf.io/284sm.
[8] Majumdar Roy Choudhury, L., Aoun, A., Badawy, D., de Alburquerque Bacardit, L. A., Marjane, Y., & Wilkinson, A., Letter dated 8 March 2021 from the Panel of Experts on Libya established pursuant to resolution 1973 (2011) addressed to the President of the Security Council (2021). Retrieved from https://documents-dds-ny.un.org/doc/UNDOC/GEN/N21/037/72/PDF/N2103772.pdf?OpenElement.
[9] Thrall, A., & Cohen, T. (2022). 2021 Arms Sales Risk Index. Cato Institute. https://www.cato.org/study/2021-arms-sales-risk-index.
[10] Ibid.
[11] Ibid.
[12] Vynck, G. D. (2021, August 13). The U.S. says humans will always be in control of AI weapons. but the age of autonomous war is already here. The Washington Post. Retrieved February 21, 2022, from https://www.washingtonpost.com/technology/2021/07/07/ai-weapons-us-military/.
[13] Torrero, E. A. Automating the production line: Henry Ford began it all when he designed the first car assembly line in 1914, in IEEE Spectrum, vol. 14, no. 11, pp. 71-72, Nov. 1977, doi: 10.1109/MSPEC.1977.6501657.
[14] Boisset, F. (2018, May 24). The history of Industrial Automation in manufacturing. Automate. Retrieved from https://www.automate.org/editorials/the-history-of-industrial-automation-in-manufacturing.
[15] Spurr, R. (2021, March 29). How artificial intelligence is transforming manufacturing. The Manufacturer. Retrieved from https://www.themanufacturer.com/articles/ai-transforming-manufacturing/.
[16] DeSmet, J. (2021, December 1). Robots Automate Disassembly of Chemical Weapons. ASSEMBLY. https://www.assemblymag.com/articles/96763-robots-automate-disassembly-of-chemical-weapons.
[17] Small Arms Survey. (2018, June 18). Global Firearms Holdings. Small Arms Survey. Retrieved from https://www.smallarmssurvey.org/database/global-firearms-holdings
[18] Daly, A., Mann, M., Squires, P., & Walters, R. (2021). 3D printing, policing and crime. Policing and Society, 31(1), 37–51. doi:10.1080/10439463.2020.1730835.
[19] Veilleux-Lepage, D. Y. (2021, November 15). Ctrl, hate, print: Terrorists and the appeal of 3D-printed weapons. ICCT. Retrieved February 21, 2022, from https://icct.nl/publication/ctrl-hate-print-terrorists-and-the-appeal-of-3d-printed-weapons/.
[20] Ibid.
[21] A 3-D printer powered by machine vision and artificial intelligence. (2019, June 4). MIT News | Massachusetts Institute of Technology. https://news.mit.edu/2019/inkbit-3d-printer-0604.
[22] Mandate for the International Panel on Artificial Intelligence. Prime Minister of Canada. (2018, December 6). Retrieved March 7, 2022, from https://pm.gc.ca/en/news/backgrounders/2018/12/06/mandate-international-panel-artificial-intelligence; UNESCO. (2018, November 22). Canada first to adopt strategy for artificial intelligence. United Nations Educational, Scientific and Cultural Organization. Retrieved March 7, 2022, from http://www.unesco.org/new/en/member-states/single-view/news/canada_first_to_adopt_strategy_for_artificial_intelligence/.
[23] Marijan, B. (2019, May 13). More clarity on Canada’s views on military applications of artificial intelligence needed. Project Ploughshares. Retrieved March 7, 2022, from https://ploughshares.ca/2019/05/more-clarity-on-canadas-views-on-military-applications-of-artificial-intelligence-needed/.
[24] Ibid.
[25] Floridi, L., & Cowls, J. (2019, July 1). A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review. Retrieved March 7, 2022, from https://hdsr.mitpress.mit.edu/pub/l0jsh9d1/release/7.
[26] Ibid.
[27] Ibid.
[28] Floridi, L., & Cowls, J. (2019, July 1). A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review. Retrieved March 7, 2022, from https://hdsr.mitpress.mit.edu/pub/l0jsh9d1/release/7.
[29] Zewe, A. (2021, November 5). Giving robots social skills: A new machine-learning system helps robots understand and perform certain social interactions. MIT News | Massachusetts Institute of Technology. Retrieved March 13, 2022, from https://news.mit.edu/2021/robots-social-skills-1105.
[30] Miller, C. C. (2015, July 9). When Algorithms Discriminate. The New York Times. Retrieved March 12, 2022, from https://www.nytimes.com/2015/07/10/upshot/when-algorithms-discriminate.html.
[31] Buolamwini, J. (2019, February 7). Artificial Intelligence has a racial and gender bias problem. Time. Retrieved March 15, 2022, from https://time.com/5520558/artificial-intelligence-racial-gender-bias/.