🔬 Research summary by Katherine Chandler, an assistant professor in the Walsh School of Foreign Service at Georgetown University whose research focuses on the intersection of culture, technology and politics.
[Original paper by Katherine Chandler]
[You can read coverage of this publication in the Twitter thread from UNIDIR]
Gender, as scholars have long argued, is part and parcel of existing power relationships. This means that the aims of gender equity cannot be limited to proportional representation of gender identities (though, of course, this is an important goal), but should also consider how being and knowing are coded by gender. Ostensibly neutral terms, like rationality, AI, machines or data, are tied to gender and reproduce it. For example, in debates on robotics, masculine stereotypes are used to suggest the potential dominance of “technical” systems, while feminine stereotypes prevail in depictions of how those systems might be controlled. These tropes are scaled up through references to “hard” and “soft” knowledge. Such masculine and feminine figures are simultaneously coded through race, sexuality, age and ability.
As a scholar of war, technology and culture, I have long been familiar with these arguments. Yet their prevalence was forcefully brought home to me when I struggled to find a cover for the report I recently authored for the United Nations Institute for Disarmament Research (UNIDIR), Does Military AI Have Gender? The research examines how theories of human-machine interaction can be strengthened by attending to the multiple ways humans are embodied, and it draws on interdisciplinary research on race and gender bias in commercial AI. The report argues that ethical approaches to military applications of AI must be expanded by making transparent how gender, race, age and ability will be both explicitly and implicitly encoded in machine learning systems in development for national security. It also warns of the risks associated with the increasing overlap between commercial and military applications of AI.
The first version of the cover proposed by the designers used a feminized image to represent AI. While the figure may have been intended as a blue cyborg, the lips suggested a stereotyped feminine form that was very subtly sexualized. “She” was filled in with abstract shapes drawn from the design of computer chips. I quickly wrote back to express my dismay. I objected to the feminized form as a stand-in for AI, as well as to the abstract shapes, which failed to reflect the real, lethal potential of military applications of AI. I suggested that the cover of the report should instead show military personnel interacting with technical systems, which would better reflect the content.
In the second round of images, the people shown interacting with military computer systems were heavily masculinized and white. They wore layers of gear, while the colors and gestures subtly referenced the iconic Terminator. Unfortunately, there was not enough time (or appropriate stock imagery) to design a cover that I think would have better represented the report’s content: a team of military personnel, composed of male- and female-identifying persons, interacting with a digital infrastructure. Instead, I selected the image of the male soldier and hoped that at least some readers of the report would see the cover as a resounding answer to the question it poses, “Does Military AI Have Gender?”
The struggle to find a cover for the report is a micro example of the broader challenges associated with gender and military applications of AI. Technology is not separate from social relations but rather part of them. Policy makers, military strategists and legal analysts discuss human-machine interactions, yet the picture in the backdrop of these discussions is often one of men, women and machines. In the report, I argue that the apparent neutrality of AI is contested by the complex identities that artificial intelligence will need to understand if these systems are to act like humans. The default understanding of the human as male, common to law, politics and science, troubles one of the apparent advantages of mechanical systems often touted by advocates, namely, that technologies have the potential to be more objective than people. Rather, as ethical AI advocates have already pointed out in an array of non-military applications, artificial intelligence relies on data and models that reproduce and exacerbate inequalities, privileging, for example, male voices, male pronouns and images of white males.
Consider the problems posed by a voice recognition system that fails to respond to a female pilot’s voice; a machine translation model that substitutes a neutral pronoun in one language with a male pronoun in another; or an image recognition system that fails to identify a baby as a person. These limitations are not simply engineering or policy problems but tie technology to existing cultural stereotypes and social inequities. Gender scholars have long pointed out that norms associated with males and females vary significantly depending on culture, while gender roles associated with war, technology and engineering also differ. Norms transform through their connection to race, age, sexuality and ability. What is worrisome for military applications of AI is the way machine processes may solidify contingent norms into rules, and the lethal consequences of such assumptions.
Digital divides already indicate how technology worldwide is more accessible to men and how data and machine learning models more often represent them. What happens when this data becomes a rubric for attack and protection? Facial recognition systems could make men, regardless of their actual combatant or civilian status, hyper-visible as targets. Biased data sets and inadequately trained algorithms may mean that women of color are misrecognized at a higher rate, leaving them exposed to differential risks. A group of individuals, including women and children, whose faces are partially obscured may not be identified at all. Global powers are currently spending billions of dollars on artificial intelligence, while many nations rely on other countries for data storage and collection and have limited access to the infrastructure needed to build AI. These dynamics tie military artificial intelligence to systemic global inequalities.
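To make concrete what being misrecognized “at a higher rate” would mean in practice, the short sketch below illustrates the kind of disaggregated audit that ethical-AI researchers apply to commercial recognition systems: detection failures are counted separately for each group rather than averaged across everyone. The groups, records and numbers are illustrative placeholders, not data from the report or from any real system.

```python
# A minimal, hypothetical sketch of a disaggregated audit: count how often a
# recognition system fails to detect a person, broken down by group.
# The records below are illustrative placeholders, not real data.
from collections import defaultdict

# Each record: (group, person_was_present, person_was_detected)
records = [
    ("group_a", True, True),
    ("group_a", True, True),
    ("group_b", True, False),   # a missed detection
    ("group_b", True, True),
    ("group_c", True, False),   # e.g. a face partially obscured
]

misses, totals = defaultdict(int), defaultdict(int)
for group, present, detected in records:
    if present:                 # only count cases where a person was actually there
        totals[group] += 1
        if not detected:
            misses[group] += 1

for group, total in totals.items():
    print(f"{group}: false negative rate = {misses[group] / total:.0%}")
```

An audit of this kind only surfaces a disparity; whether a system with unequal error rates should be fielded at all remains a policy question.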
Current debates under the Convention on Certain Conventional Weapons (CCW) have focused on limits to the development of autonomous weapon systems. My report affirms the recommendations of the International Committee of the Red Cross (ICRC) to ban unpredictable autonomous weapon systems and their use for human targeting. Yet these controls are not sufficient to address how gender stereotypes and inequities may become embedded in the many other applications of artificial intelligence in development for uses that include logistics, cyber warfare, intelligence collection and human resources. While these systems are not weapons, their use can have life-or-death consequences, for example in intelligence analysis. Other applications, such as in human resources, have the potential to undo gains made in gender mainstreaming: automated recruitment tools designed to seek out personnel similar to previous recruits are unlikely to diversify military forces.
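As a rough illustration of why similarity-based screening resists diversification, the hypothetical sketch below ranks applicants by how closely they resemble the average profile of past recruits, a common design for automated screening tools. Because the target is defined by the historical pool, the top of the ranking reproduces whatever composition that pool already has. The feature vectors and numbers are assumptions for illustration only, not a description of any deployed system.

```python
# Hypothetical sketch: a recruitment screener that ranks applicants by how
# closely they resemble the average profile of previous recruits.  Because the
# target is defined by the historical pool, the ranking favors applicants who
# look like that pool; it cannot diversify by design.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative feature vectors (e.g. CV keywords, test scores, service history).
# Past recruits are assumed to cluster in one region of feature space.
past_recruits = rng.normal(loc=1.0, scale=0.2, size=(200, 5))
applicants = rng.normal(loc=0.0, scale=1.0, size=(50, 5))

prototype = past_recruits.mean(axis=0)   # "ideal candidate" = average past recruit

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = np.array([cosine(a, prototype) for a in applicants])
ranked = np.argsort(scores)[::-1]        # highest similarity first

# The top of the list is, by construction, the applicants most like past recruits.
print("applicants ranked by similarity to past recruits:", ranked[:5])
```

Diversifying the outcome requires changing the objective itself, rather than expecting the similarity score to do it on its own.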
Does Military AI Have Gender? underlines the importance of evaluating and testing military applications of artificial intelligence over the life-cycle of the technology, and of clearly communicating the findings to all potential users. Experts from a range of cultural and disciplinary backgrounds should be part of these evaluations, particularly scholars and practitioners who take intersectional approaches to gender, race, age and ability. More research is needed on the limitations of AI in representing and responding to children, non-binary gender identities and persons with different abilities, themes that remain underdeveloped in my report but have significant consequences for potential military applications. I am also particularly concerned about the potential weaponization of data from refugees and post-conflict scenarios. While applications of AI for peacebuilding and relief efforts are important counterbalances to AI weapons (and should form a much larger part of current defense budgets), measures must be developed to safeguard this data and prevent harmful uses of such tools.
I do not think there are any foregone conclusions about military AI; what these systems are and how they will be used is still being shaped. In these early stages of development, there are opportunities to transform military applications of AI and to impose limits that uphold humanitarian law and human rights. For this to happen, however, the international community needs to make clear the risks and shortcomings associated with military applications of artificial intelligence. Rather than assume the neutrality of technology, policy makers should acknowledge how human-machine interactions can reproduce and exacerbate existing inequalities. Militaries should be required to disclose how proposed applications of AI will interact with gender, race, age and ability. Systems that fail to adequately account for the diversity of global identities should be placed under moratoria. Finally, United Nations Security Council Resolution 1325 on Women, Peace and Security should be updated for the digital era to include cybersecurity and military applications of AI.