Montreal AI Ethics Institute


Democratizing AI ethics literacy


Why reciprocity prohibits autonomous weapons systems in war

May 28, 2023

🔬 Research Summary by Joshua Brand, a PhD student at Télécom Paris-Institut Polytechnique de Paris whose thesis focuses on human-centred explainable artificial intelligence and the virtue of vulnerability.

[Original paper by Joshua L.M. Brand]


Overview: Conversations about, and the likely deployment of, autonomous weapons systems (AWS) in warfare have advanced considerably. This paper analyzes the morality of deploying AWS and ultimately argues for a categorical ban. It arrives at this conclusion without appealing to hypothetical technological advances, focusing instead on the relationship between humans and machines in the context of the international humanitarian law principles of jus in bello.


Introduction

While the technology behind AWS has yet to reach a point where it could be widely deployed, many of the world’s largest militaries have been rapidly funding its development; the US, for example, spent US$18 billion between 2016 and 2020, and there have already been reported uses of AWS in Libya in 2020. Once the fantastical “killer robot” of science fiction, AWS have become an impending reality. Ethical concerns have subsequently been raised around issues such as the disproportionality and lack of accountability involved in using AWS on the battlefield.

However, this paper determines AWS’s suitability by considering whether they can fit within the standards of jus in bello. I argue that the jus in bello principles of proportionality and discrimination both give rise to the duty to respect humans as ends-in-themselves, and that this duty entails the necessity of sustaining a reciprocal reason-giving relationship. Given this prerequisite of preserving reciprocity, AWS cannot be deployed without threatening the established moral standards of warfare conduct.

Key Insights

The moral foundation of jus in bello

The paper begins by setting out the foundational argument that jus in bello, the widely adopted body of international humanitarian law guiding how wars ought to be fought, is best understood as deontological: an action is morally good when it adheres to prescribed duties of conduct. One of the most famous such duties, attributed to Immanuel Kant and from which I argue jus in bello is derived, is that all rational beings must be treated as ends-in-themselves and never as mere means. In other words, jus in bello appeals to a duty to respect and recognize the humanity of all involved in warfare. Without this primary acknowledgment, the guiding principles of warfare would not exist as they do.

Existing AWS Literature

I then turn to the existing ethical literature on AWS. Considering that AWS are weapons that navigate, select, and engage targets without meaningful human oversight, Peter Asaro and Robert Sparrow argue for at least a current ban on AWS. Both focus on the need to recognize and respect the humanity of the potential target. Because machines currently cannot appreciate the intrinsic value of a human being or give reasons for their actions, Asaro argues that they cannot meaningfully engage with their targets and thus cannot replace human combatants; Sparrow emphasizes the need for an interpersonal relationship in which the combatant understands the target’s humanity and does not treat them as just another object of warfare. Delegating the combatant role to AWS would sever this interpersonal relationship.

These accounts focus on the obligation to recognize potential targets as intrinsically valuable autonomous beings and not mere objects. With current technology, machines essentially only “see” a collection of data points in the shape of a human being. AWS therefore currently fail the requirement of acknowledging and respecting the unique status of humanity.

The emphasis here is on “currently.” Asaro and Sparrow appeal to existing technology and do not justify a categorical ban. They leave open the possibility that if AI progresses sufficiently to recognize and respect its target’s intrinsic value, their ban on AWS would be void. Ronald Arkin uses this openness to technological progress to argue in support of AWS.

Further, these accounts problematically present the combatant-target relationship as unidirectional: they examine only how AWS, acting as the combatant, understand the target.

Reciprocity

Addressing these concerns, I present the key argument of reciprocity. The ban on AWS becomes categorical when the combatant-target relationship is understood as reciprocal rather than unidirectional.

To explain what this means, I rely on the work of Christine Korsgaard, a contemporary Kantian philosopher, to link this notion of reciprocity with moral duties, such as those embedded within jus in bello. If any duty is to be objectively true, it must have authority over everyone, including all those involved in warfare. This means that any person, irrespective of the role in which they find themselves, must be equally bound by the obligations of the moral law. For example, a civilian in a war zone can demand that combatants recognize their standing as an innocent bystander and therefore have a reason not to target them. If we stop here, however, the moral duty is not yet complete. The civilian must also concede that, should the roles be reversed, the combatant-turned-civilian could make the same moral claims. The duty is a shared and reciprocal authority that reigns over every person, irrespective of their contextual position.

Embedded within this shared reciprocal authority is the consequence of constitutive symmetry—to conceptually reverse roles as required by any moral duty, both entities involved must be constitutively equal. Even if they replicate human capabilities to a high standard, machines will never be on equal footing with humans; most importantly, machines are not concerned with mortality the way humans are. AWS, irrespective of hypothetical advances in AI, will always be in an asymmetrical relationship with humans and, therefore, can never be under the same authoritative duty as humans. Accordingly, humans and AWS cannot co-exist within the duties of jus in bello.

Addressing Objections

I conclude by addressing foreseeable objections. Namely, is this deontological account too demanding? If deploying AWS could reduce casualties by even 10%, it becomes difficult to choose the higher philosophical ground over the consequentialist saving of lives. I point out, however, that most unlawful deaths are caused by military strategy rather than rogue combatants, and there is already evidence that a shift in strategy can significantly reduce casualties. This gives hope that accepting a total ban on AWS can coincide with a reduction in deaths.

A final relevant objection, put forth by Ryan Jenkins and Duncan Purves, is that banning AWS entails an additional ban on weapons such as cruise missiles and long-distance artillery, since they also fail to respect the target in the way Asaro, Sparrow, and I argue is required. Jenkins and Purves contend that reducing war to close-combat weaponry would make warfare exponentially more fatal. I refute this objection by citing a Canadian-led UN study arguing that contemporary long-range artillery is not the primary driver of lower casualties; rather, small-scale fighting and improvements in healthcare are what have made war less deadly. We can, in any case, still ban AWS even if other weapons may also breach the moral law; and even if a ban did entail further prohibitions, it would not result in a slippery slope to more destructive warfare.

Between the lines

In summary, the benefit of this paper is that it avoids the need to examine hypothetical advancements in AI, which are often difficult to ascertain, in order to assess the immorality of AWS. It sidesteps that conversation by focusing on the logic of the well-established jus in bello principles, arguing that a categorical ban already follows from accepted moral standards. And while I address it in the section on foreseeable objections, I am nevertheless sympathetic to the consequentialist angle of AWS supporters: replacing even one combatant on the battlefield with a machine is still one human life saved. While I did present responses to this objection, it deserves further discussion.

