
Why reciprocity prohibits autonomous weapons systems in war

May 28, 2023

🔬 Research Summary by Joshua Brand, a PhD student at Télécom Paris-Institut Polytechnique de Paris whose thesis focuses on human-centred explainable artificial intelligence and the virtue of vulnerability.

[Original paper by Joshua L.M. Brand]


Overview: Conversations about, and the likely deployment of, autonomous weapons systems (AWS) in warfare have advanced considerably. This paper analyzes the morality of deploying AWS and ultimately argues for a categorical ban. It arrives at this conclusion without relying on hypothetical technological advances; instead, it focuses on the relationship between humans and machines in the context of the jus in bello principles of international humanitarian law.


Introduction

While the technology behind AWS has yet to reach a point where it could be widely deployed, many of the world’s largest militaries have been rapidly funding its development. The US, for example, spent US$18 billion between 2016 and 2020, and there have already been reported uses of AWS in Libya in 2020. Once the fantastical “killer robot” of science fiction, AWS have become an impending reality. Ethical concerns have subsequently been raised around issues such as disproportionality and the lack of accountability when AWS are used on the battlefield.

This paper, however, determines the suitability of AWS by considering whether they can fit within the standards of jus in bello. I argue that the jus in bello principles of proportionality and discrimination both give rise to a duty to respect humans as ends-in-themselves, and that within this duty lies the necessity of sustaining a reciprocal reasons-giving relationship. Given this prerequisite of preserving reciprocity, AWS cannot be deployed without threatening the established moral standards of warfare conduct.

Key Insights

The moral foundation of jus in bello

The paper begins by setting out the foundational argument that jus in bello, the widely adopted body of international humanitarian law that guides how wars ought to be fought, is best understood as deontological: an action is considered morally good when it adheres to prescribed duties of conduct. One of the most famous such duties, attributed to Immanuel Kant and from which I argue jus in bello is derived, is that all rational beings must be treated as ends-in-themselves and never as mere means. In other words, jus in bello appeals to a duty of respecting and recognizing the humanity of all involved in warfare. Without this primary acknowledgment, the guiding principles of warfare would not exist as they do.

Existing AWS Literature

I then turn to the existing ethical literature on AWS. Considering that AWS are weapons that navigate, select targets, and engage them without meaningful human oversight, Peter Asaro and Robert Sparrow argue for a ban on AWS, at least given current technology. They both focus on the need to recognize and respect the humanity of the potential target. Because machines currently cannot appreciate the intrinsic value of a human and give reasons for their actions, Asaro argues that they cannot meaningfully engage with their targets and thus cannot replace human combatants. Sparrow, meanwhile, emphasizes the need for an interpersonal relationship between combatant and target, one in which the target is understood in their humanity and not merely as another object of warfare; delegating the combatant role to AWS would sever this interpersonal relationship.

These accounts focus on the obligation to recognize potential targets as intrinsically valuable autonomous beings and not mere objects. With current technology, machines essentially only “see” a collection of data points in the shape of a human being. AWS therefore do not currently meet the requirement of acknowledging and respecting humanity as a unique concept.

The emphasis here is on “currently.” Asaro and Sparrow appeal to existing technology and do not justify a categorical ban. They leave open the possibility that if AI progresses sufficiently to recognize and respect a target’s intrinsic value, their ban on AWS would be void. Ronald Arkin uses this openness to technological progress to argue in support of AWS.

Further, these accounts problematically present the combatant-target relationship as unidirectional: they only examine how AWS, acting as the combatant, understand the target.

Reciprocity

Addressing these concerns, I present the key argument of reciprocity: the ban on AWS can be presented as categorical when the combatant-target relationship is understood as reciprocal rather than unidirectional.

To explain what this means, I rely on the work of Christine Korsgaard, a contemporary Kantian philosopher, to link this notion of reciprocity with moral duties, such as those embedded within jus in bello. If any duty is to be objectively true, it must have authority over everyone, including all those involved in warfare. This means that any person, irrespective of the role in which they find themselves, must be equally bound to the obligations of the moral law. For example, a civilian in a war zone can demand that combatants recognize their standing as an innocent bystander and, therefore, have a reason not to target them. If we stop here, however, the moral duty is not yet complete: the civilian must also concede that, should the roles be reversed, the combatant-turned-civilian could make the same moral claims. The duty is a shared and reciprocal authority that reigns over every person, irrespective of their contextual position.

Embedded within this shared reciprocal authority is the consequence of constitutive symmetry—to conceptually reverse roles as required by any moral duty, both entities involved must be constitutively equal. Even if they replicate human capabilities to a high standard, machines will never be on equal footing with humans; most importantly, machines are not concerned with mortality the way humans are. AWS, irrespective of hypothetical advances in AI, will always be in an asymmetrical relationship with humans and, therefore, can never be under the same authoritative duty as humans. Accordingly, humans and AWS cannot co-exist within the duties of jus in bello.

Addressing Objections

I conclude by addressing foreseeable objections, chief among them: is this deontological account too demanding? If deploying AWS could reduce casualties by even 10%, it becomes difficult to choose the higher philosophical ground over the consequentialist saving of lives. I point out, however, that most unlawful deaths are caused by military strategy and not rogue combatants; there is already evidence that a shift in strategy can significantly reduce casualties, giving hope that accepting a total ban on AWS can coincide with a reduction in deaths.

A final relevant objection, put forth by Ryan Jenkins and Duncan Purves, is that banning AWS entails an additional ban on weapons such as cruise missiles and long-distance artillery, since these also fail to respect the target in the way Asaro, Sparrow, and I argue is required. They contend that reducing war to close-combat weaponry would make warfare exponentially more fatal. I refute this objection by citing a Canadian-led UN study arguing that contemporary long-range artillery is not the primary driver of lower casualties; rather, small-scale fighting and improvements in healthcare are the reasons war has become less deadly. We can, in any case, still ban AWS even while other weapons may also breach the moral law; and even if a ban on AWS did entail further prohibitions, it would not result in a slippery slope toward more destructive warfare.

Between the lines

In summary, the benefit of this paper is that it considers the immorality of AWS without needing to examine hypothetical advancements in AI, which are often difficult to ascertain. It sidesteps that conversation by focusing on the logic of the well-established jus in bello principles, arguing that a categorical ban already follows from an accepted position. And while I address it in the section on foreseeable objections, I am nevertheless sympathetic to the consequentialist angle of AWS supporters: replacing even one combatant on the battlefield with a machine is still one human life saved. While I did present responses to this objection, it deserves further discussion.

