
Designing for Meaningful Human Control in Military Human-Machine Teams

July 4, 2023

🔬 Research Summary by Jurriaan van Diggelen, a Senior Researcher in Responsible Military AI and Human-Machine Teaming at TNO, the Netherlands. He also chairs several NATO groups on Meaningful Human Control.

[Original paper by Jurriaan van Diggelen, Karel van den Bosch, Mark Neerincx, and Marc Steen]


Overview: Ethical principles of responsible AI in the military state that moral decision-making must be under meaningful human control. This paper takes a first step toward operationalizing this principle by proposing methods for analysis, design, and evaluation.


Introduction

A UN report from 2021 suggested that a drone deployed to attack militia members in Libya’s civil war may have chosen its targets entirely on its own. This would mean that the long-feared killer robot would have made its first appearance in history. To prevent the use of such systems, several humanitarian groups and governments have recommended ethical principles that state that AI should always remain under meaningful human control (MHC). AI systems should never be allowed to make life-or-death decisions. However, the effectiveness of such a principle depends on the availability of more detailed standards for analysis, design, and evaluation. This research is the first step towards such standards.  Our approach is based on three principles. Firstly, MHC should be regarded as a core objective that guides all analysis, design, and evaluation phases. Secondly, MHC affects all parts of the socio-technical system, including humans, machines, AI, interactions, and context. Lastly, MHC should be viewed as a property that spans longer periods, encompassing both prior and real-time control by multiple actors.

Key Insights

Morality within a military context 

Whereas it may be difficult for some people to regard warfare as anything other than morally wrong, the ethics of warfare has been the subject of legal and philosophical analysis since the ancient Greeks. Although much has been written about military ethical principles, and they have been codified in law in various ways (such as rules of engagement and international humanitarian law), applying these principles in military practice is never self-evident. As in other morally sensitive domains (such as healthcare and automotive), the moral complexity of the military context is characterized by opposing values, uncertainty, and evolving public opinion. However, a few factors make the military context unique and are important for designing responsible military AI. These factors, concerning the environment, tasks, and actors, are crucial to understanding morality in the military domain. They include at least: adversary tactics, uncertainties in the operating environment, the presence of civilians, defensive versus offensive operations, lethal versus non-lethal means, the presence of human or non-human targets, and public opinion.

Moral decision-making within military Command and Control

Moral decisions are decisions that affect human values. In the military context, this means that

  • the effect of the decision may be brought about directly or further down the decision chain. 
  • the nature of the decision is highly context-dependent.  
  • human values include physical integrity, privacy, safety, equality, etc.  
  • human values may concern the decision maker's own values or those of others.

Military human-machine teams and meaningful human control 

Within such human-machine teams, the human must play the role of the moral agent (i.e., make all moral decisions), and the machine must be prevented from making any moral decisions. This means that, when designing for MHC, careful consideration must be given to creating optimal conditions for human decision-making, such as providing sufficient time, situation awareness, and information. Furthermore, MHC may be exercised by multiple actors at different moments. We therefore need to think of MHC as a property that emerges from the interactions between multiple humans and technology over a longer period. With prior control, moral decisions are made before the problem occurs; with real-time control, they are made at the moment the problem occurs or shortly before it.

Design solution 1: ban use in certain contexts. Just as the Geneva Protocol bans chemical weapons, one might argue that any Lethal Autonomous Weapon System (LAWS) not under MHC should be banned too. However, unlike detecting illegal chemical substances in weapons, detecting the presence or absence of MHC is far from straightforward. MHC is not a property of technology alone but a relational property between humans, technology, and context. In some cases, we advocate restricting the autonomy of technology in certain pre-specified contexts. For example, the use of weaponized land robots could be restricted to open battlefields and banned in urban environments, where the presence of civilians is more likely.
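To make the idea of pre-specified context restrictions concrete, here is a minimal Python sketch of how such a restriction could be encoded as a deployment-time policy check. The context labels, the policy, and all names are illustrative assumptions, not taken from the paper or any real system.

```python
from dataclasses import dataclass

# Hypothetical policy: contexts in which autonomous operation of a
# weaponized land robot is permitted. Everything outside this set
# requires direct human control (or no deployment at all).
AUTONOMY_PERMITTED_CONTEXTS = {"open_battlefield"}

@dataclass
class OperatingContext:
    label: str              # e.g. "open_battlefield" or "urban"
    civilians_likely: bool  # assessed by human planners before the mission

def autonomy_allowed(ctx: OperatingContext) -> bool:
    """Prior control: the restriction is decided before deployment,
    not negotiated by the machine at run time."""
    return ctx.label in AUTONOMY_PERMITTED_CONTEXTS and not ctx.civilians_likely

if __name__ == "__main__":
    print(autonomy_allowed(OperatingContext("open_battlefield", False)))  # True
    print(autonomy_allowed(OperatingContext("urban", True)))              # False
```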

Design solution 2: improve human-machine teaming. Improving real-time collaboration is often interpreted as human-in-the-loop, human-on-the-loop, or supervisory control. This may help achieve MHC directly, but it burdens the human with the dull and time-consuming task of acting as a failsafe. By designing the AI-based system as a teammate, the AI can manage its collaboration with humans more adaptively and actively, analogous to how human teammates collaborate. Teams are dynamic entities, meaning that how tasks are distributed over team members changes over time and in response to situational task requirements. To design these dynamics, team design patterns (TDPs) have been proposed as reusable solutions to recurring problems in human-machine collaboration. An example of a pattern that allows MHC is the mixed control pattern, in which the machine offloads the human by performing non-moral tasks, but the human takes over whenever they detect that the situation involves moral decision-making.
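As an illustration of the mixed control pattern described above, the following Python sketch routes non-moral tasks to the machine and hands any task flagged as morally sensitive over to the human. It is a simplified reading of the pattern under our own assumptions; all class and function names are hypothetical and not from the paper.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Task:
    description: str
    morally_sensitive: bool  # in practice this flag comes from humans or a detector

class HumanOperator:
    def decide(self, task: Task) -> str:
        # All moral decisions stay with the human: this is where
        # meaningful human control is exercised in real time.
        return f"human decided on: {task.description}"

class MachineTeammate:
    def execute(self, task: Task) -> str:
        # The machine offloads the human by handling routine, non-moral work.
        return f"machine completed: {task.description}"

def mixed_control(tasks: List[Task], human: HumanOperator, machine: MachineTeammate) -> List[str]:
    """Mixed control: the machine performs non-moral tasks; any task
    recognized as morally sensitive is handed over to the human."""
    return [
        human.decide(t) if t.morally_sensitive else machine.execute(t)
        for t in tasks
    ]

if __name__ == "__main__":
    tasks = [
        Task("plan patrol route", morally_sensitive=False),
        Task("engage unidentified vehicle", morally_sensitive=True),
    ]
    for outcome in mixed_control(tasks, HumanOperator(), MachineTeammate()):
        print(outcome)
```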

Design solution 3: artificial morality. A profoundly different approach to achieving morally acceptable machine behavior is to use so-called Artificial Moral Agents (AMAs). AMAs use a computational model of morality to make moral decisions autonomously. Much controversy exists around artificial moral agents, and it is generally agreed that the required technology is still in its infancy. However, a light form of moral agency can be helpful in many cases. For example, the machine only uses moral reasoning to recognize moral sensitivity but leaves it to humans to make moral decisions.
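The "light" form of moral agency mentioned above could look something like the following sketch: the machine only flags morally sensitive situations and escalates them, never deciding itself. The cues and thresholds are invented for illustration and are not from the paper.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Situation:
    humans_present: bool
    lethal_effect_possible: bool
    proposed_action: str

def morally_sensitive(situation: Situation) -> bool:
    # Light moral agency: the machine only *recognizes* moral sensitivity,
    # here using deliberately simple, illustrative cues.
    return situation.humans_present or situation.lethal_effect_possible

def machine_step(situation: Situation) -> Optional[str]:
    """Act only in morally neutral situations; otherwise escalate
    to a human and take no action."""
    if morally_sensitive(situation):
        print(f"ESCALATE to human: {situation.proposed_action}")
        return None  # the machine never makes the moral decision
    return situation.proposed_action

if __name__ == "__main__":
    machine_step(Situation(humans_present=False, lethal_effect_possible=False,
                           proposed_action="reposition sensor mast"))
    machine_step(Situation(humans_present=True, lethal_effect_possible=True,
                           proposed_action="engage target"))
```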

Between the lines

This chapter discussed design considerations for MHC in the military domain. We conclude that no silver bullet exists for achieving MHC over military AI-based systems. Therefore, we need to regard MHC as a core principle guiding all phases of analysis, design, and evaluation; as a property that is intertwined with all parts of the socio-technical system, including humans, machines, AI, interactions, and context; and as a property that spans longer periods, encompassing both prior and real-time control.  

Perhaps we need to understand MHC as a new scientific discipline that has been relevant since AI-based systems were first deployed. MHC could be like safety research, which became relevant once the first technologies that were dangerous to humans appeared, such as automobiles, airplanes, and factories. Over the years, the field of safety research has specialized in several domains and produced a vast array of practical concepts, such as training programs, safety officers, safety culture studies, legal norms, technology standards, and supply chain regulations. This analogy suggests that MHC research is still embryonic and will become even more encompassing than it currently appears.

