Montreal AI Ethics Institute

Democratizing AI ethics literacy


Designing for Meaningful Human Control in Military Human-Machine Teams

July 4, 2023

🔬 Research Summary by Jurriaan van Diggelen, a Senior Researcher in Responsible Military AI and Human-Machine Teaming at TNO, the Netherlands. He also chairs several NATO groups on Meaningful Human Control.

[Original paper by Jurriaan van Diggelen, Karel van den Bosch, Mark Neerincx, and Marc Steen]


Overview: Ethical principles of responsible AI in the military state that moral decision-making must remain under meaningful human control. This paper operationalizes that principle by proposing methods for analysis, design, and evaluation.


Introduction

A UN report from 2021 suggested that a drone deployed to attack militia members in Libya’s civil war may have chosen its targets entirely on its own. If true, the long-feared killer robot made its first appearance in history. To prevent the use of such systems, several humanitarian groups and governments have recommended ethical principles stating that AI should always remain under meaningful human control (MHC): AI systems should never be allowed to make life-or-death decisions. However, the effectiveness of such a principle depends on the availability of more detailed standards for analysis, design, and evaluation. This research is a first step towards such standards.

Our approach is based on three principles. First, MHC should be regarded as a core objective that guides all analysis, design, and evaluation phases. Second, MHC affects all parts of the socio-technical system, including humans, machines, AI, interactions, and context. Third, MHC should be viewed as a property that spans longer periods, encompassing both prior and real-time control by multiple actors.

Key Insights

Morality within a military context 

While it may be difficult for some people to regard warfare as anything other than morally wrong, the ethics of warfare has been the subject of legal and philosophical analysis since the ancient Greeks. Although much has been written about military ethical principles, and they have been codified in law in various ways (such as rules of engagement and international humanitarian law), applying these principles in military practice is never self-evident. As in other morally sensitive domains (such as healthcare and automotive), the moral complexity of the military context is characterized by opposing values, uncertainty, and evolving public opinion. However, a few factors make the military context unique and are important for designing responsible military AI. These factors of the environment, tasks, and actors are crucial to understanding morality in the military domain. They include at least: adversary tactics, uncertainties in the operating environment, presence of civilians, defensive versus offensive operations, lethal versus non-lethal force, presence of human or non-human targets, and public opinion.

Moral decision-making within military Command and Control

Moral decisions are decisions that affect human values. In the military context, this means that

  • the effect of the decision may be brought about directly or further down the decision chain. 
  • the nature of the decision is highly context-dependent.  
  • human values include physical integrity, privacy, safety, equality, etc.
  • the values at stake may be those of the decision-maker or of others.

Military human-machine teams and meaningful human control 

Within military human-machine teams, the human must play the role of the moral agent (i.e., make all moral decisions), and the machine must be prevented from making any moral decisions. This means that careful consideration must be given to creating optimal conditions for human decision-making while designing for MHC, such as providing sufficient time, situation awareness, and information. Furthermore, MHC may be exercised by multiple actors at different moments. We therefore need to think of MHC as an emergent property arising from the interactions between multiple humans and technology over a longer period. With prior control, moral decisions are made before the problem occurs; with real-time control, they are made simultaneously with, or shortly before, the problem.

Design solution 1: ban use in certain contexts. Just as the Geneva Protocol bans chemical weapons, one might argue that any Lethal Autonomous Weapon System (LAWS) not under MHC should be banned too. However, unlike detecting illegal chemical substances in weapons, detecting the presence or absence of MHC is far from straightforward. MHC is not a property of technology alone but a relational property between humans, technology, and context. In some cases, we advocate restricting the autonomy of technology in certain pre-specified contexts. For example, the use of weaponized land robots might be restricted to open battlefields and banned in urban environments, where civilians are more likely to be present.

Design solution 2: Improve human-machine teaming. Improving real-time collaboration is often interpreted as human-in-the-loop, human-on-the-loop, or supervisory control. This may help achieve MHC directly, but it burdens the human with the dull and time-consuming task of acting as a failsafe. By designing the AI-based system as a teammate, the AI can manage its collaboration with humans more adaptively and actively, analogous to how human teammates collaborate. Teams are dynamic entities, meaning that how tasks are distributed over team members changes over time and in response to situational task requirements. To design these dynamics, team design patterns (TDPs) have been proposed as reusable solutions to recurring problems in human-machine collaboration. An example of a pattern that enables MHC is the mixed-control pattern, in which the machine offloads the human by performing non-moral tasks; when the human detects that the situation involves moral decision-making, the human takes over.
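The mixed-control pattern can be illustrated with a minimal sketch. This is a hypothetical toy implementation, not code from the paper: all names (`Task`, `Machine`, `Human`, `MixedControlTeam`) are invented for illustration, and the human's moral judgment is reduced to reading a flag.

```python
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    morally_sensitive: bool = False  # flagged by the human when detected

class Machine:
    def execute(self, task: Task) -> str:
        # Routine, non-moral work stays with the machine.
        return f"machine executed: {task.description}"

class Human:
    def detects_moral_sensitivity(self, task: Task) -> bool:
        # In reality this is a human judgment call; here it reads a flag.
        return task.morally_sensitive

    def execute(self, task: Task) -> str:
        return f"human decided: {task.description}"

class MixedControlTeam:
    """Routes each task: the human takes over whenever moral decision-making is involved."""
    def __init__(self, human: Human, machine: Machine):
        self.human = human
        self.machine = machine

    def handle(self, task: Task) -> str:
        if self.human.detects_moral_sensitivity(task):
            return self.human.execute(task)   # real-time human control
        return self.machine.execute(task)     # machine offloads routine work

team = MixedControlTeam(Human(), Machine())
print(team.handle(Task("plot patrol route")))
print(team.handle(Task("engage possible target", morally_sensitive=True)))
```

The point of the sketch is structural: the machine never decides whether a task is moral, and every task the human flags is pulled back under direct human control.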

Design solution 3: Artificial morality. A profoundly different approach to achieving morally acceptable machine behavior is to use so-called Artificial Moral Agents (AMAs). AMAs use a computational model of morality to make moral decisions autonomously. Much controversy exists around artificial moral agents, and it is generally agreed that the required technology is still in its infancy. However, a lighter form of moral agency can be helpful in many cases. For example, the machine uses moral reasoning only to recognize morally sensitive situations but leaves the actual moral decisions to humans.
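This lighter form of moral agency can also be sketched in a few lines. The sketch below is a purely illustrative assumption, not a real model of morality or anything from the paper: the machine checks a handful of invented trigger conditions, and when any fire it escalates to a human instead of deciding itself.

```python
# Illustrative trigger conditions for moral sensitivity (assumed, not from the paper).
MORAL_TRIGGERS = {
    "civilians_nearby": "risk to physical integrity of non-combatants",
    "target_is_human": "life-or-death decision",
    "urban_environment": "high uncertainty about collateral effects",
}

def assess(situation: dict) -> tuple[str, list[str]]:
    """Recognize moral sensitivity and escalate; never make the moral decision."""
    reasons = [why for flag, why in MORAL_TRIGGERS.items() if situation.get(flag)]
    action = "escalate_to_human" if reasons else "proceed_autonomously"
    return action, reasons

action, reasons = assess({"civilians_nearby": True, "target_is_human": True})
print(action)   # escalate_to_human
print(reasons)  # both triggered justifications, for the human to review
```

Note the asymmetry built into the design: the machine may produce reasons for caution, but the only actions available to it are "proceed" on routine tasks or "escalate"; the moral choice itself stays with the human.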

Between the lines

This chapter discussed design considerations for MHC in the military domain. We conclude that no silver bullet exists for achieving MHC over military AI-based systems. Therefore, we need to regard MHC as a core principle guiding all phases of analysis, design, and evaluation; as a property intertwined with all parts of the socio-technical system, including humans, machines, AI, interactions, and context; and as a property that spans longer periods, encompassing both prior and real-time control.

Perhaps we need to understand MHC as a new scientific discipline that has been relevant since AI-based systems were first deployed. MHC could develop like safety research, which became relevant after the first technologies dangerous to humans appeared, such as automobiles, airplanes, and factories. Over the years, the field of safety research has specialized into several domains and produced a vast array of practical concepts, such as training programs, safety officers, safety culture studies, legal norms, technology standards, and supply chain regulations. This analogy suggests that MHC research is still embryonic and will become even more encompassing than it currently appears.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

About Us


Founded in 2018, the Montreal AI Ethics Institute (MAIEI) is an international non-profit organization equipping citizens concerned about artificial intelligence and its impact on society to take action.

  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.
  • Learn more about our open access policy here.