Designing for Meaningful Human Control in Military Human-Machine Teams

July 4, 2023

🔬 Research Summary by Jurriaan van Diggelen, a Senior Researcher in Responsible Military AI and Human-Machine Teaming at TNO, the Netherlands. He also chairs several NATO groups on Meaningful Human Control.

[Original paper by Jurriaan van Diggelen, Karel van den Bosch, Mark Neerincx, and Marc Steen]


Overview: Ethical principles of responsible AI in the military state that moral decision-making must remain under meaningful human control. This paper operationalizes that principle by proposing methods for analysis, design, and evaluation.


Introduction

A 2021 UN report suggested that a drone deployed against militia members in Libya’s civil war may have chosen its targets entirely on its own. If so, the long-feared killer robot made its first appearance in history. To prevent the use of such systems, several humanitarian groups and governments have recommended ethical principles stating that AI should always remain under meaningful human control (MHC): AI systems should never be allowed to make life-or-death decisions. However, the effectiveness of such a principle depends on the availability of more detailed standards for analysis, design, and evaluation, and this research is a first step towards such standards. Our approach rests on three principles. First, MHC should be regarded as a core objective that guides all analysis, design, and evaluation phases. Second, MHC affects all parts of the socio-technical system, including humans, machines, AI, interactions, and context. Third, MHC should be viewed as a property that spans longer periods, encompassing both prior and real-time control by multiple actors.

Key Insights

Morality within a military context 

While it may be difficult for some people to regard warfare as anything other than morally wrong, the ethics of warfare has been the subject of legal and philosophical analysis since the ancient Greeks. Although much has been written about military ethical principles, and they have been codified in law in various ways (such as rules of engagement and international humanitarian law), applying these principles in military practice is never straightforward. As in other morally sensitive domains (such as healthcare and automotive), the moral complexity of the military context is characterized by opposing values, uncertainty, and evolving public opinion. However, a few factors make the military context unique and are important for designing responsible military AI. These factors of the environment, tasks, and actors are crucial to understanding morality in the military domain. They include at least: adversary tactics, uncertainties in the operating environment, the presence of civilians, defensive versus offensive operations, lethal versus non-lethal force, the presence of human or non-human targets, and public opinion.

Moral decision-making within military Command and Control

Moral decisions are decisions that affect human values. In the military context, this means that

  • the effect of the decision may be brought about directly or further down the decision chain. 
  • the nature of the decision is highly context-dependent.  
  • human values include physical integrity, privacy, safety, equality, etc.  
  • the human values at stake may be the decision-maker’s own or those of others.

Military human-machine teams and meaningful human control 

Within such human-machine teams, the human must play the role of the moral agent (i.e., make all moral decisions), and the machine must be prevented from making any moral decisions. This means that, when designing for MHC, careful consideration must be given to creating optimal conditions for human decision-making, such as providing sufficient time, situation awareness, and information. Furthermore, MHC may be exercised by multiple actors at different moments. We therefore need to think of MHC as a property that emerges from the interactions between multiple humans and technology over a longer period. With prior control, moral decisions are made before the problem occurs; with real-time control, moral decisions are made at, or shortly before, the moment the problem occurs.

Design solution 1: Ban use in certain contexts. Just as the Geneva Protocol bans chemical weapons, one might argue that any Lethal Autonomous Weapon System (LAWS) not under MHC should be banned too. However, unlike detecting illegal chemical substances in weapons, detecting the presence or absence of MHC is far from straightforward: MHC is not a property of technology alone but a relational property between humans, technology, and context. In some cases, we advocate restricting the autonomy of technology in certain pre-specified contexts. For example, the use of weaponized land robots may be restricted to open battlefields and banned in urban environments, where the presence of civilians is more likely.
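To make this concrete, here is a minimal sketch (in Python) of how such a pre-specified context restriction could be encoded as prior control. The context labels and the `AutonomyPolicy` class are illustrative assumptions for this summary, not constructs from the paper.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Context(Enum):
    """Illustrative operating contexts; a real system would need a far richer model."""
    OPEN_BATTLEFIELD = auto()
    URBAN = auto()
    UNKNOWN = auto()


@dataclass(frozen=True)
class AutonomyPolicy:
    """Prior control: the set of contexts in which autonomous operation is
    permitted is fixed before deployment, not modifiable by the machine at runtime."""
    permitted_contexts: frozenset

    def autonomy_permitted(self, context: Context) -> bool:
        # Fail closed: anything outside the pre-specified contexts
        # (including UNKNOWN) reverts to human control.
        return context in self.permitted_contexts


# Example: weaponized land robots allowed only on open battlefields.
policy = AutonomyPolicy(frozenset({Context.OPEN_BATTLEFIELD}))
assert policy.autonomy_permitted(Context.OPEN_BATTLEFIELD)
assert not policy.autonomy_permitted(Context.URBAN)
```

Such a whitelist is only as meaningful as the system’s ability to recognize its context, which is exactly why the paper stresses that MHC is a relational property rather than a property of the technology alone.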

Design solution 2: Improve human-machine teaming. Improving real-time collaboration is often interpreted as human-in-the-loop, human-on-the-loop, or supervisory control. This may help achieve MHC directly, but it burdens the human with the dull and time-consuming task of acting as a failsafe. By designing the AI-based system as a teammate, the AI can manage its collaboration with humans more adaptively and actively, analogous to how human teammates collaborate. Teams are dynamic entities: how tasks are distributed over team members changes over time and in response to situational task requirements. To design these dynamics, team design patterns (TDPs) have been proposed as reusable solutions to recurring problems in human-machine collaboration. One pattern that allows MHC is the mixed control pattern, in which the machine offloads the human by performing non-moral tasks, and the human takes over whenever they detect that the situation involves moral decision-making.
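As a rough illustration of the mixed control pattern, the Python sketch below (with hypothetical task names and a deliberately simplified "morally sensitive" flag) lets the machine execute non-moral tasks while routing morally sensitive ones, or any task the human pulls back, to the human teammate.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Task:
    name: str
    morally_sensitive: bool  # in practice, this judgment is itself hard and contested


@dataclass
class MixedControlTeam:
    """Sketch of the mixed control team design pattern: the machine offloads
    the human by doing non-moral work; the human takes over moral decisions."""
    human_queue: List[Task] = field(default_factory=list)
    machine_log: List[str] = field(default_factory=list)

    def submit(self, task: Task, human_override: bool = False) -> None:
        # Real-time control: the human can always pull a task back,
        # even one the machine classified as non-moral.
        if task.morally_sensitive or human_override:
            self.human_queue.append(task)
        else:
            self.machine_log.append(f"machine executed: {task.name}")


team = MixedControlTeam()
team.submit(Task("route planning", morally_sensitive=False))
team.submit(Task("target engagement", morally_sensitive=True))
team.submit(Task("sensor sweep", morally_sensitive=False), human_override=True)
print(team.machine_log)                    # ['machine executed: route planning']
print([t.name for t in team.human_queue])  # ['target engagement', 'sensor sweep']
```

The point of the pattern is the dynamic reallocation: which teammate holds a task is decided per situation, not fixed at design time.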

Design solution 3: Artificial morality. A profoundly different approach to achieving morally acceptable machine behavior is to use so-called Artificial Moral Agents (AMAs). AMAs use a computational model of morality to make moral decisions autonomously. Much controversy surrounds artificial moral agents, and it is generally agreed that the required technology is still in its infancy. However, a light form of moral agency can be helpful in many cases: for example, the machine uses moral reasoning only to recognize morally sensitive situations, leaving the moral decisions themselves to humans.
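A minimal sketch of such "light" moral agency follows, under the assumption of a toy keyword-based recognizer (the paper does not specify a recognition method): the machine scores a situation for moral sensitivity and escalates, but never decides.

```python
def moral_sensitivity_score(situation: str) -> float:
    """Toy recognizer that flags descriptions mentioning morally salient features.
    A real AMA-light component would use a learned or knowledge-based model."""
    salient = ("civilian", "casualty", "hospital", "school", "surrender")
    hits = sum(word in situation.lower() for word in salient)
    return min(1.0, hits / 2)


def handle(situation: str, threshold: float = 0.5) -> str:
    # Moral reasoning is used only for recognition; the decision is deferred.
    if moral_sensitivity_score(situation) >= threshold:
        return "escalate to human operator"
    return "proceed under standing orders"


print(handle("convoy approaching, civilians near a hospital"))  # escalate to human operator
print(handle("open terrain, no contacts"))                      # proceed under standing orders
```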

Between the lines

This chapter discussed design considerations for MHC in the military domain. We conclude that no silver bullet exists for achieving MHC over military AI-based systems. Therefore, we need to regard MHC as a core principle guiding all phases of analysis, design, and evaluation; as a property that is intertwined with all parts of the socio-technical system, including humans, machines, AI, interactions, and context; and as a property that spans longer periods, encompassing both prior and real-time control.  

Perhaps we need to understand MHC as a new scientific discipline that has been relevant since AI-based systems were first deployed. MHC could be like safety research, which became relevant once the first technologies dangerous to humans appeared, such as automobiles, airplanes, and factories. Over the years, the field of safety research has specialized into several domains and produced a vast array of practical concepts, such as training programs, safety officers, safety culture studies, legal norms, technology standards, and supply chain regulations. This analogy suggests that MHC research may still be embryonic and will become even more encompassing than it currently appears.
