
NATO Artificial Intelligence Strategy

November 24, 2021

🔬 Research summary by Angshuman Kaushik, Researcher in AI Policy, Governance and Ethics.

[Original document by NATO]


Overview: On October 21-22, 2021, at the NATO Defence Ministers Meeting held in Brussels, the ministers agreed to adopt the NATO Artificial Intelligence Strategy (hereinafter “the strategy”). The strategy itself is not publicly available; what is accessible is a document titled ‘Summary of the NATO Artificial Intelligence Strategy’. This write-up provides an overview of that summary.


Introduction

“We see authoritarian regimes racing to develop new technologies, from artificial intelligence to autonomous systems,” NATO Secretary General Jens Stoltenberg said at a media conference at NATO headquarters in Brussels on October 20, 2021, the day before the Defence Ministers Meeting. No prizes for guessing whom he was referring to with the phrase ‘authoritarian regimes’. Putting out a strategy on AI is a step in the right direction, but how far it will be implemented in practice remains the sixty-four-thousand-dollar question. Nonetheless, the fourfold aim of the strategy is as follows:

  • to provide a foundation for NATO and Allies to lead by example and encourage the development and use of AI in a responsible manner for Allied defence and security purposes; 
  • to accelerate and mainstream AI adoption in capability development and delivery, enhancing interoperability within the Alliance, including through proposals for AI Use Cases, new structures, and new programmes; 
  • to protect and monitor our AI technologies and ability to innovate, addressing security policy considerations such as the operationalisation of our Principles of Responsible Use; and
  • to identify and safeguard against the threats from malicious use of AI by state and non-state actors.

The Strategy

The strategy describes AI as changing the global defence and security environment, offering an unprecedented opportunity to strengthen NATO’s technological edge while at the same time escalating the speed of the threats the Alliance faces. It further notes that AI will likely affect the full spectrum of activities undertaken by the Alliance in support of its three core tasks: collective defence, crisis management, and cooperative security. Going forward, the Alliance aims to integrate AI in an interoperable way in support of these tasks.

The strategy recognizes the leading role played by the private sector and academia in the development of AI, and envisages significant cooperation between NATO, the private sector, and academia; a capable workforce of NATO technical and policy-based AI talent; a robust, relevant, and secure data infrastructure; and appropriate cyber defences. According to a footnote in the strategy, ‘private sector’ includes Big Tech, start-ups, entrepreneurs, and SMEs, as well as risk capital (such as venture and private equity funds). The AI revolution is plainly being spearheaded by the private sector and academia, and NATO plans to attract the best talent to its workforce.

Under the forthcoming Defence Innovation Accelerator for the North Atlantic (DIANA), NATO aims to support its AI ambition through national AI test centres, and it intends to conduct regular high-level dialogues engaging technology companies at a strategic political level. At the forefront of the strategy lie the NATO Principles of Responsible Use of AI in Defence, which are based on existing and widely accepted ethical, legal, and policy commitments.

NATO Principles of Responsible Use of AI in Defence

NATO and the Allies commit to ensuring that the AI applications they develop and consider for deployment will be in accordance with the following six principles at the various stages of their lifecycles:

A. Lawfulness: AI applications will be developed and used in accordance with national and international law, including international humanitarian law and human rights law, as applicable;

B. Responsibility and Accountability: AI applications will be developed and used with appropriate levels of judgment and care; clear human responsibility shall apply in order to ensure accountability;

C. Explainability and Traceability: AI applications will be appropriately understandable and transparent, including through the use of review methodologies, sources, and procedures. This includes verification, assessment and validation mechanisms at either a NATO and/or national level;

D. Reliability: AI applications will have explicit, well-defined use cases. The safety, security, and robustness of such capabilities will be subject to testing and assurance within those use cases across their entire life cycle, including through established NATO and/or national certification procedures;

E. Governability: AI applications will be developed and used according to their intended functions and will allow for: appropriate human-machine interaction; the ability to detect and avoid unintended consequences; and the ability to take steps, such as disengagement or deactivation of systems, when such systems demonstrate unintended behaviour; and

F. Bias Mitigation: Proactive steps will be taken to minimise any unintended bias in the development and use of AI applications and in data sets.

The commitment to abide by the principles at the various stages of an AI system’s lifecycle is a substantial one, and only time will tell how it will be operationalised. Moreover, terms such as ‘appropriate levels’, ‘judgment and care’, and ‘appropriately understandable’ need exposition. The strategy also discusses NATO operationalising its Principles of Responsible Use to ensure the safe and responsible use of AI. It lays emphasis on consciously putting bias mitigation efforts into practice, seeking to minimise biases related to gender, ethnicity, or other personal attributes. There is a further commitment to conduct appropriate risk and/or impact assessments prior to deploying AI capabilities.

The strategy also takes note of the fact that some state and non-state actors will likely seek to exploit defects or limitations within NATO’s AI technologies. Hence, NATO must strive to protect its AI systems from such interference, manipulation, or sabotage, in line with the Reliability Principle of Responsible Use. Adequate security certification requirements, such as specific threat analysis frameworks and tailored security audits for purposes of ‘stress-testing’, also find mention in the strategy. It further refers to AI’s impact on critical infrastructure, capabilities, and civil preparedness, including those covered by NATO’s seven resilience Baseline Requirements, creating potential vulnerabilities that could be exploited by certain state and non-state actors. Issues such as disinformation by state and non-state actors and public distrust of the military use of AI are also stressed. Finally, the strategy envisions working with relevant international AI standards-setting bodies to help foster military-civil coherence with regard to AI standards.

Between the lines

Several key areas of the strategy need elucidation. First, NATO’s position on the use of Lethal Autonomous Weapon Systems (LAWS) in a ‘responsible manner’; in fact, the strategy does not mention LAWS at all. Second, the scope of ‘interoperability’ needs further clarity. Third, how security policy considerations come under the ambit of the ‘operationalisation of Principles of Responsible Use’ requires elaboration. Fourth, it needs to be clarified whether a NATO member state falls within the meaning of a ‘state actor’ if it is involved in the malicious use of AI. For instance, what happens in a scenario like Turkey’s recent use of AI-controlled drones (read LAWS) in the Libyan skies?
