Research summary by Angshuman Kaushik, Researcher in AI Policy, Governance and Ethics.
[Original document by NATO]
Overview: On October 21-22, 2021, during the NATO Defence Ministers Meeting held in Brussels, the ministers agreed to adopt the NATO Artificial Intelligence Strategy (hereinafter "the strategy"). The strategy itself is not publicly available; what is accessible is a document titled "Summary of the NATO Artificial Intelligence Strategy". This write-up provides an overview of that summary.
Introduction
"We see authoritarian regimes racing to develop new technologies, from artificial intelligence to autonomous systems," NATO Secretary General Jens Stoltenberg said during a media conference at NATO headquarters in Brussels on October 20, 2021, a day prior to the aforesaid Defence Ministers Meeting. No prizes for guessing who he was referring to by the phrase "authoritarian regimes". Although putting out a strategy on AI is a step in the right direction, how far it will be implemented in practice is the sixty-four-thousand-dollar question. Nonetheless, the fourfold aim of the strategy is as follows:
- to provide a foundation for NATO and Allies to lead by example and encourage the development and use of AI in a responsible manner for Allied defence and security purposes;
- to accelerate and mainstream AI adoption in capability development and delivery, enhancing interoperability within the Alliance, including through proposals for AI Use Cases, new structures, and new programmes;
- to protect and monitor our AI technologies and ability to innovate, addressing security policy considerations such as the operationalisation of our Principles of Responsible Use; and
- to identify and safeguard against the threats from malicious use of AI by state and non-state actors.
The Strategy
The strategy describes AI as changing the global defence and security environment, offering an unprecedented opportunity to strengthen NATO's technological edge while at the same time escalating the speed of the threats the Alliance faces. It further mentions that AI will likely affect the full spectrum of activities undertaken by the Alliance in support of its three core tasks: collective defence, crisis management, and cooperative security. In the future, the Alliance aims to integrate AI in an interoperable way to support these three core tasks. The strategy recognizes the leading role played by the private sector and academia in the development of AI and envisages significant cooperation between NATO, the private sector and academia; a capable workforce of NATO technical and policy-based AI talent; a robust, relevant, secure data infrastructure; and appropriate cyber defences. According to a footnote in the strategy, "private sector" includes Big Tech, start-ups, entrepreneurs and SMEs as well as risk capital (such as venture and private equity funds). It is obvious that the AI revolution is being spearheaded by the private sector and academia, and NATO plans to attract the best talent to join its workforce. Under the forthcoming Defence Innovation Accelerator for the North Atlantic (DIANA), NATO aims to support its AI ambition through national AI test centres, and also intends to conduct regular high-level dialogues, engaging technology companies at a strategic political level. At the forefront of the strategy lie the NATO Principles of Responsible Use for AI in Defence, which are based on existing and widely accepted ethical, legal, and policy commitments.
NATO Principles of Responsible Use of AI in Defence
NATO and the Allies commit to ensuring that the AI applications they develop and consider for deployment will be, at the various stages of their lifecycles, in accordance with the following six principles:
A. Lawfulness: AI applications will be developed and used in accordance with national and international law, including international humanitarian law and human rights law, as applicable;
B. Responsibility and Accountability: AI applications will be developed and used with appropriate levels of judgment and care; clear human responsibility shall apply in order to ensure accountability;
C. Explainability and Traceability: AI applications will be appropriately understandable and transparent, including through the use of review methodologies, sources, and procedures. This includes verification, assessment and validation mechanisms at either a NATO and/or national level;
D. Reliability: AI applications will have explicit, well-defined use cases. The safety, security, and robustness of such capabilities will be subject to testing and assurance within those use cases across their entire life cycle, including through established NATO and/or national certification procedures;
E. Governability: AI applications will be developed and used according to their intended functions and will allow for: appropriate human-machine interaction; the ability to detect and avoid unintended consequences; and the ability to take steps, such as disengagement or deactivation of systems, when such systems demonstrate unintended behaviour; and
F. Bias Mitigation: Proactive steps will be taken to minimise any unintended bias in the development and use of AI applications and in data sets.
The commitment to abide by the principles at the various stages of the lifecycle of AI systems is a substantial one, and only time will tell how it will be operationalized. Moreover, terms like "appropriate levels", "judgment and care", and "appropriately understandable" need exposition. Further, the strategy also talks about NATO operationalising its Principles of Responsible Use to ensure the safe and responsible use of AI. It lays emphasis on consciously putting bias mitigation efforts into practice, which will seek to minimise biases relating to gender, ethnicity or other personal attributes. There is a further commitment to conduct appropriate risk and/or impact assessments prior to deploying AI capabilities. The strategy also takes note of the fact that some state and non-state actors will likely seek to exploit defects or limitations within NATO's AI technologies. Hence, NATO must strive to protect its AI systems from such interference, manipulation, or sabotage, in line with the Reliability Principle of Responsible Use. Adequate security certification requirements, such as specific threat analysis frameworks and tailored security audits for purposes of "stress-testing", also find mention in the strategy. The strategy also refers to AI's impact on critical infrastructure, capabilities and civil preparedness, including those covered by NATO's seven resilience Baseline Requirements, creating potential vulnerabilities that could be exploited by certain state and non-state actors. Issues such as disinformation by state and non-state actors and public distrust of military use of AI are also stressed. The strategy envisions further working with relevant international AI standards-setting bodies to help foster military-civil coherence with regard to AI standards.
Between the lines
Some key areas of the strategy's aim need elucidation. First, NATO's position on the use of Lethal Autonomous Weapon Systems (LAWS) in a "responsible manner"; in fact, the strategy does not mention LAWS at all. Second, the scope of "interoperability" needs further clarity. Third, how security policy considerations come under the ambit of the "operationalisation of our Principles of Responsible Use" requires elaboration. Fourth, it needs to be clarified whether a NATO member state will fall within the meaning of a "state actor" if it is involved in the malicious use of AI. For instance, what happens in a scenario like Turkey's use of AI-controlled drones (read LAWS) in the Libyan skies in the recent past?