
The Impact of Artificial Intelligence on Military Defence and Security

May 28, 2023

🔬 Research Summary by Grace Wright, Business Development Manager at a technology start-up, who has worked in research roles focused on the responsible and ethical development and use of AI and other emerging technologies.

[Original paper by Daniel Araya and Meg King]


Overview: Today’s world order is being heavily influenced by emerging and disruptive technologies, creating an urgency for international cooperation to ensure peace and security amid such rapid change. This paper explores the role of technology, specifically artificial intelligence (AI), in defense and security and potential opportunities for multilateral engagement and governance to guide its responsible development and use. 


Introduction

From chariots to guns, technological advancements in defense and security have revolutionized warfare throughout human history. Emerging and disruptive technologies (EDTs) like quantum computing and artificial intelligence (AI) are no exception, and given their pace of evolution and wide breadth of military applications, they are quickly reshaping today’s geopolitical context. 

This paper focuses on the role of AI in defense and security and seeks to explore how AI can be effectively governed in that context. The authors explore current military applications of AI, specifically its role in augmenting operations of the Canadian Armed Forces (CAF), and concerns over its use in lethal autonomous weapons (LAWs) and adversarial attacks. The authors also examine various governance mechanisms, such as confidence-building measures and treaties, to ensure that as applications of AI in the defense space continue to expand and improve, global peace and security remain at the forefront of the process.

Key Insights

AI & Geopolitics 

AI has permeated both the public and private sectors, and its applications in defense and security, in particular, continue to be a source of research, development, and controversy. Scholars increasingly link military power to the ability to effectively weaponize EDTs like AI and point to these technologies' growing influence on geopolitical issues and relationships. Governments in countries like the United States and China, for example, are investing heavily in research and development of AI-driven defense technologies, which continues to fuel military rivalries and shift the balance of power.

Defense Applications

In the context of defense and security, AI is considered a force multiplier critical to state militaries worldwide, including Canada and major allied states. As the authors point out, AI is fundamentally reshaping how decisions are made and how force is applied, and it is having a transformative effect on military strategy overall.

Data plays a significant role in these advancements. As more data becomes available, militaries can feed it into AI systems that generate data-driven insights about operations, shaping decision-making and promising to improve overall organizational efficiency and resource allocation. For organizations like the CAF, the authors argue that harnessing these applications and their corresponding benefits will be critical in the digital age.

Risks & Concerns 

While advances in AI and other EDTs present many opportunities for enhancing military operations, they also carry numerous risks, from intensifying military competition between state and non-state actors to dual-use challenges. Given the rapid advancement of these technologies and their potential for harm, the authors argue that appropriate guardrails must be put in place at a global level to mitigate future crises and risks.

Technologies that have received particular attention include lethal autonomous weapons (i.e., systems that can select and engage targets without human authorization) and drone swarms (i.e., groups of small unmanned aerial vehicles that can launch weapons or conduct surveillance). AI has also been used to launch adversarial attacks (i.e., attacks that identify and exploit weaknesses in software). These applications can be hugely beneficial to militaries but also present numerous risks. In the case of autonomous weapons, for example, while some argue they will reduce collateral damage by enhancing precision and accuracy, the international community has raised an outcry over the lack of meaningful human engagement in the decision-making process.

Technological Governance

The development and deployment of these technologies present a challenge for global governance. The authors suggest that greater cooperation and multilateral efforts are required to address these concerns at the international level. To this end, they point to a variety of tools at the international community's disposal, including drawing on previous treaties to inform new guardrails for AI and establishing confidence-building measures.

Ultimately, the authors acknowledge that because AI is constantly evolving, regulating it globally is immensely challenging, and the window of opportunity for negotiating AI governance may be closing quickly. However, Canada can play an important role in coalescing these efforts and encouraging cooperation as part of a broader ecosystem of actors seeking change in this area. Doing so will require greater collaboration between the private and public sectors, along with an ongoing effort to update governance approaches as these technologies evolve so that they remain responsive.

Between the lines

The topics explored in this paper underline not only the challenges technology creates for global governance and geopolitical relations but also the fundamental challenge policymakers at all levels of government face in effectively governing technologies that are evolving rapidly in a competitive landscape. The paper underscores the importance of collective action to ensure these technologies benefit society and advance peace while acknowledging the tangible benefits they offer in improving military performance across these organizations.

While this paper provides an important initial grounding for the discussion, it prompts questions for further consideration. In particular, while the authors, and certainly many national governments, see the importance of governance mechanisms for ensuring peace and security, some states may not be interested in cooperating on the governance of military technology. Given the highly competitive nature of military power, is cooperation from the most influential state militaries realistic in this context?

