Montreal AI Ethics Institute
Democratizing AI ethics literacy


AI Policy Corner: AI and Security in Africa: Assessing the African Union’s Continental AI Strategy

September 30, 2025

✍️ By Ogadinma Enwereazu

Ogadinma is a Ph.D. student in the Department of Political Science and a Graduate Affiliate at the Governance and Responsible AI Lab (GRAIL), Purdue University.


📌 Editor’s Note: This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance. This piece analyses the African Union’s strategic framework to advance Africa’s AI development.


The African Union (AU) Continental AI Strategy lays out a unified strategic framework to advance Africa’s AI development. Unlike the EU AI Act, the strategy is non-binding; nonetheless, it serves as a high-level document that guides member states in developing national AI policies consistent with continental goals. It also cultivates capacity building and mobilizes investments tailored to Africa’s unique socio-economic landscape.

The strategy rests on five key focus areas:

  • Harnessing AI’s benefits for socio-economic development, agriculture, education, healthcare, climate, and public service.
  • Minimizing risks related to ethical, social, and security concerns, including bias, misinformation, and human rights protections.
  • Building capability through infrastructure, data platforms, AI skills development, research, and innovation.
  • Fostering regional and international cooperation to strengthen Africa’s AI ecosystem and global participation.
  • Stimulating public and private investment in AI initiatives and startups.

Security is central to the risk-minimization focus area. Under this category, the framework treats AI as both an opportunity and a challenge for peace and security governance, designating security as a priority area alongside its broader focus on AI innovation. It also emphasizes the importance of adopting and implementing technical standards to ensure the safety and security of AI systems across the continent, aiming to prevent unauthorized access by malign actors such as terrorists. Further, this focus area encourages member states to address AI’s potential for manipulation in spreading misinformation, fake news, and hate speech, tactics frequently employed by extremist groups to radicalize and incite violence.

In the African context, emerging evidence shows that non-state actors and other groups are adopting AI technologies for propaganda dissemination, video editing, and manipulation of written communications, thereby enhancing their cyber and physical operational capabilities. Although the sophistication and extent of AI use by these groups remain limited and experimental, the rapid accessibility and low entry barriers of AI tools raise concerns about accelerated exploitation by violent extremist groups. This reality makes it urgent for African nations to prioritize AI-ready counterterrorism frameworks.

The strategy also advocates for rigorous assessment of AI safety, particularly the risks tied to emerging technologies like generative AI and large language models. This includes the call for transparent AI systems and frameworks to mitigate misuse and vulnerabilities. While these transparency principles are strongly endorsed, specific details on how to operationalize such transparency remain underdeveloped within the document. It also acknowledges broader cybersecurity challenges, with some calls to strengthen national cybersecurity systems in line with the AU Malabo Convention and other continental frameworks.

Despite the promise of its five focus areas, the strategy faces notable limitations concerning security. First, as mentioned earlier, it operates largely as a voluntary guiding framework without binding obligations, meaning that implementation is likely to vary depending on each country’s political will. Second, the strategy does not provide detailed counterterrorism measures or practical defensive guidelines for integrating AI into national and regional security operations. This gap is particularly concerning because African states face challenges such as limited technical expertise and inadequate funding, which significantly hinder their ability to build security infrastructure. These challenges are now further compounded by the accelerating advent of AI, which brings both new opportunities and heightened risks that the current strategy does not fully address.

Further Reading

  1. Smart Africa’s AI Blueprint
  2. How persuasive is AI-generated propaganda?
  3. Shaping Africa’s AI strategy
  4. Africa’s AI Innovations Database


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.
