AI Policy Corner: AI and Security in Africa: Assessing the African Union’s Continental AI Strategy

September 30, 2025


✍️By Ogadinma Enwereazu.

Ogadinma is a Ph.D. student in the Department of Political Science and a Graduate Affiliate at the Governance and Responsible AI Lab (GRAIL), Purdue University.


📌 Editor’s Note: This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance. This piece analyses the African Union’s strategic framework to advance Africa’s AI development.


The African Union (AU) Continental AI Strategy lays out a unified strategic framework to advance Africa’s AI development. Unlike the EU AI Act, it is non-binding; instead, it serves as a high-level document guiding member states in developing national AI policies consistent with continental goals. It also aims to cultivate capacity building and mobilize investment tailored to Africa’s unique socio-economic landscape.

The strategy rests on five key focus areas:

  • Harnessing AI’s benefits for socio-economic development, agriculture, education, healthcare, climate, and public service.
  • Minimizing risks related to ethical, social, and security concerns, including bias, misinformation, and human rights protections.
  • Building capability through infrastructure, data platforms, AI skills development, research, and innovation.
  • Fostering regional and international cooperation to strengthen Africa’s AI ecosystem and global participation.
  • Stimulating public and private investment in AI initiatives and startups.

Security is central to the risk-minimization focus area. Under this category, the framework treats AI as both an opportunity and a challenge for peace and security governance, designating security as a priority alongside the strategy’s broader focus on AI innovation. The strategy emphasizes the adoption and implementation of technical standards to ensure the safety and security of AI systems across the continent, with the aim of preventing unauthorized access by malign actors such as terrorists. It further encourages member states to address AI’s potential for manipulation in spreading misinformation, fake news, and hate speech, tactics frequently employed by extremist groups to radicalize and incite violence.

In the African context, emerging evidence shows that non-state actors and other groups are adopting AI technologies for propaganda dissemination, video editing, and manipulation of written communications, enhancing both their cyber and physical operational capabilities. Although the sophistication and extent of AI use by these groups remain limited and experimental, the rapid accessibility and low entry barriers of AI tools raise concerns about accelerated exploitation by violent extremist groups. This reality underscores the need for African nations to prioritize AI-ready counterterrorism frameworks.

The strategy also advocates for rigorous assessment of AI safety, particularly the risks tied to emerging technologies such as generative AI and large language models. This includes calls for transparent AI systems and frameworks to mitigate misuse and vulnerabilities. While these transparency principles are strongly endorsed, specific details on how to operationalize them remain underdeveloped within the document. The strategy also acknowledges broader cybersecurity challenges, with calls to strengthen national cybersecurity systems in line with the AU Malabo Convention and other continental frameworks.

Despite the promise of its five focus areas, the strategy has notable limitations with respect to security. First, as noted above, it operates largely as a voluntary guiding framework without binding obligations, meaning that implementation is likely to vary with each country’s political will. Second, it does not provide detailed counterterrorism measures or practical defensive guidelines for integrating AI into national and regional security operations. This gap is particularly concerning because African states face challenges such as limited technical expertise and inadequate funding, which significantly hinder their ability to implement security infrastructure. These challenges are further compounded by the accelerating advent of AI, which brings both new opportunities and heightened risks that the current strategy does not fully address.

Further Reading

Smart Africa’s AI Blueprint

How persuasive is AI-generated propaganda?

Shaping Africa’s AI strategy

Africa’s AI Innovations Database

