AI Policy Corner: The Kenya National AI Strategy

June 22, 2025

✍️ By Tasneem Ahmed.

Tasneem is an undergraduate student in Political Science and a Research Assistant at the Governance and Responsible AI Lab (GRAIL), Purdue University.


📌 Editor’s Note: This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance. This piece spotlights Kenya’s National AI Strategy for 2025-2030, focusing on recent comprehensive actions taken within Kenya to develop its AI initiatives.


The Kenya National AI Strategy

Since Mauritius drafted its AI Strategy, multiple African countries have adopted AI policies, ranging from international frameworks such as the African Union Continental AI Strategy to national strategies in Egypt, Rwanda, and, most recently, Kenya.

In March 2025, Kenya released its National AI Strategy for 2025-2030, outlining the country’s goals and potential strategies for creating a global and inclusive AI program. The Strategy organizes these goals around three main pillars and four enablers to guide the creation of policy and regulation.

Pillar One: AI Digital Infrastructure

This pillar focuses on developing accessible and sustainable AI infrastructure by creating advanced connectivity systems, increasing the number of local data centers, such as the East Africa Innovation Lab, and utilizing green energy sources within AI infrastructure. The Strategy notes that it is necessary to work with international partners and research institutions while also building domestic facilities to create a safe system.

Pillar Two: Data

This pillar addresses the need for a data governance framework that ensures transparency, accountability, and security when handling data, especially when disclosing data to stakeholders. It also discusses the creation of AI training datasets and the need for more professionals trained in the collection, annotation, and organization of datasets within Kenya.

Pillar Three: AI R&D and Innovation

Successfully implementing the first and second pillars requires substantial research into solutions for local issues, as well as creating and supporting startups and SMEs and fostering an industry that encourages innovation, growth, and competitiveness. This, in turn, requires establishing research hubs, funding research and collaborations within the industry, and promoting open and accessible research practices.

Enablers

The foundational pillars of the Kenya National AI Strategy are supported by four enablers: mechanisms intended to ensure the pillars' successful execution.

1. Talent Development: Modifying the school curriculum to include instruction in basic AI skills, such as programming, ethics, and computation, and fostering partnerships with international institutions will increase the number of experts who understand AI policy and data science.

2. Governance: A comprehensive AI framework addressing legal and regulatory strategies, incentives for compliance, and oversight methods is necessary to create an industry standard for Kenya that focuses on mitigating AI risks and ethical violations.

3. Investments: Involving both the private and public sectors, through government research, support for local businesses, and incentives for private capital, is vital for producing efficient innovations and solutions.

4. Ethics, Equity, and Inclusion: Advocating for ethical, responsible, and inclusive AI practices, such as AI literacy and participation from all groups, supports Kenya’s foundational mission of improving the socioeconomic status of its citizens.

The pillars and enablers are designed to guide future policy development in Kenya. Since releasing the Strategy, Kenya has utilized it to integrate AI within multiple sectors of its society.

Recent AI Developments in Kenya

  • KICTANet has collaborated with MindHYVE.ai and DV8 Infosystems to further develop Kenya’s National AI Strategy.
  • African countries, such as Kenya, have developed the Artificial Intelligence Hub for Sustainable Development with Italy and the United Nations Development Programme.
  • Simba AI has collaborated with Cassava Technologies and NVIDIA to establish an AI factory in Kenya, introducing a chatbot that processes and supports underrepresented languages.

Further Reading

  1. Leveraging AI and emerging technologies to unlock Africa’s potential
  2. African Countries Are Racing to Create AI Strategies 

