Montreal AI Ethics Institute

Democratizing AI ethics literacy


From Dance App to Political Mercenary: How disinformation on TikTok gaslights political tensions in Kenya

June 19, 2022

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Mozilla]


Overview: The upcoming Kenyan elections on August 9th will be one of the most closely watched events in Africa in 2022. Yet disinformation on TikTok is proving far too prominent.


Introduction

The upcoming Kenyan elections on August 9th will be one of the most closely watched events in Africa in 2022, with previous elections having sparked violence and heightened tribal tensions. Consequently, disinformation has been rife on social media platforms, with TikTok, a relative newcomer to the scene, remaining largely unscrutinised. This makes Kenya TikTok’s first real litmus test in Africa, its rise having shaken up a social media landscape previously dominated by Facebook and Twitter. However, the report finds that the platform is failing this test, with disinformation proving vivid and far too compelling.

Key Insights

The report

Mozilla investigated a sample of inappropriate political content taken from TikTok in Kenya: 130 videos from 33 different accounts, amassing over 4 million views. The hashtags #siasa and #siasazakenya (which translate to “politics” and “Kenyan politics,” respectively) have over 20 million views on the platform, showing the sheer amount of potential content to be analysed.

In sum, the “research suggests that Kenyan TikTok has become a breeding ground for propaganda, hate speech, and disinformation about Kenya’s election” (p. 5). To arrive at this conclusion, the content was parsed into two categories: (1) hate speech and incitement against communities; and (2) synthetic and manipulated content (p. 6).

  1. Actors are preying on the fears of many Kenyans of a repeat of the post-election violence that brought life to a standstill in 2007. These messages were explicit, including calls to target specific tribes and eliminate them from Kenya.
  2. Given the vast proliferation of disinformation, Kenyans cannot trust news outlets or social media to spread unbiased information.

Context bias

An explanation for why such content is allowed to persist can be found in context bias. Content moderators were often confronted with content in unfamiliar languages and featuring figures unknown to them. Hence, problematic content, which to the untrained eye may not appear to be an issue, was allowed to fester due to a lack of appropriate contextual knowledge. Furthermore, time and quota pressures mean moderators are not always able to review a video properly, sometimes having to watch videos at high speed instead.

This disinformation-heavy landscape also brings flashbacks to 2017, when both presidential candidates ran smear campaigns over the internet. This is highly worrying for young people who are still forming their political identities yet are heavily engaged with TikTok.

Ways forward

The report lists some steps to combat the issue at hand. These pieces of advice generally encompass the following:

  • Establishing partnerships with local authorities in the country to better understand the gravity of the situation in Kenya and the importance of these elections.
  • Considering turning off group elements of the platform and trending sections such as the For You page, as Facebook and Twitter have decided to do.

Between the lines

From my perspective, this situation is yet another example of the importance of diversity in big tech. Given that content moderators have to evaluate videos in languages they do not speak, there will always be content that slips through. The best way to neutralise this effect is to consult and engage with those who know the Kenyan context. Especially in situations like these, we cannot walk in another’s shoes, meaning our next best move is to involve these perspectives. In other words, instead of working for African nations, we should work with them.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.




About Us


Founded in 2018, the Montreal AI Ethics Institute (MAIEI) is an international non-profit organization equipping citizens concerned about artificial intelligence and its impact on society to take action.

  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.