
From Dance App to Political Mercenary: How disinformation on TikTok gaslights political tensions in Kenya

June 19, 2022

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Mozilla]


Overview: The upcoming Kenyan elections on August 9th will be one of the most significant political events in Africa in 2022. Yet disinformation is playing a far too prominent role on TikTok's platform in the run-up to the vote.


Introduction

The upcoming Kenyan elections on August 9th will be one of the most significant political events in Africa in 2022, with previous elections having triggered violence and heightened tribal tensions. Consequently, disinformation has been rife on social media platforms, and TikTok, having only recently entered the scene, has gone largely unscrutinised. This makes the election TikTok's first real litmus test in Africa, its rise having shaken up a Kenyan social media landscape previously dominated by Facebook and Twitter. However, the report finds that it is failing this test, with disinformation proving vivid and far too compelling.

Key Insights

The report

Mozilla investigated a sample of inappropriate political content taken from TikTok in Kenya: 130 videos across 33 different accounts, amassing over 4 million views. The hashtags #siasa and #siasazakenya (which translate to “politics” and “Kenyan politics,” respectively) have over 20 million views on the platform, showing the sheer volume of potential content to be analysed.

In sum, the “research suggests that Kenyan TikTok has become a breeding ground for propaganda, hate speech, and disinformation about Kenya’s election” (p. 5). To arrive at this conclusion, the content was parsed into two different categories: (1) hate speech and incitement against communities; and (2) synthetic and manipulated content (p. 6).

  1. Actors are preying on the fears of many Kenyans of a repeat of the post-election violence that brought life to a standstill in 2007. These messages were explicit, including calls to target specific tribes and eliminate them from Kenya.
  2. Given the vast proliferation of disinformation, Kenyans cannot trust news outlets or social media to spread unbiased information.

Context bias

An explanation as to why such content is allowed to persist can be found in context bias. Content moderators were often confronted with material in unfamiliar languages featuring figures unknown to them. As a result, problematic content, which to the untrained eye may not appear to be an issue, was allowed to fester due to a lack of appropriate contextual knowledge. Furthermore, time and quota pressures mean moderators aren’t always able to review a video properly, having to watch some videos at high speed instead.

This disinformation-heavy landscape also brings flashbacks to 2017, when smear campaigns by both presidential candidates played out over the internet. This is highly worrying for young people who are still forming their political identities yet are heavily engaged with TikTok.

Ways forward

The report lists some steps to combat the issue at hand. Its advice generally encompasses the following:

  • Establishing partnerships with local authorities in the country to better understand the gravity of the situation in Kenya and the importance of these elections.
  • Considering turning off group features of the platform and trending sections such as the For You Page, as Facebook and Twitter have decided to do.

Between the lines

From my perspective, this situation is yet another example of the importance of diversity in big tech. Given that content moderators have to evaluate videos in languages they do not speak, there will always be content that slips through. The best way to neutralise this effect is to consult and engage with those who know the Kenyan context. Especially in situations like these, we cannot walk in another’s shoes, meaning our next best move is to involve these perspectives. In other words, instead of working for African nations, we should work with them.

