Montreal AI Ethics Institute

Democratizing AI ethics literacy

From Dance App to Political Mercenary: How disinformation on TikTok gaslights political tensions in Kenya

June 19, 2022

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Mozilla]


Overview: The upcoming Kenyan elections on August 9th will be one of the most closely watched events of 2022 in the African context. Yet disinformation on TikTok's platform is proving far too prominent.


Introduction

The upcoming Kenyan elections on August 9th will be one of the most closely watched events of 2022 in the African context, with previous elections having sparked violence and heightened tribal tensions. Consequently, disinformation has been rife on social media platforms, and TikTok, having only recently entered the scene, has gone largely unscrutinised. This makes the election TikTok's first real litmus test in Africa, with its rise having shaken up a Kenyan social media landscape previously dominated by Facebook and Twitter. However, the report finds that it is failing this test, with disinformation proving vivid and far too compelling.

Key Insights

The report

Mozilla investigated a sample of inappropriate political content taken from TikTok in Kenya: 130 videos from 33 different accounts, amassing over 4 million views. The hashtags #siasa and #siasazakenya (which translate to “politics” and “Kenyan politics,” respectively) have over 20 million views on the platform, showing the sheer volume of content available for analysis.

In sum, the “research suggests that Kenyan TikTok has become a breeding ground for propaganda, hate speech, and disinformation about Kenya’s election” (p. 5). To arrive at this conclusion, the content was divided into two categories: (1) hate speech and incitement against communities; and (2) synthetic and manipulated content (p. 6).

  1. Actors are preying on the fears many Kenyans hold of post-election violence, which brought life to a standstill in 2007. These messages were explicit, including calls to target specific tribes and eliminate them from Kenya.
  2. Given the vast proliferation of disinformation, Kenyans cannot trust news outlets or social media to spread unbiased information.

Context bias

One explanation for why such content is allowed to persist is context bias. Content moderators were often confronted with material in languages they did not speak and featuring figures unknown to them. Problematic content was thus allowed to fester due to a lack of appropriate contextual knowledge: to the untrained eye, it may not appear to be an issue at all. Furthermore, time and quota pressures mean moderators are not always able to review a video properly, sometimes having to watch videos at high speed instead.

This disinformation-heavy landscape also evokes flashbacks to 2017, when both presidential candidates ran smear campaigns against each other over the internet. This is especially worrying for young people who are still forming their political identities yet are heavily engaged with TikTok.

Ways forward

The report lists steps that can be taken to combat the issue at hand. Its recommendations generally encompass the following:

  • Establishing partnerships with local authorities in the country to better understand the gravity of the situation in Kenya and the importance of these elections.
  • Considering turning off group elements of the platform and trending sections such as the For You Page, as Facebook and Twitter have decided to do.

Between the lines

From my perspective, this situation is yet another example of the importance of diversity in big tech. As long as content moderators have to evaluate videos in languages they do not speak, some content will always slip through. The best way to neutralise this effect is to consult and engage with those who know the Kenyan context. In situations like these, we cannot walk in another’s shoes, so our next best move is to involve those perspectives directly. In other words, instead of working for African nations, we should work with them.

