Montreal AI Ethics Institute

From Dance App to Political Mercenary: How disinformation on TikTok gaslights political tensions in Kenya

June 19, 2022

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Mozilla]


Overview: The upcoming Kenyan elections on August 9th will be one of the most closely watched events of 2022 in Africa. Yet, disinformation on TikTok is proving far too prominent in the run-up to the vote.


Introduction

The upcoming Kenyan elections on August 9th will be one of the most closely watched events of 2022 in Africa, with previous elections having caused violence and heightened tribal tensions. Consequently, disinformation has been rife on social media platforms, with TikTok, a relative newcomer to the scene, remaining largely unscrutinised. This makes the election TikTok's first real litmus test in Africa, with its rise having shaken up a Kenyan social media landscape previously dominated by Facebook and Twitter. However, the report finds that it is failing this test, with disinformation on the platform proving vivid and far too compelling.

Key Insights

The report

Mozilla investigated a sample of inappropriate political content taken from TikTok in Kenya: 130 videos from 33 different accounts, amassing over 4 million views. The hashtags #siasa and #siasazakenya (which translate to “politics” and “Kenyan politics,” respectively) have over 20 million views on the platform, showing the sheer volume of potential content to be analysed.

In sum, the “research suggests that Kenyan TikTok has become a breeding ground for propaganda, hate speech, and disinformation about Kenya’s election” (p. 5). To arrive at this conclusion, the content was sorted into two categories: (1) hate speech and incitement against communities; and (2) synthetic and manipulated content (p. 6). Two findings stand out:

  1. Actors are preying on many Kenyans’ fears of post-election violence, which brought life to a standstill in 2007. These messages were explicit, including calls to target specific tribes and eliminate them from Kenya.
  2. Given the vast proliferation of disinformation, Kenyans cannot trust news outlets or social media to spread unbiased information.

Context bias

One explanation for why such content is allowed to persist is context bias. Content moderators were often confronted with content in unfamiliar languages and featuring figures unknown to them. Hence, problematic content that, to the untrained eye, may not appear to be an issue was allowed to fester due to this lack of contextual knowledge. Furthermore, time and quota pressures mean content moderators are not always able to review a video properly, sometimes having to watch videos at high speed instead.

This disinformation-heavy landscape also evokes flashbacks to 2017, when both presidential candidates ran smear campaigns online. This is highly worrying for young people who are still forming their political identities yet are heavily engaged with TikTok.

Ways forward

The report lists some steps to combat the issue at hand, which generally encompass the following:

  • Establishing partnerships with local authorities in the country to better understand the gravity of the situation in Kenya and the importance of these elections.
  • Considering turning off group elements of the platform and trending sections such as the For You Page, as Facebook and Twitter have decided to do.

Between the lines

From my perspective, this situation is yet another example of the importance of diversity in big tech. Given that content moderators have to evaluate videos in languages they do not speak, there will always be content that slips through. The best way to neutralise this effect is to consult and engage with those who know the Kenyan context. Especially in situations like these, we cannot walk in another’s shoes, so our next best move is to involve those perspectives. In other words, instead of working for African nations, we should work with them.


