Montreal AI Ethics Institute

Democratizing AI ethics literacy


ChatGPT and the media in the Global South: How journalists in sub-Saharan Africa are engaging with chatbots built on non-representative corpora

December 14, 2023

🔬 Research Summary by Gregory Gondwe, an Assistant Professor of Journalism at California State University, San Bernardino, and a Faculty Associate at Harvard's Berkman Klein Center.

[Original paper by Gregory Gondwe]


Overview: This study investigated ChatGPT usage among journalists in sub-Saharan Africa and its implications for misinformation, plagiarism, and stereotypes. The research highlighted the challenges posed by ChatGPT’s reliance on limited databases that are non-representative of African contexts, particularly its limitations in handling local languages and code-switching. Despite these challenges, ChatGPT offers opportunities for effective journalism practice in the region.


Introduction

The increasing reliance on generative AI tools and the aspiration for a connected world have sparked debates about the Global South’s ability to effectively engage with new media technologies. Some scholars highlight the Global South’s lack of resources and technological skills as a barrier to optimal AI utilization. In contrast, others argue that the “Global Village” concept suggests active participation in these debates, emphasizing interconnectedness beyond cultural and geographical boundaries.

Skeptics challenge this vision, viewing the networked world as a manifestation of capitalism that exploits the Global South’s data. Concerns about technology’s potential exclusion of marginalized communities in system design and data biases are raised. The study aims to investigate the integration of generative AI, specifically ChatGPT, in the practices of journalists in five sub-Saharan African countries. Through interviews, the research explores the challenges and potential benefits of ChatGPT use in contexts with underrepresented databases, examining the Global South’s involvement in generative AI, the representation of the Global South corpus within these tools, and potential concerns among journalists regarding their utilization of generative AI tools in their work.

Key Insights

Online Databases Perception

The study revealed contrasting views among journalists in sub-Saharan Africa regarding the nature of online databases. While some believed in their existence, citing increased social media presence and data availability for crowd coding and crowdsourcing, others perceived them to be almost non-existent, expressing concerns about the reliability and representativeness of online content. Journalists emphasized the need for clear and relevant information from reliable sources, often disregarding content in local languages or code-switching.

Challenges with Internet Connectivity

Journalists in sub-Saharan Africa faced significant challenges with unreliable internet connections and slow download speeds. This posed obstacles in their utilization of generative AI tools like ChatGPT. Despite the availability of alternative solutions, such as live coverage through Facebook or WhatsApp, the study highlighted the need for better internet infrastructure to support seamless journalistic practices in the region.

Awareness of ChatGPT Inaccuracies

Initially, journalists exhibited limited awareness of ChatGPT’s inaccuracies and were impressed by its ability to organize thoughts and provide basic information. However, as they gained more experience with the tool, they became cautious about its accuracy and reliability. Careless errors raised concerns about the trustworthiness of ChatGPT in providing accurate information, prompting journalists to question its reliability for professional use.

Perpetuation of Stereotypes

The study shed light on ChatGPT’s perpetuation of stereotypes about Africa, including poverty, corruption, and gender issues. Journalists expressed caution and concern about how the tool portrayed African countries and leaders. Additionally, ChatGPT was perceived to be biased in favor of Western narratives, raising questions about AI’s potential to reinforce existing stereotypes. This finding emphasized the importance of critically examining and decolonizing AI tools to ensure fair and unbiased representations in journalism.

Between the lines

Journalists in sub-Saharan Africa had varying perceptions of online databases: some considered them valuable for crowd coding and crowdsourcing, while others viewed them as unreliable. Connectivity challenges were common, and some journalists were unaware of initiatives like Facebook’s Free Basics and zero-rating services. Initially, journalists found ChatGPT impressive but grew cautious about its accuracy over time, especially regarding misinformation. They deemed ChatGPT unsuitable for writing complete stories, and Google remained their preferred source for information; ChatGPT was instead used to organize thoughts. The tool was criticized for perpetuating stereotypes about Africa, raising concerns about bias and its alignment with Western narratives.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

