
ChatGPT and the media in the Global South: How non-representative corpora in sub-Saharan Africa are engaging with the chatbots

December 14, 2023

🔬 Research Summary by Gregory Gondwe, an Assistant Professor of Journalism at California State University, San Bernardino, and a Harvard Faculty Associate at the Berkman Klein Center.

[Original paper by Gregory Gondwe]


Overview: This study investigated ChatGPT usage among journalists in sub-Saharan Africa and its implications for misinformation, plagiarism, and stereotypes. The research highlighted the challenges posed by ChatGPT’s reliance on limited and non-representative databases for African contexts, particularly limitations related to language coverage and code-switching. Despite these challenges, ChatGPT offers opportunities for effective journalism practice in the region.


Introduction

The increasing reliance on generative AI tools and the aspiration for a connected world have sparked debates about the Global South’s ability to effectively engage with new media technologies. Some scholars highlight the Global South’s lack of resources and technological skills as a barrier to optimal AI utilization. In contrast, others argue that the “Global Village” concept suggests active participation in these debates, emphasizing interconnectedness beyond cultural and geographical boundaries.

Skeptics challenge this vision, viewing the networked world as a manifestation of capitalism that exploits the Global South’s data; they raise concerns that marginalized communities are excluded from system design and underrepresented in the data these systems learn from. Against this backdrop, the study investigates the integration of generative AI, specifically ChatGPT, into the practices of journalists in five sub-Saharan African countries. Through interviews, the research explores the challenges and potential benefits of using ChatGPT in contexts with underrepresented databases, examining the Global South’s involvement in generative AI, how the Global South’s corpus is represented within these tools, and journalists’ concerns about using generative AI tools in their work.

Key Insights

Online Databases Perception

The study revealed contrasting views among journalists in sub-Saharan Africa regarding the nature of online databases. Some believed such databases exist, citing a growing social media presence and data available for crowd coding and crowdsourcing; others considered them almost non-existent, expressing concerns about the reliability and representativeness of online content. Journalists emphasized the need for clear, relevant information from reliable sources, and they often disregarded content written in local languages or in code-switched form.

Challenges with Internet Connectivity

Journalists in sub-Saharan Africa faced significant challenges with unreliable internet connections and slow download speeds, which hindered their use of generative AI tools like ChatGPT. Although alternative solutions exist, such as live coverage through Facebook or WhatsApp, the study highlighted the need for better internet infrastructure to support seamless journalistic practice in the region.

Awareness of ChatGPT Inaccuracies

Initially, journalists showed limited awareness of ChatGPT’s inaccuracies and were impressed by its ability to organize thoughts and provide basic information. As they gained more experience with the tool, however, they grew cautious: careless factual errors in its output raised concerns about its trustworthiness and prompted journalists to question its reliability for professional use.

Perpetuation of Stereotypes

The study shed light on ChatGPT’s perpetuation of stereotypes about Africa, including poverty, corruption, and gender issues. Journalists expressed caution and concern about how the tool portrayed African countries and leaders. Additionally, ChatGPT was perceived to be biased in favor of Western narratives, raising questions about AI’s potential to reinforce existing stereotypes. This finding emphasized the importance of critically examining and decolonizing AI tools to ensure fair and unbiased representations in journalism.

Between the lines

Journalists in sub-Saharan Africa had varying perceptions of online databases; some considered them valuable for crowd coding and crowdsourcing, while others viewed them as unreliable. Connectivity challenges were common, and some journalists were unaware of initiatives such as Facebook Basics and zero-rating services. Initially, journalists found ChatGPT impressive but grew cautious about its accuracy over time, especially regarding misinformation. ChatGPT was deemed unsuitable for writing complete stories, though journalists did use it to organize their thoughts, and Google remained the preferred source of information. The tool was also criticized for perpetuating stereotypes about Africa, raising concerns about bias and its alignment with Western narratives.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
