
Nonhuman humanitarianism: when ‘AI for good’ can be harmful

June 19, 2022

🔬 Research Summary by Shreyasha Paudel, an independent researcher with expertise in uncertainty in computer vision. She will begin her PhD in Human-Computer Interaction in Fall 2022, studying the ethical and social impacts of automation, digitization, and data in developing countries.

[Original paper by Mirca Madianou]


Overview: This paper analyzes the use of chatbots in humanitarian applications to critically examine the assumptions behind “AI for good” initiatives. Through a mixed-method study, the author observes that current humanitarian chatbots are quite limited in their ability to hold a conversation and often do not deliver their stated benefits. On the other hand, they carry risks of real harm through potential data breaches, exclusion of the most marginalized, and the removal of human connection. The article concludes that ‘AI for good’ initiatives often benefit tech companies and humanitarian organizations at the expense of vulnerable populations, and thus rework the colonial legacies of humanitarianism while occluding the power dynamics at play.


Introduction

“AI for Good” initiatives often claim to leverage the power of artificial intelligence to solve global problems. Many international humanitarian agencies have devoted large amounts of funding and other resources to these initiatives, often in partnership with private tech companies. This study questions the assumption that the use of AI for humanitarian goals is always “good” and examines the purpose, advantages, limitations, risks, and actual beneficiaries of such applications.

For this study, the author focused on the use of chatbots in humanitarian applications such as information dissemination, data collection, and feedback collection. She conducted a mixed-method study combining interviews, participant observation, and digital ethnography. The interviews were conducted with seven groups of stakeholders – entrepreneurs, donors, humanitarian workers, digital developers, government representatives, business representatives, and volunteers. Participant observation was carried out at hackathons and industry events. In addition, the author conducted digital ethnography by consulting publicly available documents and drawing on her own interactions with these chatbots.

The author found that despite their promise of a ‘more natural conversation’, the chatbots often did not understand complex sentences, lacked up-to-date information and cultural sensitivity, and excluded some groups of people. The author also identified power asymmetries between the developers and promoters of these algorithms and the communities they intend to serve. She claims that chatbots often benefit tech companies and humanitarian organizations from the Global North at the expense of vulnerable communities in the Global South and thus recreate colonial legacies.

Key Insights

Chatbots in Humanitarian Applications: their promise, usage, and limitations

Humanitarian organizations claim to use chatbots for information dissemination, communication with communities, and accountability. In this study, the author looked at the psychotherapy bot ‘Karim’, the information app for refugees ‘Refugee Text’, the World Food Programme (WFP)’s CHITCHAT developed for refugee camps in Western Kenya, WFP’s Agrochatea developed for farmers in Peru, and several others like them.

Through her study, the author also identifies reducing cost and increasing efficiency as an unstated motive for using chatbots. While many of these chatbots claim to harness AI to hold more natural-sounding conversations, the author finds that they are still quite limited in their ability to understand complex phrases. She showed that the chatbots reliably answered specific predetermined questions or phrases but struggled when questions consisted of longer sentences or contained multiple, complex ideas. She claims that, for information dissemination or communication, the chatbots may be functionally equivalent to simple surveys or FAQ pages. The chatbots also privileged short texts and phrases over longer explanations and questions, which reduces the possibility of meaningful conversation. The author further points out that not all chatbots were designed for specific cultural contexts or with feedback from communities. As a result, they did not resonate with the intended communities.

Drawing on findings from her previous projects, the author also situates these chatbots within the broader trend of digitized, efficiency-driven feedback projects in humanitarian work. She asks whether this trend, combined with the chatbots’ limitations, has reduced conversation to simplistic questions and answers, and accountability to a box-ticking exercise. In the long run, this may create distance between affected communities and aid workers and has the potential to dehumanize interactions in the humanitarian context while claiming to be objective and scientific.

Risks: Can “AI for Good” be harmful?

The author identifies various risks and potential for harm in the current approach to chatbots in humanitarian applications. The first concerns data safeguards. Many chatbots are currently released to communities via existing messaging platforms such as Facebook Messenger, WhatsApp, and Telegram, often without a formal agreement with the parent tech company. As a result, there are no explicit protocols or accountability mechanisms to secure user data and prevent data breaches during use of these applications. Sometimes humanitarian organizations build their own websites for better data protection, but this creates an accessibility issue, especially for less digitally literate populations.

Another risk the author identifies is misinformation. Chatbots are most commonly deployed for information dissemination, yet there have been examples where they shared incorrect or out-of-date information. In humanitarian applications, such misinformation can directly lead to harm. Chatbots are often deployed to increase efficiency and are accompanied by a reduction in human staffing; in such a context, it may become increasingly difficult to detect and quantify the impact of such misinformation.

Lastly, the author discusses the distancing created by automation as a harm. Chatbots can be frustrating when they keep restarting the conversation because they do not understand the question. This problem is compounded in humanitarian aid, where situations are complex and lack predetermined solutions. Because these chatbots are usually developed outside the cultural contexts of the affected communities, they also risk further removing the voice of these vulnerable communities and making them more invisible.

Who benefits? Recreating coloniality

The author connects the above limitations and potential harms to existing power imbalances in the technology and humanitarian sectors. These “AI for good” initiatives are often conceived, developed, or funded by organizations from the Global North. As a result, they encode “western biases”, as evidenced by most applications being in English or lacking cultural nuance about the communities where they will be used. The author compares this to the way notions of trust, accountability, and participation promoted by humanitarian organizations are themselves shaped by western values. In their current form, chatbots exist as a notional accountability and participation mechanism and inherit the hierarchies and power dynamics embedded in existing humanitarian practices.

The author also points out that the main benefit of deploying these chatbots often goes to tech companies through the hype and publicity generated. These applications are used as testbeds by the developing companies to tune consumer-facing products that will later be released to audiences in the Global North at a steep price. The author cites the example of the psychotherapy chatbot Karim, released to Syrian refugee camps. The company that developed Karim also released a similar app, ‘Tess’, for the US market. However, while Karim was deployed as a replacement for therapists, Tess was marketed as a supplement to human therapists and its use was accompanied by the availability of licensed psychotherapists. In this way, “AI for good” initiatives allow private tech companies to experiment with untested technologies and extract data, time, and knowledge from vulnerable communities to generate private profits. The author compares this to the history of scientific experiments in former colonies.

The author concludes by warning against the ‘enchantment of technology’: by providing visibility and legitimacy to chatbots developed by for-profit companies, we create a feedback cycle in which the needs of communities become more hidden while the technology’s limitations are glossed over.

Between the lines

This is a much-needed article that examines the assumptions and unequal power relations behind “AI for good” innovations being promoted in the Global South. As a researcher who was trained in North America and is currently based in Nepal, I find that the claims in the paper reflect many of my own experiences. I hope that the issues highlighted in this paper will motivate both AI researchers and humanitarian practitioners to think about power, use cases, and tradeoffs as they develop and push for AI-based solutions.

