Montreal AI Ethics Institute

Democratizing AI ethics literacy

Nonhuman humanitarianism: when ‘AI for good’ can be harmful

June 19, 2022

🔬 Research Summary by Shreyasha Paudel, an independent researcher with expertise in uncertainty in computer vision. She begins her PhD in Human-Computer Interaction in Fall 2022, studying the ethical and social impacts of automation, digitization, and data in developing countries.

[Original paper by Mirca Madianou]


Overview: This paper analyzes the use of chatbots in humanitarian applications to critically examine the assumptions behind “AI for good” initiatives. Through a mixed-method study, the author observes that current implementations of humanitarian chatbots are quite limited in their ability to hold conversations and often fail to deliver their stated benefits. At the same time, they carry risks of real harm through potential data breaches, exclusion of the most marginalized, and removal of human connection. The article concludes that ‘AI for good’ initiatives often benefit tech companies and humanitarian organizations at the expense of vulnerable populations, and thus rework the colonial legacies of humanitarianism while occluding the power dynamics at play.


Introduction

“AI for Good” initiatives often claim to leverage the power of artificial intelligence to solve global problems. Many international humanitarian agencies have devoted substantial funding and other resources to these initiatives, often in partnership with private tech companies. This study questions the assumption that the use of AI for humanitarian goals is always “good” and examines the purpose, advantages, limitations, risks, and actual beneficiaries of such applications.

For this study, the author focused on the use of chatbots in humanitarian applications such as information dissemination, data collection, and feedback collection. She conducted a mixed-method study combining interviews, participant observation, and digital ethnography. The interviews were conducted with seven groups of stakeholders – entrepreneurs, donors, humanitarian workers, digital developers, government representatives, business representatives, and volunteers. Participant observation took place at hackathons and industry events. In addition, the author conducted digital ethnography by consulting publicly available documents and drawing on her own interactions with these chatbots.

The author found that, despite their promise of ‘more natural conversation’, the chatbots often did not understand complex sentences, lacked up-to-date information and cultural sensitivity, and excluded some groups of people. The author also identified power asymmetries between the developers and promoters of these systems and the communities they intend to serve. She argues that chatbots often benefit tech companies and humanitarian organizations from the Global North at the expense of vulnerable communities in the Global South, and thus recreate colonial legacies.

Key Insights

Chatbots in Humanitarian Applications: their promise, usage, and limitations

Humanitarian organizations claim to use chatbots for information dissemination, communication with communities, and accountability. In this study, the author looked at the psychotherapy bot ‘Karim’, the information app for refugees ‘Refugee Text’, the World Food Programme (WFP)’s CHITCHAT developed for refugee camps in Western Kenya, WFP’s Agrochatea developed for farmers in Peru, and many others like them.

Through her study, the author also identifies reducing cost and increasing efficiency as unstated motives for using chatbots. While many of these chatbots claim to harness AI for more natural-sounding conversations, the author finds that they are still quite limited in their ability to understand complex phrases. The chatbots reliably answered specific predetermined questions or phrases but struggled when questions consisted of longer sentences or contained multiple, complex ideas. The author argues that, for information dissemination or communication, the chatbots may be functionally equivalent to simple surveys or FAQ pages. The chatbots also privileged short texts and phrases over longer explanations and questions, which reduces the possibility of meaningful conversation. The author further points out that not all chatbots were designed for specific cultural contexts or with feedback from communities; as a result, they did not resonate with the intended communities. A minimal illustration of this functional gap follows below.
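
To make the “functionally equivalent to an FAQ page” claim concrete, here is a minimal, hypothetical sketch in Python. It is not drawn from the paper or from any of the chatbots studied; it simply shows why a keyword-matching responder handles predetermined phrases reliably but falls back to a restart prompt when a question is rephrased or combines several ideas, which is roughly the limitation the author describes.

# Hypothetical illustration only: a minimal keyword-matching "FAQ" responder.
# It is not the implementation of any chatbot discussed in the paper; it just
# shows why such systems answer predetermined phrases well and struggle with
# compound or rephrased questions.

FAQ_ANSWERS = {
    "registration": "Registration opens Monday at the main office.",
    "food distribution": "Food distribution takes place every Friday.",
    "medical clinic": "The clinic is open on weekdays from 9 am to 4 pm.",
}

def reply(message: str) -> str:
    """Return the single FAQ answer whose keyword appears in the message."""
    text = message.lower()
    matches = [answer for keyword, answer in FAQ_ANSWERS.items() if keyword in text]
    if len(matches) == 1:
        return matches[0]
    # Rephrased, compound, or unrecognized questions all fall through to a
    # generic restart prompt -- the frustrating loop the author describes.
    return "Sorry, I didn't understand that. Please ask one short question."

print(reply("When is food distribution?"))                         # matches a predetermined phrase
print(reply("Where do I register, and when is food handed out?"))  # falls back to the restart prompt

In practice, such a responder is an FAQ page behind a chat interface, which is exactly the equivalence the author draws.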

Using findings from her previous projects, the author also situates these chatbots within a broader trend of increasingly digitized, efficiency-driven feedback projects in humanitarian work. She asks whether this trend, combined with the chatbots’ limitations, has reduced conversation to simplistic questions and answers, and accountability to a box-ticking exercise. In the long run, this may create distance between affected communities and aid workers and risks dehumanizing interactions in the humanitarian context while claiming to be objective and scientific.

Risks: Can “AI for Good” be harmful?

The author identifies various risks and potential harms in the current approach to chatbots in humanitarian applications. The first concerns data safeguards. Many chatbots are released to communities via existing messaging platforms such as Facebook Messenger, WhatsApp, and Telegram, often without a formal agreement with the parent tech company. As a result, there are no explicit protocols or accountability mechanisms to secure data and prevent breaches during the use of these applications. Sometimes humanitarian organizations build their own websites for better data protection, but this creates an accessibility issue, especially for people with lower digital literacy.

Another risk the author identifies is misinformation. Chatbots are most commonly deployed for information dissemination, yet there have been cases where they shared incorrect or out-of-date information. In humanitarian applications, such misinformation can directly lead to harm. Because chatbots are often deployed to increase efficiency and are accompanied by a reduction in human resources, it may become increasingly difficult to detect and quantify the impact of such misinformation.

Lastly, the author discusses the distancing created by automation as a harm. Chatbots can be frustrating when they keep restarting the conversation because they do not understand the question. This problem is compounded in humanitarian aid, where situations are complex and lack predetermined solutions. Because these chatbots are usually developed outside the cultural contexts of the affected communities, they also risk further removing the voice of these vulnerable communities and making them more invisible.

Who benefits? Recreating coloniality

The author connects these limitations and potential harms to existing power imbalances in the technology and humanitarian sectors. These “AI for good” initiatives are often conceived, developed, or funded by organizations from the Global North. As a result, they encode “western biases”, as evidenced by most applications being in English or lacking cultural nuance about the communities where they will be used. The author notes that the notions of trust, accountability, and participation promoted by humanitarian organizations are themselves shaped by western values. Thus, in their current form, chatbots serve as a notional accountability and participation mechanism and inherit the hierarchies and power dynamics embedded in existing humanitarian practices.

The author also points out that the main benefit of deploying these chatbots often goes to tech companies, through the hype and publicity they generate. These applications serve as testbeds for the developing companies to tune consumer-facing products that will later be released to audiences in the Global North at a steep price. The author cites the example of the psychotherapy chatbot Karim, released to Syrian refugee camps. The company that developed Karim also released a similar app, Tess, for the US market. However, while Karim was deployed as a replacement for therapists, Tess was marketed as a supplement to human therapists, and its use was accompanied by access to licensed psychotherapists. In this way, “AI for good” initiatives allow private tech companies to experiment with untested technologies and extract data, time, and knowledge from vulnerable communities to generate private profit. The author compares this to the history of scientific experiments in former colonies.

The author concludes by warning against the ‘enchantment of technology’: by providing visibility and legitimacy to chatbots developed by for-profit companies, we create a feedback cycle in which the needs of communities become more hidden while the limitations of the technology are glossed over.

Between the lines

This is a much-needed article that examines the assumptions and unequal power relations behind “AI for good” innovations being promoted in the Global South. As a researcher who was trained in North America and is currently based in Nepal, I find that the claims in the paper reflect many of my own experiences. I hope the issues highlighted here will motivate both AI researchers and humanitarian practitioners to think about power, use cases, and tradeoffs as they develop and push for AI-based solutions.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
