Research Summary by Shreyasha Paudel, an independent researcher with expertise in uncertainty in computer vision. She will begin her PhD in Human-Computer Interaction in Fall 2022, studying the ethical and social impacts of automation, digitization, and data in developing countries.
[Original paper by Mirca Madianou]
Overview: This paper analyzes the use of chatbots in humanitarian applications to critically examine the assumptions behind "AI for good" initiatives. Through a mixed-method study, the author observes that current humanitarian chatbots are quite limited in their ability to hold conversations and often do not deliver on their stated benefits. At the same time, they carry risks of real harm through potential data breaches, exclusion of the most marginalized, and the removal of human connection. The article concludes that "AI for good" initiatives often benefit tech companies and humanitarian organizations at the expense of vulnerable populations, and thus rework the colonial legacies of humanitarianism while occluding the power dynamics at play.
Introduction
"AI for Good" initiatives often claim to leverage the power of artificial intelligence to solve global problems. Many international humanitarian agencies have spent large amounts of funding and other resources on these initiatives, often in partnership with private tech companies. This study questions the assumption that the use of AI for humanitarian goals is always "good" and examines the purpose, advantages, limitations, risks, and actual beneficiaries of such applications.
For this study, the author focused on the use of chatbots in various humanitarian applications such as information dissemination, data collection, and feedback collection. She conducted a mixed-method study combining interviews, participant observation, and digital ethnography. The interviews were conducted with seven groups of stakeholders: entrepreneurs, donors, humanitarian workers, digital developers, government representatives, business representatives, and volunteers. Participant observation was done at hackathons and industry events. In addition, the author conducted digital ethnography by consulting publicly available documents and drawing on her own interactions with these chatbots.
The author found that despite their promise of a "more natural conversation", the chatbots often did not understand complex sentences, lacked up-to-date information and cultural sensitivity, and excluded some groups of people. She also identified power asymmetries between the developers and promoters of these systems and the communities they intend to serve. The author argues that chatbots often benefit tech companies and humanitarian organizations from the Global North at the expense of vulnerable communities in the Global South, and thus recreate colonial legacies.
Key Insights
Chatbots in Humanitarian Applications: their promise, usage, and limitations
Humanitarian organizations claim to use chatbots for information dissemination, communication with communities, and accountability. In this study, the author looked at the psychotherapy bot "Karim", the information app for refugees "Refugee Text", the World Food Programme (WFP)'s CHITCHAT developed for refugee camps in Western Kenya, WFP's Agrochatea developed for farmers in Peru, and many others like them.
Through her study, the author also identifies reducing cost and increasing efficiency as an unstated motive for using chatbots. While many of these chatbots claim to harness AI to hold more natural-sounding conversations, the author finds that they are still quite limited in their ability to understand complex phrases. The chatbots reliably answered specific, predetermined questions or phrases but struggled when questions ran to longer sentences or contained multiple, complex ideas. The author argues that, for information dissemination or communication, the chatbots may be functionally equivalent to simple surveys or FAQ pages. The chatbots also privileged short texts and phrases over longer explanations and questions, which reduces the possibility of meaningful conversation (see the illustrative sketch below). The author further points out that not all chatbots were designed for specific cultural contexts or with feedback from communities. As a result, they did not resonate with the intended communities.
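The paper does not describe how these chatbots are implemented. As a purely illustrative sketch, assuming a simple keyword-matching design (the keywords, answers, and matching rule below are invented for illustration, not taken from the paper), the following minimal FAQ bot shows why such systems handle short, predetermined phrases reliably but fall back to a restart prompt when a message contains longer or compound questions:

```python
import re

# Hypothetical FAQ table: keyword tuples mapped to canned answers.
# None of these entries come from the paper; they only illustrate the pattern.
FAQ = {
    ("register", "registration"): "Registration desks are open Monday to Friday.",
    ("food", "ration"): "Food distributions take place every second Tuesday.",
    ("document", "id"): "Bring any identity document you have to the help desk.",
}

FALLBACK = "Sorry, I didn't understand that. Please ask a short question."

def reply(message: str) -> str:
    """Return a canned answer if exactly one FAQ entry matches, else restart."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    matches = [answer for keywords, answer in FAQ.items()
               if any(k in words for k in keywords)]
    # A single keyword hit returns a canned answer; zero hits, or several hits
    # from a compound question, force the generic fallback ("restarting" the chat).
    return matches[0] if len(matches) == 1 else FALLBACK

print(reply("Where do I register?"))                   # one match: canned answer
print(reply("I lost my ID and missed the food "
            "distribution, can I still register?"))    # compound question: fallback
```

Anything beyond this kind of lookup, such as paraphrases, follow-up questions, or context carried across turns, falls outside what such a bot can represent, which is consistent with the author's observation that users encounter restarted conversations rather than dialogue.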
Using findings from her previous projects, the author also situates these chatbots within the broader trend of digitized, efficiency-driven feedback projects in humanitarian work. She asks whether this trend, combined with the chatbots' limitations, has reduced conversation to simplistic questions and answers, and accountability to a box-ticking exercise. In the long run, this may create distance between affected communities and aid workers and has the potential to dehumanize interactions in the humanitarian context while claiming to be objective and scientific.
Risks: Can "AI for Good" be harmful?
The author identifies various risks and potential for harm in the current approach to chatbots in humanitarian applications. The first concerns data safeguards. Many chatbots are released to communities via existing messaging platforms such as Facebook Messenger, WhatsApp, and Telegram, often without a formal agreement with the parent tech company. As a result, there are no explicit protocols or accountability mechanisms to secure data and prevent breaches while these applications are in use. Sometimes humanitarian organizations build their own websites for better data protection, but this creates an accessibility barrier, especially for people with lower digital literacy.
Another risk the author identifies is misinformation. Chatbots are most commonly deployed for information dissemination, yet there have been examples of chatbots sharing incorrect or out-of-date information. In humanitarian settings, such misinformation can directly lead to harm. Because chatbots are often deployed to increase efficiency and are accompanied by reductions in human staff, it may become increasingly difficult to detect and quantify the impact of such misinformation.
Lastly, the author discusses the distancing created by automation as a harm. Chatbots can be frustrating when they keep restarting the conversation because they do not understand the question. This problem is compounded in humanitarian aid, where situations are complex and lack predetermined solutions. Because these chatbots are usually developed outside the cultural contexts of the affected communities, they also risk further removing the voice of these vulnerable communities and making them more invisible.
Who benefits? Recreating coloniality
The author connects the above limitations and potential harms to existing power imbalances in the technology and humanitarian sectors. These "AI for good" initiatives are often conceived, developed, or funded by organizations from the Global North. As a result, they encode "western biases", as evidenced by most applications being in English or lacking cultural nuance about the communities where they will be used. The author compares this to the way the notions of trust, accountability, and participation promoted by humanitarian organizations are themselves shaped by western values. Thus, in their current form, chatbots serve as a notional accountability and participation mechanism and inherit the hierarchies and power dynamics embedded in existing humanitarian practices.
The author also points out that the main benefit of deploying these chatbots often goes to tech companies through the hype and publicity they generate. These applications are used as testbeds by the developing companies to tune consumer-facing products that will later be sold to audiences in the Global North at a steep price. The author cites the example of the psychotherapy chatbot Karim, deployed among Syrian refugees. The company that developed Karim also released a similar app, "Tess", for the US market. However, while Karim was deployed as a replacement for therapists, Tess was marketed as a supplement to human therapists and its use was accompanied by access to licensed psychotherapists. In this way, "AI for good" initiatives allow private tech companies to experiment with untested technologies and extract data, time, and knowledge from vulnerable communities to generate private profit. The author compares this to the history of scientific experiments in former colonies.
The author concludes by warning against the "enchantment of technology": by giving visibility and legitimacy to chatbots developed by for-profit companies, we create a feedback cycle in which the needs of communities become more hidden while the limitations of the technology are glossed over.
Between the lines
This is a much-needed article that examines the assumptions and unequal power relations behind "AI for good" innovations being promoted in the Global South. As a researcher trained in North America and currently based in Nepal, I find that the claims in the paper reflect many of my own experiences. I hope that the issues highlighted in this paper will motivate both AI researchers and humanitarian practitioners to think about power, use cases, and tradeoffs as they develop and push for AI-based solutions.