

Summary contributed by our researcher Victoria Heath (@victoria_heath7), who’s also a Communications Manager at Creative Commons.
*Link to original paper + authors at the bottom.
Overview: This paper explores the unique harms faced by the “Global South” from artificial intelligence (AI) through four different instances, and examines the application of international human rights law to mitigate those harms. The author advocates for an approach that is “human rights-centric, inclusive” and “context-driven.”
“Increasingly,” writes author Chinmayi Arun, “privately owned web-based platforms control our access to public education, the public sphere, health services and our very relationships with the countries we live in.” This infiltration and permeation of technology requires that we critically examine and evaluate how it is designed and how it operates; this is especially true of automation and artificial intelligence (AI). “We must place the needs, history, and the cultural and economic context of a society at the center of design.”
This is true for many designed artifacts of society, like houses or buildings; why shouldn’t it be true for AI?
What is the Global South?
Arun’s focus for this research is on the “Global South,” a term she explores at length, concluding that it has come to transcend borders and includes “countless Souths.” It can be found within Europe and North America (e.g., refugee populations), and can also be used to distinguish the “elite” in countries like India, Mexico, and China from the impoverished and oppressed. This distinction is especially useful because the “political elite” and “industry elite” in many countries encapsulated in the “Global South” are often more focused on “protection of markets than on protection of citizens.” For example, “data colonization” is growing within countries like India and China, in which governments contract predominantly Western technology companies for public services. These companies are able to extract data from these populations with little to no regulation or oversight.
Thus, “Global South,” as utilized in this research and increasingly elsewhere, “focuses on inequality, oppression, and resistance to injustice and oppression.”
Technology in Other Worlds
In order to examine some of the harms posed to the “Global South” by AI, Arun explores “different models of exploitation” illustrated by four real-world examples. The first is Facebook’s role in the Rohingya genocide in Myanmar, classified by Arun as a North-to-South model of exploitation, in which a technology “designed in the North” proves harmful when exported. The second is Aadhaar, the biometric identity database in India, classified as a model of exploitation stemming from the actions of local elites. In this case, software billionaire Nandan Nilekani helped fund and create the mandatory system, which has excluded people from the local welfare system and even enabled the surveillance of undocumented migrant workers for deportation.
The third example is the use of data collection systems, like facial recognition, on refugees in Europe. Arun classifies this as exploitation of asylum seekers and refugees by governments and even international humanitarian agencies, which collect their biometrics and subject them to surveillance. Even with the best intentions, these practices often deprive these populations of their agency and can make them more vulnerable. The final example is China’s practice of selling surveillance technology to authoritarian countries like Ethiopia and Zimbabwe, which Arun classifies as similar to the North-to-South model of exploitation, except that in this case it is facilitated by another Southern country. These surveillance systems are often used by a country’s political elite to control the population.
AI and the Global South
At this point, it’s well known that algorithmic systems suffer from issues of bias and discrimination. What’s often missing from conversations about these issues and the harms they cause, however, is how “Southern populations” are both uniquely affected and unprotected. As Arun explains, “When companies deploy these technologies in Southern countries there are fewer resources and institutions to help protect marginalized people’s rights.” Thus, the institutional frameworks that exist in Southern countries must be taken into account when devising ways to mitigate the harms caused by these systems. It will be impossible to ensure the rights of marginalized peoples in the Global South if citizens and civil society have limited space and capacity to engage with both government and industry.
How International Human Rights Apply
International human rights law “offers a standard and a threshold that debates on innovation and AI must take into account,” writes Arun. However, as many in the international community have noted, most of the documents and international agreements related to international human rights were adopted before many of today’s technologies existed. Therefore, more must be done to ensure that AI does not violate basic human rights, and that basic digital rights are also codified in international agreements. One idea is to obligate governments and companies to “conduct human rights impact assessments and public consultations during the design and deployment of new AI systems or existing systems in new markets.”
Conclusion
“With every year that passes,” reflects Arun, “this system [of knowledge] intertwines itself with our institutions and permeates our societies.” The time to begin working on “reversing extractive technologies in favor of justice and human rights” was yesterday. The harms faced by Southern populations at the hands of AI and automation are significant, but they are not impossible to mitigate. The first point of action, says Arun, is to “account for the plural contexts of the Global South and adopt modes of engagement that include these populations, empower them, and design for them.”
Original paper by Chinmayi Arun: https://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780190067397.001.0001/oxfordhb-9780190067397-e-38