Summary contributed by our researcher Victoria Heath (@victoria_heath7), who’s also a Communications Manager at Creative Commons.
*Link to original paper + authors at the bottom.
Overview: This paper explores the unique harms faced by the “Global South” from artificial intelligence (AI) through four different instances, and examines the application of international human rights law to mitigate those harms. The author advocates for an approach that is “human rights-centric, inclusive” and “context-driven.”
“Increasingly,” writes author Chinmayi Arun, “privately owned web-based platforms control our access to public education, the public sphere, health services and our very relationships with the countries we live in.” This infiltration of technology into every corner of life requires that we critically examine how it is designed and how it operates, and this is especially true of automation and artificial intelligence (AI). As Arun puts it, “We must place the needs, history, and the cultural and economic context of a society at the center of design.”
This is true for many designed artifacts of society, like houses and buildings; why shouldn’t it be true for AI?
What is the Global South?
Arun’s focus in this research is the “Global South,” a term she explores at length, concluding that it has come to transcend borders and includes “countless Souths.” It can be found within Europe and North America (e.g., refugee populations), and can also be used to distinguish the “elite” in countries like India, Mexico, and China from the impoverished and oppressed. This framing is especially useful because the “political elite” and “industry elite” in many countries grouped under the “Global South” are often more focused on “protection of markets than on protection of citizens.” For example, “data colonization” is growing within countries like India and China, where governments contract predominantly Western technology companies to deliver public services. These companies are able to extract data from these populations with little to no regulation or oversight.
Thus, “Global South,” as utilized in this research and increasingly elsewhere, “focuses on inequality, oppression, and resistance to injustice and oppression.”
Technology in Other Worlds
To examine some of the harms AI poses to the “Global South,” Arun explores “different models of exploitation” illustrated by four real-world examples. The first is Facebook’s role in the Rohingya genocide in Myanmar, which Arun classifies as a North-to-South model of exploitation, in which a technology “designed in the North” proves harmful when exported. The second is Aadhaar, India’s biometric identity database, classified as a model of exploitation driven by local elites. In this case, software billionaire Nandan Nilekani helped fund and create the mandatory system, which has excluded people from welfare programs and even enabled the surveillance of undocumented migrant workers for deportation.
The third example is the use of data collection systems, like facial recognition, on refugees in Europe. Arun classifies this as exploitation of asylum seekers and refugees by governments and even international humanitarian agencies, which collect their biometrics and subject them to surveillance. Even with the best intentions, these practices often deprive these populations of their agency and can make them more vulnerable. The final example is China’s practice of selling surveillance technology to authoritarian countries like Ethiopia and Zimbabwe, which Arun classifies as similar to the North-to-South model of exploitation, except that here it is facilitated by another Southern country. These surveillance systems are often used by a country’s political elite to control the population.
AI and the Global South
At this point, it’s well known that algorithmic systems suffer from bias and discrimination. However, what’s often missing from conversations around these issues and the harms they cause is how “Southern populations” are both uniquely affected and uniquely unprotected. As Arun explains, “When companies deploy these technologies in Southern countries there are fewer resources and institutions to help protect marginalized people’s rights.” Thus, the institutional frameworks that exist in Southern countries must be taken into account when devising ways to mitigate the harms these systems cause. It will be impossible to secure the rights of marginalized peoples in the Global South if citizens and civil society have little space or capacity to engage with both government and industry.
How International Human Rights Apply
International human rights law “offers a standard and a threshold that debates on innovation and AI must take into account,” writes Arun. However, as many in the international community have noted, most human rights instruments and agreements were adopted before many of today’s technologies existed. Therefore, more must be done to ensure both that AI does not violate basic human rights and that basic digital rights are codified in international agreements. One idea is to obligate governments and companies to “conduct human rights impact assessments and public consultations during the design and deployment of new AI systems or existing systems in new markets.”
Conclusion
“With every year that passes,” reflects Arun, “this system [of knowledge] intertwines itself with our institutions and permeates our societies.” The time to begin working on “reversing extractive technologies in favor of justice and human rights” was yesterday. The harms faced by Southern populations at the hands of AI and automation are significant, but they are not impossible to mitigate. The first point of action, says Arun, is to “account for the plural contexts of the Global South and adopt modes of engagement that include these populations, empower them, and design for them.”
Original paper by Chinmayi Arun: https://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780190067397.001.0001/oxfordhb-9780190067397-e-38