Montreal AI Ethics Institute


AI and the Global South: Designing for Other Worlds (Research Summary)

December 19, 2020

Summary contributed by our researcher Victoria Heath (@victoria_heath7), who’s also a Communications Manager at Creative Commons.

*Link to original paper + authors at the bottom.


Overview: This paper explores the unique harms that artificial intelligence (AI) poses to the “Global South” through four case studies, and examines how international human rights law might mitigate those harms. The author advocates for an approach that is “human rights-centric, inclusive” and “context-driven.”


“Increasingly,” writes author Chinmayi Arun, “privately owned web-based platforms control our access to public education, the public sphere, health services and our very relationships with the countries we live in.” This infiltration and permeation of technology requires that we critically examine and evaluate how it is designed and how it operates. This is especially true of automation and artificial intelligence (AI). “We must place the needs, history, and the cultural and economic context of a society at the center of design.” 

This is true for many designed artifacts of society, like houses or buildings; why shouldn’t it be true for AI?

What is the Global South?

Arun’s focus for this research is on the “Global South,” a term she explores at length, concluding that it has come to transcend borders and includes “countless Souths.” It can be found within Europe and North America (e.g. refugee populations), and can also be used to distinguish the “elite” in countries like India, Mexico, and China from the impoverished and oppressed. This distinction is especially useful because the “political elite” and “industry elite” in many countries encapsulated in the “Global South” are often more focused on “protection of markets than on protection of citizens.” For example, “data colonization” is growing within countries like India and China, in which governments contract predominantly Western technology companies for public services. These companies are able to extract data from these populations with little to no regulation or oversight. 

Thus, “Global South,” as utilized in this research and increasingly elsewhere, “focuses on inequality, oppression, and resistance to injustice and oppression.” 

Technology in Other Worlds

In order to examine some of the harms posed to the “Global South” from AI, Arun explores “different models of exploitation” illustrated by four real-world examples. The first example is Facebook’s role in the Rohingya genocide in Myanmar, classified by Arun as a North-to-South model of exploitation, in which a technology “designed in the North” proves harmful when exported. The second example is the biometric identity database in India called Aadhaar, classified as a model of exploitation stemming from the actions of local elites. In this case, software billionaire Nandan Nilekani helped fund and create the mandatory system that has resulted in excluding people from the local welfare system and even surveilling undocumented migrant workers for deportation. 

The third example is the use of data collection systems, like facial recognition, on refugees in Europe. Arun classifies this as exploitation of asylum seekers and refugees by governments and even international humanitarian agencies, which collect their biometrics and subject them to surveillance. Even with the best intentions, these practices often deprive these populations of their agency and can make them more vulnerable. The final example is China’s practice of selling surveillance technology to authoritarian countries like Ethiopia and Zimbabwe, classified by Arun as similar to the North-to-South model of exploitation; in this case, however, it is facilitated by another Southern country. These surveillance systems are often used by a country’s political elite to control the population. 

AI and the Global South

At this point, it’s well known that there are issues of bias and discrimination in algorithmic systems. However, what’s often missing from conversations around these issues and the harms they cause is how “Southern populations” are both uniquely affected and unprotected. As Arun explains, “When companies deploy these technologies in Southern countries there are fewer resources and institutions to help protect marginalized people’s rights.” Thus, the institutional frameworks that exist in Southern countries must be taken into account when devising ways to mitigate the harms these systems cause. It will be impossible to ensure the rights of marginalized peoples in the Global South if citizens and civil society have limited space and capacity to engage with both government and industry. 

How International Human Rights Apply

International human rights law “offers a standard and a threshold that debates on innovation and AI must take into account,” writes Arun. However, as many in the international community have noted, most of the documents and international agreements related to international human rights were adopted before many of today’s technologies existed. Therefore, more must be done to ensure that AI does not violate basic human rights, and that basic digital rights are also codified in international agreements. One idea is to obligate governments and companies to “conduct human rights impact assessments and public consultations during the design and deployment of new AI systems or existing systems in new markets.” 

Conclusion

“With every year that passes,” reflects Arun, “this system [of knowledge] intertwines itself with our institutions and permeates our societies.” The time to begin working on “reversing extractive technologies in favor of justice and human rights” was yesterday. The harms faced by Southern populations at the hands of AI and automation are significant, but they are not impossible to mitigate. The first point of action, says Arun, is to “account for the plural contexts of the Global South and adopt modes of engagement that include these populations, empower them, and design for them.” 


Original paper by Chinmayi Arun: https://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780190067397.001.0001/oxfordhb-9780190067397-e-38

