
Re-imagining Algorithmic Fairness in India and Beyond (Research Summary)

February 15, 2021

🔬 Research summary contributed by Nithya Sambasivan (@autopoietic), Staff Researcher at PAIR, the lead for the HCI-AI group at Google Research India (Bangalore), and the lead author of the original paper being summarized.

[Link to original paper + authors at the bottom]


Overview: The authors argue that several assumptions of algorithmic fairness are challenged in India. The distance between models and disempowered communities is large, and a myopic focus on localising fair model outputs alone can backfire in India. They point to three themes that require us to re-examine ML fairness: data and model distortions, double standards and distance by ML makers, and unquestioning AI aspiration.


Algorithmic fairness is West-centric, as evidenced in its choice of sub-groups like race and gender, and in the civil laws that anchor its fairness optimisations. Yet algorithmic fairness is becoming a universal ethical framework for AI in countries of the Global South. Sambasivan et al. argue that without engaging with the conditions, values, politics, and histories of the non-West, AI fairness can be tokenism at best and pernicious at worst. As algorithmic fairness emerges as the ethical compass of AI systems, the field needs to examine its own defaults, biases, and blindspots.

In this paper, Sambasivan et al. examine algorithmic power and present a new, holistic framework for algorithmic fairness in India, the world’s largest democracy. Their method combined semi-structured interviews with India-focused scholars and activists working on issues ranging from law to LGBTQ rights to disability rights, and a systematic review of algorithmic deployments and policies in India, all through feminist, decolonial, and anti-caste lenses. The authors argue that several assumptions of algorithmic fairness are challenged in India. The distance between models and disempowered communities is large, and a myopic focus on localising fair model outputs alone can backfire. They point to three themes that require us to re-examine ML fairness:

1) Data and model distortions: Datasets may not faithfully correspond to people and phenomena in India due to socio-economic factors. Models are over-fitted to digitally-rich, middle-class men. Caste, tribe, and religion present new bias vectors, and social justice mechanisms like reservations present new fairness conditions (see the evaluation sketch after this list).

2) Double standards and distance by ML makers: Indian users are perceived as ‘bottom billion’ data subjects and Petri dishes for intrusive models, and are given poor recourse, effectively limiting their agency. While Indians are part of the AI workforce, the majority work in services, and the minority who are engineers often come from privileged class and caste backgrounds, limiting their ability to bridge these distances.

3) Unquestioning AI aspiration: AI is aspirational and readily adopted in high-stakes domains in India, often prematurely. The lack of an ecosystem of tools, policies, and stakeholders to interrogate high-stakes AI limits meaningful fairness in India.
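To make the first theme concrete, disaggregated evaluation along India-specific axes is one place such distortions surface. The sketch below is illustrative only and not from the paper: the group labels, record fields, and data are hypothetical assumptions, with caste standing in for the race and gender sub-groups that Western fairness work defaults to.

```python
# Minimal sketch (not from the paper): disaggregated evaluation of a binary
# classifier along India-specific axes such as caste or religion, instead of
# the Western-default race/gender. Field names and groups are hypothetical.
from collections import defaultdict

def disaggregated_rates(records, group_key):
    """Compute per-group selection rate and false-negative rate.

    records: iterable of dicts with keys 'y_true', 'y_pred', and group_key.
    """
    stats = defaultdict(lambda: {"n": 0, "pred_pos": 0, "fn": 0, "pos": 0})
    for r in records:
        g = stats[r[group_key]]
        g["n"] += 1
        g["pred_pos"] += r["y_pred"]
        g["pos"] += r["y_true"]
        g["fn"] += int(r["y_true"] == 1 and r["y_pred"] == 0)
    return {
        group: {
            "selection_rate": g["pred_pos"] / g["n"],
            "fnr": g["fn"] / g["pos"] if g["pos"] else float("nan"),
        }
        for group, g in stats.items()
    }

# Illustrative usage with made-up loan-decision records.
records = [
    {"caste": "SC", "y_true": 1, "y_pred": 0},
    {"caste": "SC", "y_true": 1, "y_pred": 1},
    {"caste": "general", "y_true": 1, "y_pred": 1},
    {"caste": "general", "y_true": 0, "y_pred": 1},
]
print(disaggregated_rates(records, "caste"))
```

As the summary stresses, metric parity at the model level is only a starting point: distortions in the underlying data and the absence of meaningful recourse are not repaired by disaggregated reporting alone.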

Call to action

The authors propose an AI fairness research agenda for India along three critical and contingent pathways, calling on the field to go beyond model fairness.

Re-contextualising data and models

Due to the data and model distortions in India, we must treat datasets with care until they are shown to be trustworthy, and combine them with an understanding of context. India’s vibrant human infrastructures point to new ways of looking at data as dialogue. Categories, ontologies, and behaviours are context-specific and need to be questioned. The axes of discrimination in India listed in the paper could be a starting point for detecting and mitigating unfairness in models. Fairness criteria should be adapted to the social justice mechanisms appropriate to the context, such as reservations.
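As one way to picture adapting fairness criteria to a mechanism like reservations, the sketch below fills a reserved share of seats per group before ranking the remainder by score. This is a hypothetical illustration under assumed inputs, not a procedure from the paper.

```python
# Minimal sketch (an assumption, not the paper's method): a selection
# procedure adapted to a reservation-style quota, where a fixed share of
# seats is reserved for specified groups, as in Indian affirmative action.
def select_with_reservation(candidates, n_seats, quotas):
    """Select n_seats candidates by score while honouring reserved shares.

    candidates: list of (candidate_id, group, score) tuples.
    quotas: dict mapping group -> fraction of seats reserved for that group.
    """
    ranked = sorted(candidates, key=lambda c: c[2], reverse=True)
    reserved = {g: round(frac * n_seats) for g, frac in quotas.items()}
    selected, counts = [], {g: 0 for g in quotas}

    # First pass: fill each group's reserved seats with its top scorers.
    for cand in ranked:
        _, group, _ = cand
        if (len(selected) < n_seats and group in reserved
                and counts[group] < reserved[group]):
            selected.append(cand)
            counts[group] += 1

    # Second pass: fill remaining open seats by score, regardless of group.
    for cand in ranked:
        if len(selected) >= n_seats:
            break
        if cand not in selected:
            selected.append(cand)
    return selected

# Illustrative usage: 4 seats, 50% reserved for group "SC" (made-up data).
pool = [("a", "SC", 0.6), ("b", "SC", 0.4), ("c", "gen", 0.9),
        ("d", "gen", 0.8), ("e", "gen", 0.7)]
print(select_with_reservation(pool, 4, {"SC": 0.5}))
```

Note that a parity-style criterion such as equal selection rates would not express this policy: reservations guarantee a minimum share for specified groups, which is a different fairness condition from the Western defaults.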

Empowering communities

Marginalised communities need to be empowered to identify problems, specify fairness expectations, and design systems, so that fairness is not imposed top-down. India’s heterogeneity means that Fair-ML researchers’ commitment should go beyond model outputs to creating accessible systems. As the fatal Union Carbide gas leak of 1984 showed, unequal standards, inadequate safeguards, and dubious applications of technology in the non-West can have catastrophic effects, and the same holds for AI. Fair-ML researchers should understand the systems into which they embed their work, engage with Indian realities, and ask whether the recourse offered is meaningful.

Enabling a Fair-ML ecosystem

For Fair-ML research to be impactful and sustainable, researchers must enable a critically conscious Fair-ML ecosystem through solidarity with various stakeholders, including partnerships with policy makers and journalists.

Context matters. We must take care not to copy-paste Western-normative ML fairness everywhere. The paper’s considerations are certainly not limited to India; accordingly, the authors call for inclusively evolving global approaches to Fair-ML.


Original paper by Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Tulsee Doshi, Vinodkumar Prabhakaran: https://arxiv.org/pdf/2101.09995.pdf

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
