
Re-imagining Algorithmic Fairness in India and Beyond (Research Summary)

February 15, 2021

🔬 Research summary contributed by Nithya Sambasivan (@autopoietic), Staff Researcher at PAIR, the lead for the HCI-AI group at Google Research India (Bangalore), and the lead author of the original paper being summarized.

[Link to original paper + authors at the bottom]


Overview: The authors argue that several assumptions of algorithmic fairness are challenged in India. The distance between models and disempowered communities is large, and a myopic focus on localising fair model outputs alone can backfire in India. They point to three themes that require us to re-examine ML fairness: data and model distortions, double standards and distance by ML makers, and unquestioning AI aspiration.


Algorithmic fairness is West-centric, as evidenced by its choice of sub-groups such as race and gender, and by the civil laws underpinning its fairness optimisations. However, algorithmic fairness is becoming a universal ethical framework for AI across countries of the Global South. Sambasivan et al. argue that without engaging with the conditions, values, politics, and histories of the non-West, AI fairness can be tokenism at best and pernicious at worst. As algorithmic fairness emerges as the ethical compass of AI systems, the field needs to examine its own defaults, biases, and blindspots.

In this paper, Sambasivan et al. examine algorithmic power and present a new, holistic framework for algorithmic fairness in India, the world’s largest democracy. Their method combined semi-structured interviews with India-focused scholars and activists working in areas ranging from law to LGBTQ rights to disability rights, together with a systematic review of algorithmic deployments and policies in India, all through feminist, decolonial, and anti-caste lenses. The authors argue that several assumptions of algorithmic fairness are challenged in India. The distance between models and disempowered communities is large, and a myopic focus on localising fair model outputs alone can backfire in India. They point to three themes that require us to re-examine ML fairness:

1) Data and model distortions: Datasets may not faithfully correspond to people and phenomena in India due to socio-economic factors. Models are overfitted to digitally rich, middle-class men. Caste, tribe, and religion present new bias vectors, and social justice mechanisms like reservations present new fairness conditions (see the sketch after this list).

2) Double standards and distance by ML makers: Indian users are perceived as ‘bottom billion’ data subjects and Petri dishes for intrusive models, and they are given poor recourse, effectively limiting their agency. While Indians are part of the AI workforce, a majority work in services, and the minority who are engineers often come from privileged class and caste backgrounds, limiting the remediation of these distances.

3) Unquestioning AI aspiration: In India, AI is aspirational and readily adopted in high-stakes domains, often too early. The lack of an ecosystem of tools, policies, and stakeholders to interrogate high-stakes AI limits meaningful fairness in India.
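To make the first theme concrete, here is a minimal, illustrative Python sketch of a subgroup audit that disaggregates a model’s outcomes along India-specific axes such as caste category, rather than only the Western-default axes of race and gender. The group labels, data, and choice of metrics are assumptions for illustration; they are not drawn from the paper.

```python
# Illustrative sketch (not from the paper): auditing a model's outcomes across
# India-specific sub-groups (e.g., caste category) instead of only race/gender.
# All data below is synthetic and hypothetical.
from collections import defaultdict

def subgroup_rates(records):
    """Compute per-group selection rate and false-negative rate.

    `records` is an iterable of (group, y_true, y_pred) tuples, where
    y_true/y_pred are 0/1 labels and 1 denotes the favourable outcome.
    """
    stats = defaultdict(lambda: {"n": 0, "selected": 0, "pos": 0, "fn": 0})
    for group, y_true, y_pred in records:
        s = stats[group]
        s["n"] += 1
        s["selected"] += y_pred
        s["pos"] += y_true
        s["fn"] += int(y_true == 1 and y_pred == 0)
    report = {}
    for group, s in stats.items():
        report[group] = {
            "selection_rate": s["selected"] / s["n"],
            "false_negative_rate": (s["fn"] / s["pos"]) if s["pos"] else 0.0,
        }
    return report

# Synthetic example with a hypothetical caste-category attribute.
records = [
    ("general", 1, 1), ("general", 0, 1), ("general", 1, 1),
    ("SC/ST",   1, 0), ("SC/ST",   1, 1), ("SC/ST",   0, 0),
]
for group, metrics in subgroup_rates(records).items():
    print(group, metrics)
```

Large gaps between groups in selection or error rates would flag the kind of distortion the authors describe, though which axes and metrics matter is itself a context-specific question.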

Call to action

The authors propose an AI fairness research agenda for India along three critical and contingent pathways, calling on the field to go beyond model fairness.

Re-contextualising data and models

Given the data and model distortions in India, we should treat datasets with care, establishing that they are trustworthy before use and combining them with an understanding of the local context. India’s vibrant human infrastructures point to new ways of looking at data as dialogue. Categories, ontologies, and behaviours are context-specific and need to be questioned. The axes of discrimination in India listed in the paper could be a starting point for detecting and mitigating unfairness in models, and fairness criteria should be adapted to the social justice mechanisms appropriate to the context, such as reservations (a hedged sketch of one such adaptation follows below).
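As a hedged illustration of adapting fairness criteria to a social justice mechanism such as reservations, the sketch below checks whether a set of model-driven selections meets a minimum share for designated groups, a quota-style constraint rather than the usual demographic-parity test. The group labels and quota values are placeholders for illustration, not figures taken from the paper or from Indian law.

```python
# Illustrative sketch (an assumption, not the paper's method): a reservation-
# style fairness check that requires designated groups to receive at least a
# minimum share of selections. Quota values are placeholders only.

def meets_reservation(selected_groups, quotas):
    """Return the groups whose share of selections falls short of their quota.

    `selected_groups` lists the group label of each selected candidate;
    `quotas` maps group label -> minimum required share (0..1).
    An empty result means the quota constraint is satisfied.
    """
    total = len(selected_groups)
    shortfalls = {}
    for group, min_share in quotas.items():
        share = selected_groups.count(group) / total if total else 0.0
        if share < min_share:
            shortfalls[group] = {"share": round(share, 3), "required": min_share}
    return shortfalls

# Hypothetical usage with placeholder quotas.
selected = ["general", "general", "OBC", "general", "SC", "general"]
print(meets_reservation(selected, {"SC": 0.15, "OBC": 0.27}))
```

The point is not this particular constraint but that the fairness criterion itself changes shape once local mechanisms like reservations, rather than Western anti-discrimination law, define what a just outcome looks like.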

Empowering communities

Marginalised communities need to be empowered to identify problems, specify fairness expectations, and design systems, so as to avoid top-down fairness. India’s heterogeneity means that Fair-ML researchers’ commitment should go beyond model outputs to creating accessible systems. As the fatal Union Carbide gas leak of 1984 showed, unequal standards, inadequate safeguards, and dubious applications of technology in the non-West can lead to catastrophic effects, and the same holds for AI. Fair-ML researchers should understand the systems into which their models are embedded, engage with Indian realities, and ask whether the recourse offered is meaningful.

Enabling a Fair-ML ecosystem

For Fair-ML research to be impactful and sustainable, it is crucial for researchers to enable a critically conscious Fair-ML ecosystem through solidarity and partnerships with various stakeholders, including policymakers and journalists.

Context matters. We must take care not to copy-paste Western-normative ML fairness everywhere. The paper’s considerations are certainly not limited to India, and the authors likewise call for inclusively evolving global approaches to Fair-ML.


Original paper by Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Tulsee Doshi, Vinodkumar Prabhakaran: https://arxiv.org/pdf/2101.09995.pdf

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
