Montreal AI Ethics Institute

Democratizing AI ethics literacy

Language (Technology) is Power: A Critical Survey of “Bias” in NLP (Research summary)

September 28, 2020

Summary contributed by Falaah Arif Khan, our Artist in Residence. She creates art exploring technology, including comics about AI.

A link to the original paper and its authors can be found at the bottom.


Mini-summary: With the recent boom in scholarship on fairness and bias in machine learning, several competing notions of bias, and different approaches to mitigating its impact, have emerged. This incisive meta-review from Blodgett et al. dissects 146 papers on bias in natural language processing (NLP) and identifies critical discrepancies in motivation, normative reasoning, and suggested approaches. Key findings include mismatches between motivation and intervention, a lack of engagement with relevant literature outside of NLP, and a failure to account for the underlying power dynamics that shape language.

Full summary:

The authors ground their analysis in the recognition that social hierarchies and power dynamics deeply influence language. With this in mind, they make the following recommendations for future scholarship on bias in NLP. They implore researchers to engage with relevant literature outside of the technical NLP community in order to motivate a deeper, richer formalization of “bias”: its sources, why it is harmful, in what ways, and to whom. They also underline the importance of engaging with the communities most affected by NLP systems and taking their lived experiences into account.

Their critical survey of recent scholarship demonstrates that perspectives reconciling language and social dynamics are currently lacking. They find that most papers contain poorly motivated studies that leave unstated what algorithmic discrimination even entails or how it contributes to social injustice. This is further exacerbated by papers that omit normative reasoning and focus entirely on system performance. When motivations are given, they often remain brief and fail to spell out which model behaviors are deemed harmful or “biased”, in what ways those behaviors cause harm, and to whom. In the absence of a strong, well-articulated motivation for studying bias in NLP, papers on the same task end up operating with different notions of “bias” and hence take different approaches to mitigating it.

Working with these conflicting notions of “bias”, scholars tend to treat harms that are inherently representational (the model represents certain social groups less favorably than others) as allocational (the discriminatory allocation of resources across groups), and so they incorrectly treat representational harms as problematic only insofar as they affect downstream applications that result in allocations.
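
To make the distinction concrete, here is a minimal, hypothetical sketch of the kind of representational measurement found in the surveyed literature: an association test over word embeddings, in the spirit of WEAT-style measures. The four-dimensional vectors and word choices below are toy placeholders invented for illustration, not values from any real model or from the paper itself.

    # Toy illustration (not from the paper): a WEAT-style association test,
    # one common way the surveyed literature quantifies representational
    # bias in word embeddings. All vectors here are invented placeholders.
    import numpy as np

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Hypothetical 4-d embeddings standing in for a trained model's vectors.
    emb = {
        "doctor": np.array([0.9, 0.1, 0.3, 0.0]),
        "nurse":  np.array([0.2, 0.8, 0.3, 0.1]),
        "he":     np.array([1.0, 0.0, 0.2, 0.0]),
        "she":    np.array([0.0, 1.0, 0.2, 0.1]),
    }

    # Association gap: positive means the occupation sits closer to "he".
    for occupation in ("doctor", "nurse"):
        gap = cosine(emb[occupation], emb["he"]) - cosine(emb[occupation], emb["she"])
        print(f"{occupation}: association gap = {gap:+.3f}")

A nonzero gap here is a representational finding about how the model encodes social groups; whether and how it translates into an allocational harm (say, in a downstream resume screener) is a separate question that, the authors argue, papers too often conflate with it.
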
As for the techniques used to study “bias” in NLP, the paper identifies a lack of engagement with relevant literature outside of NLP, a mismatch between motivation and technique, and a narrow focus on a limited range of potential sources of bias.

With these limitations of existing scholarship in mind, the authors propose a fundamental reorientation of scholarship on “bias” in NLP around the question: how are social hierarchies, language ideologies, and NLP systems co-produced? Language is a tool for wielding power, and language technologies play a critical role in maintaining power dynamics and enforcing social hierarchies. These dynamics influence every stage of the technological lifecycle, and hence scholarship focused only on algorithmic interventions will prove inadequate.

The authors also validate their recommendations with a case study on African-American English (AAE). They explain how models such as toxicity detectors, which perform far worse on AAE, perpetuate the social stigmatization of AAE speakers. The case study drives home the authors’ point that an analysis of “bias” in such a context cannot be limited to the algorithmic level alone, without taking into account the underlying systemic and structural inequalities.
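
As a concrete (and entirely hypothetical) sketch of how such a disparity can be surfaced, the snippet below compares false-positive rates, i.e., benign text wrongly flagged as toxic, across dialect groups. The `toxicity_score` stub, the threshold, and the sample data are all invented for illustration; a real audit would swap in an actual classifier and a dialect-annotated corpus.

    # Hedged sketch: auditing a toxicity classifier for dialect disparities.
    # `toxicity_score` is a stand-in for any real model; the samples and
    # threshold below are invented for illustration only.
    from collections import defaultdict

    def toxicity_score(text: str) -> float:
        """Placeholder: replace with a real classifier's score in [0, 1]."""
        return 0.0  # dummy value so the sketch runs end to end

    samples = [
        # (text, dialect, actually_toxic) -- hypothetical annotated data
        ("a benign AAE sentence", "AAE", False),
        ("a benign non-AAE sentence", "non-AAE", False),
        ("an actually toxic sentence", "non-AAE", True),
    ]

    THRESHOLD = 0.5
    flagged, benign = defaultdict(int), defaultdict(int)
    for text, dialect, toxic in samples:
        if not toxic:                      # false positives: benign text...
            benign[dialect] += 1
            if toxicity_score(text) >= THRESHOLD:
                flagged[dialect] += 1      # ...that the model flags as toxic

    for dialect in benign:
        print(f"{dialect}: false-positive rate = {flagged[dialect] / benign[dialect]:.2%}")

A markedly higher false-positive rate on AAE would be the quantitative symptom; the authors’ point is that diagnosing it fully requires asking why the training data, annotation practices, and deployment context treat AAE as deviant in the first place.
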

The authors conclude with an open call to the scientific community, reiterating the need to unite scholarship on language with scholarship on social and power hierarchies.


Original paper by Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach: https://www.aclweb.org/anthology/2020.acl-main.485.pdf

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
