Speciesist bias in AI – How AI applications perpetuate discrimination and unfair outcomes against animals

June 2, 2022

🔬 Research Summary by Thilo Hagendorff, an AI ethicist at the University of Tuebingen (Germany).

[Original paper by Thilo Hagendorff, Leonie Bossert, Tse Yip Fai, Peter Singer]


Overview: Considerable effort goes into reducing biases in AI. Up to now, however, these efforts have been anthropocentric and exclude animals, despite the immense influence AI systems can have on either increasing or reducing the violence inflicted on them, especially on farmed animals. A new paper describes and investigates the "speciesist bias" in many AI applications and stresses the importance of widening the scope of AI fairness frameworks.


Introduction

Bias mitigation in AI systems is probably one of the most important topics in AI ethics. Various high-profile cases in which algorithmic decision-making harmed women, people of color, minorities, and others spurred considerable efforts to render AI applications fair(er). However, the AI fairness field still has a blind spot: its insensitivity to discrimination against animals. A new paper by Thilo Hagendorff, Leonie Bossert, Yip Fai Tse and Peter Singer seeks to close this gap in research. It describes and empirically investigates the "speciesist bias" in several different AI applications, especially large language models and image recognition systems. The paper stresses that AI technologies currently play a significant role in perpetuating and normalizing violence against animals, and it calls for widening the scope of debiasing methods for AI systems in order to reduce, instead of increase, the violence inflicted especially on farmed animals.

Key Insights

Starting point

Around 60 billion land animals are held captive and killed every year for meat, dairy, and eggs. The industries that carry this out are not just major contributors to climate change, environmental destruction, pandemics, and other public health crises. They are also responsible for unimaginable suffering in farmed animals, who are bred and held captive in crowded, filthy conditions. After living only a fraction of their natural life expectancy, they are slaughtered, often without being stunned. All this becomes possible due to speciesism, a belief system that normalizes and justifies the devaluation of some species of animals. Yet, as more and more researchers from philosophy, psychology, and other disciplines emphasize, there are strong ethical arguments opposing speciesism. With this in mind, one can ask: what role do current AI technologies play in this? New research comes to sobering conclusions. It demonstrates how AI applications perpetuate and reinforce patterns that promote violence against animals.

Speciesist Bias

AI researchers use various tools and methods to reduce algorithmic discrimination, primarily by dealing with protected attributes. These attributes typically span gender, race or ethnicity, sexual and political orientation, religion, nationality, social class, age, appearance, and disability. Striking, however, is the fact that discussions in the field of AI fairness have a purely anthropocentric tailoring. So far, speciesist biases have been completely disregarded, and no technical or organizational means to combat them exist. In general, speciesist biases are learned and solidified by AI applications when they are trained on datasets in which speciesist patterns prevail. Ultimately, AI technologies render these patterns difficult to alter and normalize them as seemingly essentialist.
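To make the anthropocentric framing concrete, fairness toolkits commonly operationalize protected attributes through group metrics such as demographic parity. The minimal sketch below is illustrative only: the data, column names, and grouping variable are assumptions, not part of the paper. The paper's point is that no analogous check is currently run with species as the grouping variable.

```python
import pandas as pd

# Hypothetical scored dataset: each row is one model decision, with a
# protected attribute and whether the outcome was favourable. These values
# are invented for illustration.
df = pd.DataFrame({
    "protected_group": ["A", "A", "B", "B", "B", "A", "B", "A"],
    "favourable_outcome": [1, 0, 1, 1, 0, 1, 0, 1],
})

# Demographic (statistical) parity difference: the gap between groups in the
# rate of favourable outcomes. Values near 0 suggest parity on this one metric.
rates = df.groupby("protected_group")["favourable_outcome"].mean()
parity_gap = rates.max() - rates.min()
print(rates.to_dict(), "parity gap:", round(parity_gap, 3))
```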

The study’s results

The study describes case studies of speciesist biases in different areas of AI use, especially image recognition and large language models. Regarding image recognition, the study investigates speciesism in ImageNet and other image datasets, for instance with regard to their annotation structures as well as their image content. The researchers compared the performance of various models (MobileNet, VGG16, ResNet50, InceptionV3, Vision Transformer) on realistic images of factory-farmed animals with their performance on images depicting free-range environments, showing significant drops in accuracy for the former. The study reflects on the consequences of this, showing that image recognition models perpetuate stereotypes and misconceptions concerning animal welfare and the typical living conditions of farmed animals.
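The paper's exact evaluation protocol and image sets are not reproduced here, but a comparison of this kind can be sketched as follows: score a pretrained ImageNet classifier on two folders of images and compare top-1 hit rates. The folder paths and the choice of "hog" (the ImageNet-1k class covering domestic pigs) as the accepted label are assumptions for illustration only.

```python
import torch
from pathlib import Path
from PIL import Image
from torchvision import models

# Load a pretrained ImageNet classifier and its matching preprocessing.
weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]

def top1_hit_rate(folder: str, accepted_labels: set[str]) -> float:
    """Fraction of images in `folder` whose top-1 prediction is an accepted label."""
    hits, total = 0, 0
    for path in Path(folder).glob("*.jpg"):
        img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            pred = model(img).argmax(dim=1).item()
        hits += int(categories[pred] in accepted_labels)
        total += 1
    return hits / max(total, 1)

# Hypothetical folders standing in for the two image conditions.
print("free-range:    ", top1_hit_rate("images/free_range_pigs", {"hog"}))
print("factory-farmed:", top1_hit_rate("images/factory_farm_pigs", {"hog"}))
```

A pronounced gap between the two hit rates would mirror the accuracy drop the study reports for factory-farmed settings.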

The study also investigates language models. It demonstrates speciesist tendencies in text corpora via word embedding models such as GloVe or Word2Vec, which quantify the relatedness of words. The text corpora, which are also used to train contextual models, meaning full-fledged large language models, associate farmed animals predominantly with negative terms like 'ugly', 'primitive', or 'hate'. Companion as well as non-companion species like dogs, cats, or parrots, on the other hand, are related to positive concepts like 'cute', 'love', personhood, or domesticity. To test large language models like GPT-3, the researchers designed specific prompts for bias detection. Unsurprisingly, GPT-3 shows the very speciesist biases in its outputs that were already signaled by the word embeddings. The more an animal species is classified as a farmed animal (in a Western sense), the more GPT-3 tends to produce outputs related to violence against the respective animals. The study reveals that even language models like Delphi, which are fine-tuned for tasks in moral decision making and which are particularly sensitive to biases or discrimination, show speciesist patterns, for instance by considering "Killing a pig after it has lived a miserable life in a factory farm" okay, whereas "Killing a dog if it is culturally accepted" is deemed to be wrong.
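As a rough illustration of the word-embedding part of this analysis (not the paper's actual test battery), one can load pretrained GloVe vectors and compare each animal term's average cosine similarity to negative versus positive words. The word lists below simply reuse the examples mentioned above, with 'home' standing in for domesticity.

```python
import gensim.downloader as api

# Pretrained 50-dimensional GloVe vectors via gensim's downloader (~66 MB).
glove = api.load("glove-wiki-gigaword-50")

animals = ["pig", "cow", "chicken", "dog", "cat", "parrot"]
negative = ["ugly", "primitive", "hate"]
positive = ["cute", "love", "home"]

for animal in animals:
    # Average cosine similarity of the animal term to each valenced word set.
    neg = sum(glove.similarity(animal, w) for w in negative) / len(negative)
    pos = sum(glove.similarity(animal, w) for w in positive) / len(positive)
    print(f"{animal:8s} neg={neg:.3f} pos={pos:.3f} lean={pos - neg:+.3f}")
```

The underlying idea is the one the summary describes: measuring how strongly species terms relate to negatively versus positively valenced word sets, with farmed-animal terms expected to lean negative.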

Between the lines

Traditionally, fairness in AI means fostering outcomes that do not inflict unjustified harms on individuals, regardless of their race, gender, or other protected attributes. The paper argues for extending this tenet to algorithmic discrimination against animals. Up to now, the AI fairness community has largely disregarded this particular dimension of discrimination; indeed, the field of AI ethics has hitherto had an anthropocentric tailoring. However, the manifold occurrences of speciesist machine biases lead to subtle support, endorsement, and consolidation of systems that foster unnecessary violence against animals. This should be a wake-up call for AI practitioners, urging them to apply the rich toolbox of existing bias mitigation measures in this regard. Whether they succeed or fail at this task is likely to determine whether AI applications from various domains will underpin systems of violence against animals or counteract them by putting anti-discrimination measures into practice to the fullest possible extent.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
