
Speciesist bias in AI – How AI applications perpetuate discrimination and unfair outcomes against animals

June 2, 2022

šŸ”¬ Research Summary by Thilo Hagendorff, an AI ethicist at the University of Tuebingen (Germany).

[Original paper by Thilo Hagendorff, Leonie Bossert, Tse Yip Fai, Peter Singer]


Overview: Considerable effort goes into reducing biases in AI. Up to now, however, these efforts have been anthropocentric and exclude animals, despite the immense influence AI systems can have on either increasing or reducing the violence inflicted on them, especially on farmed animals. A new paper describes and investigates the "speciesist bias" in many AI applications and stresses the importance of widening the scope of AI fairness frameworks.


Introduction

Bias mitigation in AI systems is probably one of the most important topics in AI ethics. Various high-profile cases in which algorithmic decision-making harmed women, people of color, minorities, and other groups have spurred considerable efforts to render AI applications fair(er). However, the AI fairness field still suffers from a blind spot: its insensitivity to discrimination against animals. A new paper by Thilo Hagendorff, Leonie Bossert, Tse Yip Fai, and Peter Singer seeks to close this gap in research. It describes and empirically investigates the "speciesist bias" in several different AI applications, especially large language models and image recognition systems. The paper stresses that AI technologies currently play a significant role in perpetuating and normalizing violence against animals. It calls for widening the scope of debiasing methods for AI systems in order to reduce, rather than increase, the violence inflicted especially on farmed animals.

Key Insights

Starting point

Around 60 billion land animals are held captive and killed every year for meat, dairy, and eggs. The industries responsible are not just major contributors to climate change, environmental destruction, pandemics, and other public health crises. They also inflict immense suffering on farmed animals, who are bred and held captive in crowded, filthy conditions. After a fraction of their natural life expectancy, they are slaughtered, often without being stunned. All of this is made possible by speciesism, a belief system that normalizes and justifies the devaluation of some species of animals. However, as a growing number of researchers in philosophy, psychology, and other disciplines emphasize, there are strong ethical arguments against speciesism. With this in mind, one can ask: what role do current AI technologies play in this? New research comes to sobering conclusions. It demonstrates how AI applications perpetuate and reinforce patterns that promote violence against animals.

Speciesist Bias

AI researchers use various tools and methods for reducing algorithmic discrimination, primarily by dealing with protected attributes. These attributes typically span gender, race or ethnicity, sexual and political orientation, religion, nationality, social class, age, appearance, and disability. Strikingly, however, discussions in the field of AI fairness have had a purely anthropocentric tailoring: speciesist biases have so far been completely disregarded, and no technical or organizational means to combat them exist. In general, speciesist biases are learned and solidified by AI applications when they are trained on datasets in which speciesist patterns prevail. Ultimately, AI technologies render these patterns difficult to alter and normalize them as seemingly essential.

The study's results

The study describes case studies of speciesist biases in different areas of AI use, especially image recognition and large language models. Regarding image recognition, the study investigates speciesism in ImageNet and other image datasets, for instance in their annotation structures as well as their image content. The researchers tested several models (MobileNet, VGG16, ResNet50, InceptionV3, Vision Transformer) on realistic images of factory-farmed animals and contrasted the results with performance on images depicting free-range environments, finding significant drops in accuracy for the former. The study reflects on the consequences: image recognition models perpetuate stereotypes and misconceptions about animal welfare and the typical living conditions of farmed animals.
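The kind of comparison the researchers ran can be illustrated with a few lines of off-the-shelf code. The sketch below is an illustration under stated assumptions, not the paper's actual protocol: it measures a pretrained ResNet50's top-1 ImageNet accuracy on two hypothetical folders of pig photos, one from factory-farm settings and one from free-range settings. The folder paths and file layout are placeholders.

```python
# Minimal sketch, not the paper's protocol: compare a pretrained ImageNet
# classifier's top-1 accuracy on images of the same animal in two settings.
# Folder paths below are hypothetical placeholders.
from pathlib import Path

import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.eval()

def top1_accuracy(folder: str, target_class: int) -> float:
    """Fraction of images in `folder` whose top-1 prediction equals
    `target_class` (an ImageNet-1k class index)."""
    paths = sorted(Path(folder).glob("*.jpg"))
    correct = 0
    with torch.no_grad():
        for path in paths:
            img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            correct += int(model(img).argmax(dim=1).item() == target_class)
    return correct / len(paths)

# ImageNet class 341 is "hog"; the two directories are hypothetical
# collections of factory-farm vs. free-range photos of pigs.
print("factory-farmed:", top1_accuracy("images/pigs_factory_farm", 341))
print("free-range:    ", top1_accuracy("images/pigs_free_range", 341))
```

A systematic accuracy gap between the two folders, replicated across model families as in the study, is the signal that training data underrepresents how most farmed animals actually live.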

The study also investigates language models. It demonstrates speciesist tendencies in text corpora via word embedding models such as GloVe or Word2Vec, which can quantify the relatedness of words. The text corpora, which are also used to train contextual models, meaning full-fledged large language models, associate farmed animals predominantly with negative terms like 'ugly', 'primitive', or 'hate'. Companion as well as non-companion species like dogs, cats, or parrots, on the other hand, are related to positive concepts like 'cute', 'love', personhood, or domesticity. To test large language models like GPT-3, the researchers designed specific prompts for bias detection. Unsurprisingly, GPT-3's outputs show the very speciesist biases that the word embeddings already signaled: the more an animal species is classified as a farmed animal (in a Western sense), the more GPT-3 tends to produce outputs related to violence against that animal. The study reveals that even language models like Delphi, which are fine-tuned for moral decision making and are supposed to be particularly sensitive to biases and discrimination, show speciesist patterns, for instance by considering "Killing a pig after it has lived a miserable life in a factory farm" okay, whereas "Killing a dog if it is culturally accepted" is deemed wrong.
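The embedding analysis can likewise be approximated in a few lines. The sketch below, assuming pretrained GloVe vectors loaded via gensim's downloader (the paper's word lists and statistical tests differ, and the attribute sets here are illustrative), compares the mean cosine similarity of several animal terms to small positive and negative attribute sets, in the spirit of WEAT-style association tests.

```python
# Illustrative sketch, assuming pretrained GloVe vectors; the paper's word
# lists and statistical tests differ. Compares mean cosine similarity of
# animal terms to positive vs. negative attribute words.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # pretrained GloVe KeyedVectors

animals = ["pig", "cow", "chicken", "dog", "cat", "parrot"]
positive = ["cute", "love", "friend"]    # example attribute set, not the paper's
negative = ["ugly", "primitive", "hate"]

def mean_similarity(word: str, attributes: list[str]) -> float:
    """Average cosine similarity between `word` and each attribute term."""
    return sum(vectors.similarity(word, a) for a in attributes) / len(attributes)

for animal in animals:
    pos = mean_similarity(animal, positive)
    neg = mean_similarity(animal, negative)
    print(f"{animal:8s} positive={pos:+.3f}  negative={neg:+.3f}  diff={pos - neg:+.3f}")
```

A farmed-animal term showing a systematically lower positive-minus-negative difference than a companion-animal term would be the embedding-level pattern the study describes.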

Between the lines

Traditionally, fairness in AI means fostering outcomes that do not impose unjustified harms on individuals, regardless of their race, gender, or other protected attributes. The paper argues for extending this tenet to algorithmic discrimination against animals. Up to now, the AI fairness community has largely disregarded this dimension of discrimination; indeed, the field of AI ethics as a whole has hitherto had an anthropocentric tailoring. However, the manifold occurrences of speciesist machine bias subtly support, endorse, and consolidate systems that foster unnecessary violence against animals. This should be a wake-up call for AI practitioners to apply the rich toolbox of existing bias mitigation measures to this problem. Whether they succeed or fail at this task is likely to determine whether AI applications across domains will underpin systems of violence against animals or counteract them by putting anti-discrimination measures into practice to the fullest possible extent.

