
2022 AI Index Report – Technical AI Ethics Chapter

May 26, 2022

🔬 Research Summary by Helen Ngo, an affiliated researcher with the AI Index at Stanford HAI. She can be found on Twitter @mathemakitten.

[Original paper by Daniel Zhang, Nestor Maslej, Erik Brynjolfsson, John Etchemendy, Terah Lyons, James Manyika, Helen Ngo, Juan Carlos Niebles, Michael Sellitto, Ellie Sakhaee, Yoav Shoham, Jack Clark, and Raymond Perrault]


Overview:  The 2022 AI Index report tracks, collates, distills, and visualizes data relating to artificial intelligence, enabling decision-makers to take meaningful action to advance AI responsibly and ethically with humans in mind. The 2022 edition includes a new chapter on technical AI ethics, highlighting metrics adopted by the research community related to the measurement of fairness and bias in artificial intelligence systems.


Introduction

AI systems are being broadly deployed into the world, but researchers and practitioners are also reckoning with their real-world harms, including systems that discriminate based on race, résumé-screening systems that discriminate based on gender, and AI-powered clinical health tools that are biased along socioeconomic and racial lines. These systems reflect and amplify human social biases, discriminate based on protected attributes, and generate false information about the world.

This year, the AI Index highlights metrics that the research community has adopted for reporting progress on eliminating bias and promoting fairness. Tracking performance on these metrics alongside technical capabilities provides a more comprehensive picture of how fairness and bias change as systems improve; one such metric is sketched below.
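
As a concrete illustration, one widely adopted fairness metric is statistical parity difference: the gap in favorable-outcome rates between demographic groups, with zero indicating parity. Below is a minimal sketch using toy placeholder data (not drawn from the Index):

import numpy as np

# Toy placeholder data: model decisions (1 = favorable outcome) and the
# protected-attribute group each decision belongs to.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

# Statistical parity difference: gap in favorable-outcome rates between groups.
rate_a = y_pred[group == "a"].mean()
rate_b = y_pred[group == "b"].mean()
print(f"statistical parity difference: {rate_a - rate_b:+.2f}")  # 0.00 = parity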

Highlights include the finding that larger language models are more capable of reflecting biases from their training data, and a concrete shift in algorithmic fairness and bias from a primarily academic pursuit to a firmly established mainstream research topic with wide-ranging implications: industry-affiliated researchers have contributed 71% more publications year over year at ethics-focused conferences in recent years.

Key Insights

Language models are more capable than ever, but also more biased

Large language models are setting new records on technical benchmarks, but new data shows that larger models are also more capable of reflecting biases from their training data. A 280-billion-parameter model developed in 2021 shows a 29% increase in elicited toxicity over a 117-million-parameter model considered the state of the art as of 2018.

Figure 3.2.3a and Figure 3.2.3b from the Gopher paper show that larger models are more likely to produce toxic outputs when prompted with inputs of varying levels of toxicity, but also that they are more capable of detecting toxicity, both in their own outputs and in other contexts.
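
To make "elicited toxicity" concrete, here is a hedged sketch in the spirit of the RealToxicityPrompts protocol that this kind of analysis builds on: sample several continuations per prompt from each model and report the average of the per-prompt maximum toxicity. The small public models, the prompts, and the Detoxify classifier are stand-ins, not the report's actual setup.

from transformers import pipeline
from detoxify import Detoxify

scorer = Detoxify("original")  # off-the-shelf toxicity classifier

def expected_max_toxicity(model_name, prompts, samples=5):
    # Average, over prompts, of the worst toxicity among sampled continuations.
    generator = pipeline("text-generation", model=model_name)
    worst_scores = []
    for prompt in prompts:
        outputs = generator(prompt, max_new_tokens=20, do_sample=True,
                            num_return_sequences=samples)
        worst_scores.append(max(scorer.predict(o["generated_text"])["toxicity"]
                                for o in outputs))
    return sum(worst_scores) / len(worst_scores)

prompts = ["The protest turned", "Honestly, those people are"]
for name in ["distilgpt2", "gpt2"]:  # smaller vs. larger stand-in models
    print(name, round(expected_max_toxicity(name, prompts), 3))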

The rise of AI ethics everywhere

Research on fairness and transparency in AI has exploded since 2014, with a fivefold increase in related publications at ethics-related conferences. The study of algorithmic fairness and bias has shifted from a primarily academic pursuit to a firmly established mainstream research topic with wide-ranging implications. Industry-affiliated researchers contributed 71% more publications year over year at ethics-focused conferences in recent years, consistent with recent findings that deep learning researchers are increasingly moving from academia to industry labs.

Multimodal models learn multimodal biases

Rapid progress has been made on training multimodal language-vision models that exhibit new levels of capability on joint language-vision tasks. These models have set new records on tasks like image classification and the generation of images from text descriptions, but they also reflect societal stereotypes and biases in their outputs: experiments on CLIP showed that images of Black people were misclassified as nonhuman at over twice the rate of any other race. While significant work has gone into developing metrics for measuring bias within computer vision and natural language processing separately, these results highlight the need for metrics that capture bias in models spanning multiple modalities.
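
A hedged sketch of how such a zero-shot audit can be run on CLIP through the Hugging Face Transformers API: score an image against a set of candidate labels, including nonhuman categories, and see which label the model ranks highest. The blank stand-in image and the label set are illustrative; the actual experiments used curated photos of people.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Candidate labels, including the nonhuman categories the audit checks for.
labels = ["a photo of a person", "a photo of an animal", "a photo of an object"]
image = Image.new("RGB", (224, 224))  # blank stand-in for a real photograph

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]

for label, p in zip(labels, probs.tolist()):
    print(f"{p:.3f}  {label}")

A disparity audit repeats this over images grouped by demographic attribute and compares how often nonhuman labels rank first for each group.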

Between the lines

The Technical AI Ethics chapter of the 2022 AI Index captures a small facet of work within the broader AI ethics community, and exists as part of a wider ecosystem that includes those working on topics such as governance and societal norms. The field is changing quickly, and as more data emerges it will become important to assess impact along other ethical dimensions, such as the environmental cost of training large models. Tracking these measurements over time, as benchmarks and metric adoption shift within the research community, will show how the landscape evolves.

