
2022 AI Index Report – Technical AI Ethics Chapter

May 26, 2022

🔬 Research Summary by Helen Ngo, an affiliated researcher with the AI Index at Stanford HAI. She can be found on Twitter @mathemakitten.

[Original paper by Daniel Zhang, Nestor Maslej, Erik Brynjolfsson, John Etchemendy, Terah Lyons, James Manyika, Helen Ngo, Juan Carlos Niebles, Michael Sellitto, Ellie Sakhaee, Yoav Shoham, Jack Clark, and Raymond Perrault]


Overview: The 2022 AI Index report tracks, collates, distills, and visualizes data relating to artificial intelligence, enabling decision-makers to take meaningful action to advance AI responsibly and ethically with humans in mind. The 2022 edition includes a new chapter on technical AI ethics, highlighting metrics adopted by the research community related to the measurement of fairness and bias in artificial intelligence systems.


Introduction

AI systems are being broadly deployed into the world, but researchers and practitioners are also reckoning with their real-world harms, including systems that discriminate based on race, résumé-screening systems that discriminate based on gender, and AI-powered clinical health tools that are biased along socioeconomic and racial lines. These systems reflect and amplify human social biases, discriminate based on protected attributes, and generate false information about the world.

This year, the AI Index highlights metrics that the research community has adopted for reporting progress in eliminating bias and promoting fairness. Tracking performance on these metrics alongside technical capabilities provides a more comprehensive perspective on how fairness and bias change as systems improve.
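To make the notion of a fairness metric concrete, below is a minimal sketch of one widely reported metric, demographic parity difference, which compares positive-prediction rates across groups. The function name, toy data, and group encoding are illustrative, not taken from the report.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    0.0 means both groups are flagged positive at the same rate;
    larger values indicate greater disparity.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive rate, group 0
    rate_1 = y_pred[group == 1].mean()  # positive rate, group 1
    return abs(rate_0 - rate_1)

# Toy example: a screening model flags 60% of group 0 as positive
# but only 20% of group 1.
y_pred = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]
group = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))  # 0.4
```

Reporting a disparity measure like this alongside headline accuracy, release after release, is the kind of paired tracking the Index advocates.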

Highlights include the finding that larger language models are more capable of reflecting biases from their training data, and a concrete shift in algorithmic fairness and bias from a primarily academic pursuit to a firmly established mainstream research topic with wide-ranging implications: researchers with industry affiliations contributed 71% more publications year over year at ethics-focused conferences in recent years.

Key Insights

Language models are more capable than ever, but also more biased

Large language models are setting new records on technical benchmarks, but new data shows that larger models are also more capable of reflecting biases from their training data. A 280-billion-parameter model developed in 2021 shows a 29% increase in elicited toxicity over a 117-million-parameter model considered state of the art as of 2018.

Figure 3.2.3a and Figure 3.2.3b from the Gopher paper show that larger models are more likely to produce toxic outputs when prompted with inputs of varying levels of toxicity, but that they are also more capable of detecting toxicity, both in their own outputs and in other contexts.
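Elicited-toxicity evaluations of this kind typically prompt a model and score its continuations with a toxicity classifier, reporting statistics such as the maximum toxicity over several samples. Below is a hedged sketch of that procedure; GPT-2 and the open-source Detoxify classifier are stand-ins for the larger models and Perspective-style scorers used in the literature, and the prompt is illustrative rather than drawn from a benchmark.

```python
from transformers import pipeline
from detoxify import Detoxify

# Generate several continuations of a prompt, then score each with a
# toxicity classifier. GPT-2 and Detoxify are illustrative stand-ins
# for the models and scorers used in the evaluations the Index tracks.
generator = pipeline("text-generation", model="gpt2")
scorer = Detoxify("original")

prompt = "You are a"  # illustrative; benchmarks use curated prompt sets
outputs = generator(
    prompt, max_new_tokens=20, num_return_sequences=5, do_sample=True
)

scores = [scorer.predict(o["generated_text"])["toxicity"] for o in outputs]
print(f"max toxicity over 5 continuations: {max(scores):.3f}")
```

Running the same scoring pipeline over models of different sizes is what makes cross-scale comparisons like the 29% figure above possible.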

The rise of AI ethics everywhere

Research on fairness and transparency in AI has exploded since 2014, with a fivefold increase in related publications at ethics-related conferences. Algorithmic fairness and bias has shifted from being primarily an academic pursuit to becoming firmly embedded as a mainstream research topic with wide-ranging implications. Researchers with industry affiliations contributed 71% more publications year over year at ethics-focused conferences in recent years. This aligns with recent findings that point to a trend of deep learning researchers transitioning from academia to industry labs.

Multimodal models learn multimodal biases

Rapid progress has been made on training multimodal language-vision models that exhibit new levels of capability on joint language-vision tasks. These models have set new records on tasks like image classification and the generation of images from text descriptions, but they also reflect societal stereotypes and biases in their outputs: experiments on CLIP found that images of Black people were misclassified as nonhuman at more than twice the rate of any other race. While significant work has gone into developing metrics for measuring bias in both computer vision and natural language processing, these results highlight the need for metrics that provide insight into biases in models spanning multiple modalities.
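As a rough illustration of how such an audit can be run, the sketch below performs zero-shot classification with CLIP against a label set that includes nonhuman categories, then compares classification rates across groups. The label set, file paths, and grouping are hypothetical placeholders, not the actual experimental setup from the report.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Zero-shot audit sketch: classify face images against a label set
# that includes nonhuman categories, then compare rates by group.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a photo of a person", "a photo of an animal", "a photo of an object"]

def predict_label(image_path):
    image = Image.open(image_path)
    inputs = processor(text=labels, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # image-text similarity
    return labels[logits.argmax().item()]

# Hypothetical placeholder data: demographic group -> image paths.
images_by_group = {"group_a": ["a1.jpg", "a2.jpg"],
                   "group_b": ["b1.jpg", "b2.jpg"]}
for group, paths in images_by_group.items():
    preds = [predict_label(p) for p in paths]
    nonhuman_rate = sum(p != "a photo of a person" for p in preds) / len(preds)
    print(group, f"nonhuman classification rate: {nonhuman_rate:.2f}")
```

Because the label prompts themselves steer CLIP's predictions, audits like this are sensitive to prompt wording, which is part of why standardized multimodal bias metrics are still an open need.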

Between the lines

The Technical AI Ethics chapter of the 2022 AI Index captures a small facet of work within the broader AI ethics community, and exists within a wider ecosystem that includes work on topics such as governance and societal norms. The field is changing quickly: as more data emerges along other ethical dimensions, such as the environmental impact of training large models, it will become important to assess impact there as well, and to keep tracking these metrics as benchmark and metric adoption within the research community evolves, in order to understand how the landscape shifts.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
