• Skip to main content
  • Skip to primary sidebar
  • Skip to footer
  • Core Principles of Responsible AI
    • Accountability
    • Fairness
    • Privacy
    • Safety and Security
    • Sustainability
    • Transparency
  • Special Topics
    • AI in Industry
    • Ethical Implications
    • Human-Centered Design
    • Regulatory Landscape
    • Technical Methods
  • Living Dictionary
  • State of AI Ethics
  • AI Ethics Brief
  • 🇫🇷
Montreal AI Ethics Institute

Montreal AI Ethics Institute

Democratizing AI ethics literacy

On the Challenges of Using Black-Box APIs for Toxicity Evaluation in Research

September 6, 2023

🔬 Research Summary by Luiza Pozzobon, a Research Scholar at Cohere For AI where she currently researches model safety. She’s also a master’s student at the University of Campinas, Brazil.

[Original paper by Luiza Pozzobon, Beyza Ermis, Patrick Lewis, and Sara Hooker]


Overview: We show how silent changes in a toxicity scoring API have impacted a fair comparison of toxicity metrics between language models over time. This affected research reproducibility and living benchmarks of model risk such as HELM. We suggest caution in applying apples-to-apples comparisons between toxicity studies and lay recommendations for a more structured approach to evaluating toxicity over time.


Introduction

An unintended consequence of the recent progress in language modeling is the models’ increasing capability of generating toxic or harmful text. Although there are usually protections to mitigate the harm of these models, they’re not fail-proof. For example, it has been shown how asking ChatGPT to act as a different persona (e.g., the boxer Muhammad Ali) increases toxic generations [1].

A quick and low-cost way to measure the possible harm a model can cause to its users is through automatic evaluation. Model generations are evaluated for toxicity by tools such as the Perspective API, which has become the standard for many research use cases as a free tool maintained by a credible institution.

However, the scientific community has overlooked the reality: the API’s underlying models are silently updated over time, and we cannot access model versioning. This implies that all research that relies on such API is not inherently reproducible, and results are not inherently comparable over time. We show the impacts of such API changes in research reproducibility and ranking of model risk featured in the HELM benchmark. We call for a more structured approach to evaluating toxicity over time.

Key Insights

Automatic toxicity evaluation

Human toxicity evaluation presents serious challenges, such as the variability of different geographies and cultural norms, the ever-expanding size of datasets, and the mental health risk it poses to evaluators exposed to highly toxic content. Due to this, automatic toxicity classification became the standard in language model evaluation and acts as a first low-cost means of metrifying a model’s toxicity. 

The most widely used tool in this regard is the Perspective API, maintained by Google’s Jigsaw team. Originally, the API was aimed to aid human-supervised content moderation online, but it’s also been frequently used in research papers and rankings of model risk. 

Backed by machine learning models, the Perspective API returns up to seven attributes of a given sequence of text. These attributes represent the perceived impact of a given comment on a range of emotional concepts. The toxicity attribute, the focus of this work, is defined as “a rude, disrespectful, or unreasonable comment that is likely to make people leave a discussion” and is available to assess sentences in more than ten languages.

Impacts on Rankings of Model Risk

To robustly evaluate a model for toxicity, we need to investigate the text they generate at scale and given a variety of contexts. In this study, we’re concerned about foundational, general language models, and we evaluate how they complete a given sentence.

A common benchmark for toxicity evaluation is the RealToxicityPrompts (RTP), a dataset built to assess the amount of toxicity a language model generates when continuing a given toxic or non-toxic text. It contains 100 thousand naturally occurring English prompts and their Perspective API toxicity scores.

Here’s how the evaluation of toxicity works in practice:

  1. The evaluated model generates 25 continuations to each prompt of the RealToxicityPrompts dataset.
  2. Those continuations are sent to the Perspective API for toxicity scoring.
  3. Toxicity metrics are computed for each set of prompts and their continuations. Reported values are the mean scores over all prompts.

Along with the dataset release, the authors ranked out-of-the-box models for toxicity, such as GPT1, GPT2, and GPT3. We got the open-sourced continuations from each model (step 1) and re-did steps 2 and 3 from above. Nothing has changed besides the time the toxicity evaluation was performed. However, toxicity scores for all models reduced drastically. GPT3’s expected maximum toxicity when conditioned on toxic prompts was 0.75 when RTP was released, and at the time of our evaluation, it was 0.62. An absolute reduction of 0.13 points just by using different API versions.

These results indicate that since toxicity scores generally got lower over time, more recent evaluations yield lower toxicity scores. If authors don’t rescore old generations, they might be led to believe that models are a lot less toxic than their predecessors, which might not be true. 

The changes in score distributions from Perspective API are true to all returned attributes, not only toxicity. In fact, toxicity was amongst the three attributes that changed the least in our evaluations. 

Impacts on Living Benchmarks

The Holistic Evaluation of Language Models (HELM) is “a living benchmark that aims to improve the transparency of language models.” It is a one-of-a-kind and extensive benchmark that aims to evaluate foundation language models from open, limited-access, or closed sources over the same set of scenarios. Before its existence, only 17.9% of its core scenarios were used to evaluate models in general, and some of the benchmarked models did not share any scenario in common. At the time of this work, HELM had benchmarked 37 models in more than 40 scenarios. Twenty other models have been added to the benchmark since.

The RealToxicityPrompts is one of the scenarios of evaluation in HELM, with models’ continuations also being scored by the Perspective API. However, the benchmark is static and prone to being outdated if the API has been updated since the model was added to the benchmark. 

When taking the published continuations of all 37 models and rescoring them under the same version (i.e., same date) of the Perspective API, the rankings changed. The most striking change was of `openai_text-curie-001`, which jumped 11 positions, going from 34th to 23rd place. Lower positions in the ranking mean lower toxicity, so this model’s perceived toxicity was largely harmed due to its outdated scores.

These findings conclude that we have not been comparing apples-to-apples due to subtle changes in the Perspective API scores. These are alarming results, as the HELM benchmark had only been active for close to 6 months at the date of this work.

Between the lines

As more and more machine learning models are being served through black-box APIs, reproducibility constraints such as the ones reported should gain visibility. Awareness of an evaluation’s limitations is crucial for effective, reproducible, and trustworthy research. 

Given our findings, we lay recommendations on how the community can help achieve such goals for toxicity evaluation:

  1. For API maintainers: version models and notify users of updates consistently. 
  2. For authors: release model generations, their toxicity scores, and code whenever possible. Add the date of toxicity scoring for each evaluated model. 
  3. When comparing new toxicity mitigation techniques with results from previous papers: for sanity, always rescore open-sourced generations. Assume unreleased generations have outdated scores and are not safely comparable.
  4. For living benchmarks such as HELM: establish a control set of sequences that is rescored with Perspective API on every model addition. If the toxicity metrics for that control set change, all previous models should be rescored. If a model cannot be rescored due to access restrictions, add a note regarding outdated results or remove the results from that benchmark version.
Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

Primary Sidebar

🔍 SEARCH

Spotlight

AI Policy Corner: Singapore’s National AI Strategy 2.0

AI Governance in a Competitive World: Balancing Innovation, Regulation and Ethics | Point Zero Forum 2025

AI Policy Corner: Frontier AI Safety Commitments, AI Seoul Summit 2024

AI Policy Corner: The Colorado State Deepfakes Act

Special Edition: Honouring the Legacy of Abhishek Gupta (1992–2024)

related posts

  • Democracy, epistemic agency, and AI: Political Epistemology in Times of Artificial Intelligence

    Democracy, epistemic agency, and AI: Political Epistemology in Times of Artificial Intelligence

  • Humans are not Boltzmann Distributions: Challenges and Opportunities for Modelling Human Feedback an...

    Humans are not Boltzmann Distributions: Challenges and Opportunities for Modelling Human Feedback an...

  • UNESCO’s Recommendation on the Ethics of AI

    UNESCO’s Recommendation on the Ethics of AI

  • Towards User-Centered Metrics for Trustworthy AI in Immersive Cyberspace

    Towards User-Centered Metrics for Trustworthy AI in Immersive Cyberspace

  • Research summary: Mass Incarceration and the Future of AI

    Research summary: Mass Incarceration and the Future of AI

  • Research summary: Digital Abundance and Scarce Genius: Implications for Wages, Interest Rates, and G...

    Research summary: Digital Abundance and Scarce Genius: Implications for Wages, Interest Rates, and G...

  • Fairness and Bias in Algorithmic Hiring

    Fairness and Bias in Algorithmic Hiring

  • Ethics and Governance of Trustworthy Medical Artificial Intelligence

    Ethics and Governance of Trustworthy Medical Artificial Intelligence

  • The Struggle for AI’s Recognition: Understanding the Normative Implications of Gender Bias in AI wit...

    The Struggle for AI’s Recognition: Understanding the Normative Implications of Gender Bias in AI wit...

  • Exploring the Carbon Footprint of Hugging Face's ML Models: A Repository Mining Study

    Exploring the Carbon Footprint of Hugging Face's ML Models: A Repository Mining Study

Partners

  •  
    U.S. Artificial Intelligence Safety Institute Consortium (AISIC) at NIST

  • Partnership on AI

  • The LF AI & Data Foundation

  • The AI Alliance

Footer

Categories


• Blog
• Research Summaries
• Columns
• Core Principles of Responsible AI
• Special Topics

Signature Content


• The State Of AI Ethics

• The Living Dictionary

• The AI Ethics Brief

Learn More


• About

• Open Access Policy

• Contributions Policy

• Editorial Stance on AI Tools

• Press

• Donate

• Contact

The AI Ethics Brief (bi-weekly newsletter)

About Us


Founded in 2018, the Montreal AI Ethics Institute (MAIEI) is an international non-profit organization equipping citizens concerned about artificial intelligence and its impact on society to take action.


Archive

  • © MONTREAL AI ETHICS INSTITUTE. All rights reserved 2024.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.
  • Learn more about our open access policy here.
  • Creative Commons License

    Save hours of work and stay on top of Responsible AI research and reporting with our bi-weekly email newsletter.