Montreal AI Ethics Institute
Democratizing AI ethics literacy

FairQueue: Rethinking Prompt Learning for Fair Text-to-Image Generation (NeurIPS 2024)

December 10, 2024

🔬 Research Summary by Christopher Teo, PhD, Singapore University of Technology and Design (SUTD).

[Original paper by Christopher T.H Teo, Milad Abdollahzadeh, Xinda Ma, Ngai-man Cheung]

Note: This paper, FairQueue: Rethinking Prompt Learning for Fair Text-to-Image Generation, will be presented at NeurIPS 2024 in Vancouver, Canada. It explores advancements in fair text-to-image diffusion models and contributes to the growing body of research on Fair Gen AI in computer vision.


Read: On Measuring Fairness in Generative Modelling (NeurIPS 2023)


Overview: This paper introduces FairQueue, a novel framework for achieving high-quality and fair text-to-image (T2I) generation. Existing T2I models, such as Stable Diffusion, are biased, and the state-of-the-art approach to mitigating this bias suffers from quality degradation. We propose FairQueue, which incorporates two key strategies, Prompt Queuing and Attention Amplification, to address these issues, achieving outstanding image quality, semantic preservation, and competitive fairness.


Introduction

Generative AI models, especially text-to-image (T2I) systems, have reshaped industries, enabling applications from creative arts to personalized content. However, traditional hard prompts—like “a headshot of a smiling person”—often fail to achieve balanced sensitive attribute (SA) distributions, such as gender or ethnicity, due to linguistic ambiguity.

The current state-of-the-art (SOTA) method, ITI-GEN, introduced a novel prompt learning approach to address these shortcomings. Instead of relying solely on hard prompts, ITI-GEN leverages reference images to learn inclusive prompts tailored to specific SA categories. By aligning the embeddings of prompts and reference images, ITI-GEN seeks to ensure fair representation. However, this method has its limitations: learned prompts often distort generated outputs, resulting in reduced image quality and semantic inconsistencies.

This paper introduces FairQueue, a framework designed to overcome these issues while maintaining competitive fairness. By stabilizing early denoising steps with Prompt Queuing and enhancing SA representation through Attention Amplification, FairQueue significantly improves image quality, semantic preservation, and fairness consistency. Extensive experiments on diverse datasets highlight its advantages over ITI-GEN, marking a step forward for fair and high-quality generative AI systems.

Fairness in Generative Models

Fairness in generative AI requires outputs to represent sensitive attributes (SAs) equally, such as gender, race, and age. For instance, when generating images from prompts like “a person,” the outputs should not disproportionately depict one gender or ethnic group over another. Fair representation ensures inclusivity and avoids reinforcing societal biases.
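One way to make this notion concrete, along the lines of the fairness discrepancy used in prior work on measuring fairness in generative models, is to compare the empirical sensitive-attribute distribution of generated samples against a uniform target. A minimal numpy sketch (the SA classifier producing `sa_labels` is assumed, not part of the paper's stated pipeline):

```python
import numpy as np

def fairness_discrepancy(sa_labels, n_classes):
    """L2 distance between the empirical sensitive-attribute (SA)
    distribution of generated samples and the uniform target.
    0.0 means perfectly balanced generation across SA categories."""
    counts = np.bincount(np.asarray(sa_labels), minlength=n_classes)
    empirical = counts / counts.sum()
    target = np.full(n_classes, 1.0 / n_classes)
    return float(np.linalg.norm(empirical - target))

# Balanced gender labels give zero discrepancy; all-one-class is maximal.
balanced = fairness_discrepancy([0, 1, 0, 1], n_classes=2)
skewed = fairness_discrepancy([0, 0, 0, 0], n_classes=2)
```

Lower is fairer: a generator that always depicts one group scores `sqrt(0.5) ≈ 0.71` on a binary attribute, while a balanced one scores 0.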

Limitations of Hard Prompts

Hard prompts, such as appending SA-related phrases (“with pale skin”) to a base prompt (“a person”), have been an intuitive method for achieving fairness. However, these prompts often fail to generate balanced outputs because they are constrained by linguistic ambiguity inherent to T2I models. For example, terms like “smiling” and “not smiling” are not easily differentiated by the T2I model, resulting in biased generated samples.
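As a concrete illustration, a hard-prompt baseline simply concatenates one SA phrase per category onto the base prompt (the strings below are illustrative, not the paper's exact prompts):

```python
base_prompt = "a headshot of a person"

# One hard prompt per sensitive-attribute category.
sa_phrases = {
    "smiling": "smiling",
    "not smiling": "not smiling",
}
hard_prompts = {
    cat: f"{base_prompt}, {phrase}" for cat, phrase in sa_phrases.items()
}
# The T2I model receives only these strings, so negated or near-synonymous
# phrases ("smiling" vs. "not smiling") are easily conflated by its text
# encoder, skewing the generated SA distribution.
```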

Analyzing the Existing State-of-the-Art Prompt Learning Approach: ITI-GEN

ITI-GEN sought to address these limitations by introducing a prompt learning approach guided by reference images. Specifically, instead of relying solely on textual descriptions, ITI-GEN aligns the embeddings of reference images and a learned inclusive prompt (a combination of learnable tokens and the original base prompt) in a shared CLIP space. This directional alignment aims to capture nuanced SA representations and improve fairness.
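The directional-alignment idea can be sketched as a loss that asks the direction between two SA categories' learned prompt embeddings to match the direction between their reference-image embeddings in CLIP space. This is a simplified numpy sketch of the objective's shape, not ITI-GEN's exact formulation:

```python
import numpy as np

def directional_alignment_loss(img_emb_a, img_emb_b, prompt_emb_a, prompt_emb_b):
    """Sketch of an ITI-GEN-style directional loss: the direction from
    category A's reference-image embedding to category B's should match
    the direction between the corresponding learned prompt embeddings."""
    def unit(v):
        return v / np.linalg.norm(v)

    img_dir = unit(img_emb_a - img_emb_b)      # direction in CLIP image space
    prompt_dir = unit(prompt_emb_a - prompt_emb_b)  # direction in CLIP text space
    # 1 - cosine similarity: zero when the two directions agree perfectly.
    return 1.0 - float(img_dir @ prompt_dir)
```

Minimizing this over the learnable tokens pulls the prompt space into the same geometry as the reference images, which is precisely how unrelated visual concepts from those images can leak into the learned prompts.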

While ITI-GEN showed significant progress, it has notable drawbacks. Specifically, our extensive analysis found that the reference images introduce unrelated concepts into the learned prompts. This results in degraded image quality, such as distorted faces, or irrelevant elements (e.g., cartoonish styles) in the generated samples.

Our further analysis of the cross-attention maps (which we term H2I and I2H) reveals that this is caused by the learned tokens being distorted in the early denoising steps, leading to incomplete or inconsistent global structures in the generated images.

Proposed Solution: FairQueue

FairQueue introduces two key strategies:

  1. Prompt Queuing: To address early-stage degradation, FairQueue uses base prompts without SA-specific tokens in the initial denoising steps, allowing the model to form stable global structures. ITI-GEN prompts are then introduced in later stages to refine SA-specific details.
  2. Attention Amplification: By scaling the attention weights of SA tokens during the later denoising steps, FairQueue enhances SA expression without sacrificing image quality or semantic coherence.
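The two strategies above can be sketched as a single sampling loop: early steps queue only the base prompt, later steps switch to the learned inclusive prompt with amplified SA attention. The function and parameter names (`denoise_step`, `switch_step`, `amp`) are illustrative, not the paper's exact API:

```python
def fairqueue_sample(latent, base_prompt_emb, iti_gen_prompt_emb,
                     denoise_step, total_steps=50, switch_step=10, amp=2.0):
    """Illustrative sketch of FairQueue's sampling loop.

    1. Prompt Queuing: the first `switch_step` denoising steps condition
       only on the base prompt, letting the global image structure form
       without interference from distorted learned tokens.
    2. Attention Amplification: the remaining steps condition on the
       learned inclusive prompt, with the SA tokens' cross-attention
       weights scaled by `amp` to strengthen SA expression.
    """
    for t in range(total_steps):
        if t < switch_step:
            latent = denoise_step(latent, base_prompt_emb, attn_scale=1.0)
        else:
            latent = denoise_step(latent, iti_gen_prompt_emb, attn_scale=amp)
    return latent
```

In an actual diffusion pipeline, `denoise_step` would be one U-Net denoising call whose cross-attention layers scale the SA tokens' weights by `attn_scale`; here it is left abstract.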

Why These Innovations Matter

  • Stabilizing Early Denoising: By deferring the use of learned prompts, FairQueue avoids the disruptions caused by distorted tokens in the critical early stages of image synthesis.
  • Enhancing Fine-Grained Control: Attention Amplification ensures that SA-specific details are effectively incorporated, preserving fairness while improving image clarity and fidelity.

Between the lines

FairQueue’s innovations matter because they address a critical gap in T2I generation: balancing fairness with quality and semantic coherence. By identifying and tackling the root causes of ITI-GEN’s limitations, FairQueue demonstrates that fairness need not come at the expense of quality—a key consideration for ethical AI deployment.

However, challenges remain. Current approaches still rely on predefined sensitive attributes, limiting their applicability to real-world contexts where attributes are fluid or intersectional. 





© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.