
Fair allocation of exposure in recommender systems

August 24, 2023

🔬 Research Summary by Virginie Do and Nicolas Usunier

Virginie Do is a former PhD student at Meta AI (Facebook AI Research) and PSL University.

Nicolas Usunier is a research scientist at Meta AI (Facebook AI Research).

[Papers on which this research summary is based are in the References section]


Overview: Within the domain of recommender systems, algorithmic decisions regarding content exposure carry significant ethical implications, potentially marginalizing minority or disadvantaged content producers. In a series of works [2,3,4], we propose to define the fairness of ranked recommendations based on principles from economic fair division. Following these principles, we introduce new recommendation algorithms and show that they can distribute exposure more fairly among content producers while preserving the quality of recommendations for users.


Introduction

Motivation

Machine learning algorithms are widely used in the recommender systems that drive marketplaces, streaming services, and social networking platforms. Their main purpose is to provide users with personalized recommendations by predicting their preferences and sorting available content according to these predictions. However, by selecting content from some producers over others, recommendation algorithms decide who is visible and who is not. These decisions have real ethical and social implications, such as the risk of overlooking minority or disadvantaged groups when suggesting profiles to employers, or the over-representation of certain opinions on social networks. Our work aims to develop recommendation algorithms that limit exposure bias, taking into account both users and content producers.

Context

We consider a classical model of the recommendation problem in which the system observes users in sequential sessions and must choose K items (e.g., videos) to recommend from a set of items created by producers (e.g., video creators). The traditional solution comprises two steps: 1) Estimation: predicting a preference score of the current user for each item, based on a history of interactions, via a learning model; 2) Ranking: sorting the items by their estimated scores and recommending the ordered list (or ranking) of the K best. This ranking step can produce “superstar” or “winner-take-all” effects, in which certain groups of producers capture all the exposure even when their scores are only slightly higher. In addition, biases in the estimated preferences, such as learned stereotypes, can be amplified by the ranking.
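To make the two-step pipeline and the “winner-take-all” effect concrete, here is a minimal toy sketch (our own illustration, not the authors’ code): when one producer’s estimated scores are only marginally higher than another’s, plain score sorting hands that producer all of the top-K exposure.

```python
# Toy sketch of the classical estimate-then-rank pipeline (illustrative only).
import numpy as np

def recommend_top_k(scores: np.ndarray, k: int) -> np.ndarray:
    """Step 2 (Ranking): indices of the k items with the highest estimated scores."""
    return np.argsort(-scores)[:k]

# Producer A's items score only marginally higher than producer B's...
rng = np.random.default_rng(0)
scores_a = 0.51 + 0.01 * rng.random(50)   # 50 items from producer A
scores_b = 0.49 + 0.01 * rng.random(50)   # 50 items from producer B
scores = np.concatenate([scores_a, scores_b])

# ...yet producer A captures 100% of the recommended slots.
top10 = recommend_top_k(scores, k=10)
share_a = np.mean(top10 < 50)             # fraction of the top-10 coming from producer A
print(f"Producer A's share of the top-10: {share_a:.0%}")  # -> 100%
```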

Key insights

Fair recommendation as fair allocation

To limit these exposure biases, we draw on the economic theory of fair division [1]. We formalize recommendation as a fair division problem in which the scarce resource to be distributed among content producers is the available exposure [2]. The decision-maker must consider the interests, or “utility,” of both users and content producers. We assume that these utilities are defined as follows:

  • Users want high-quality rankings;
  • Content producers want high exposure.

In this formal framework, fairness consists of simultaneously following two distributive principles (stated more formally below):

  • (I) Pareto efficiency: increase utility whenever doing so harms no one;
  • (II) Transfer principle: otherwise, give priority to the worst-off.
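
In standard notation (ours, following textbook treatments of fair division such as [1]), writing u = (u_1, …, u_n) for the profile of utilities of users and producers and W for the welfare objective being maximized, the two principles can be written as:

```latex
% (I) Pareto efficiency: if everyone is at least as well off and someone is strictly
% better off, the welfare objective strictly prefers the new profile.
u'_i \ge u_i \;\; \forall i \quad \text{and} \quad u'_j > u_j \ \text{for some } j
\;\Longrightarrow\; W(u') > W(u)

% (II) Transfer (Pigou-Dalton) principle: transferring a small amount \epsilon > 0 of
% utility from a better-off individual j to a worse-off individual i, without reversing
% their order, does not decrease welfare.
u_i < u_j \ \text{and} \ 0 < \epsilon \le \tfrac{u_j - u_i}{2}
\;\Longrightarrow\; W(\ldots, u_i + \epsilon, \ldots, u_j - \epsilon, \ldots) \ge W(u)
```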

Maximizing concave welfare functions

To generate rankings that fairly allocate exposure to content producers while preserving the quality of recommendations from the users’ perspective, we propose to maximize functions that account for the welfare of both users and producers. Rather than maximizing a classical measure of ranking quality alone, we add to the objective a concave function of the exposure given to each producer in the rankings. Concavity allows exposure to be redistributed from the most visible producers to the less visible ones, because it encodes the property of diminishing returns: an additional view counts more for a producer with ten views than for a producer with 10 million views [2].
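As a quick illustration of diminishing returns, here is a toy calculation using a logarithmic function of exposure (one possible concave choice, used here purely for illustration):

```python
# Toy illustration of diminishing returns under a concave function of exposure.
import math

def marginal_value(views: int) -> float:
    """Welfare gain from one additional view, under psi(e) = log(1 + e)."""
    return math.log(1 + views + 1) - math.log(1 + views)

print(marginal_value(10))          # ~0.087: one extra view matters a lot at 10 views...
print(marginal_value(10_000_000))  # ~1e-7: ...and almost nothing at 10 million views
```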

Following these principles, we propose new global recommendation objectives f that make trade-offs between a classical measure of ranking performance for users (the Discounted Cumulative Gain, DCG) and a concave function that redistributes exposure among producers. We consider two types of concave welfare functions from the economic literature: Gini welfare functions [3] and additive functions [2]. These welfare functions are related to well-known inequality measures such as the Gini index. Maximizing them reduces inequality of exposure among content producers without destroying total utility.
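Schematically, and in our own notation (the exact objectives differ across the papers), such an objective adds a concave transform of each producer’s exposure to the users’ ranking quality:

```latex
% Schematic form of the objective (our notation): a trade-off between the users'
% ranking quality and a concave welfare function of the producers' exposure.
f(\pi) \;=\; \underbrace{\sum_{u \in \mathcal{U}} \mathrm{DCG}_u(\pi)}_{\text{ranking quality for users}}
\;+\; \lambda \underbrace{\sum_{p \in \mathcal{P}} \psi\big(E_p(\pi)\big)}_{\text{concave welfare of producers' exposure}},
\qquad \psi \ \text{concave}, \quad \lambda \ge 0
```

Here π denotes the recommendation policy (the rankings shown to users), E_p(π) the total exposure it gives to producer p, ψ a concave function, and λ the relative weight given to producer welfare; Gini welfare functions instead aggregate exposures with a rank-dependent weighting that puts more weight on the worst-off producers.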

Using convex optimization techniques, we develop algorithms that can maximize recommendation objectives of this form in various settings. In the online setting, where users are observed in consecutive sessions, our algorithm estimates the current user’s preferences, applies a bonus to items that have received little exposure in previous sessions, and ranks items according to this modified score. The corresponding ranking then maximizes an approximation of f [4].
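A minimal sketch of this online ranking step (our own simplification with illustrative names such as `lam` and `psi`, not the authors’ code): the bonus is the marginal welfare of one extra unit of exposure under a concave ψ, so items from producers with little past exposure get the largest boost.

```python
# Sketch of the online ranking step (our simplification, not the authors' code).
import numpy as np

def rank_with_exposure_bonus(
    scores: np.ndarray,        # estimated preference scores of the current user, one per item
    producer_of: np.ndarray,   # producer index of each item
    exposure: np.ndarray,      # exposure accumulated by each producer in previous sessions
    k: int,
    lam: float = 1.0,          # relative weight of producer welfare (illustrative name)
) -> np.ndarray:
    # Bonus = marginal welfare of one extra unit of exposure under psi(e) = lam * log(1 + e):
    # items whose producers have little past exposure receive a larger boost.
    bonus = lam / (1.0 + exposure[producer_of])
    modified = scores + bonus
    return np.argsort(-modified)[:k]

# After showing the ranking, each recommended item's producer has its exposure updated
# (e.g., with position-discounted weights) before the next session is processed.
```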

Results 

The first two results are theoretical. First, we show that all rankings generated by maximizing a function of the form f simultaneously satisfy the distributive fairness properties (I)-(II). Second, we show that the implementation of our algorithm has a computational cost equivalent to the cost of sorting: it is, therefore, as efficient as traditional ranking algorithms. 

Finally, we confirm these results experimentally. In simulations based on music recommendation data, we show that our algorithm can trade off the quality of recommendations for users against the inequality of exposure between producers. We compare it to a recent method that enforces hard fairness constraints on the exposure given to each producer. That method strongly degrades recommendation quality for users when it reduces inequality among producers. In contrast, by varying the relative weight given to user welfare and item welfare in the objective f, our algorithm can reduce exposure inequalities at a low cost in recommendation quality.

Between the lines

In this work, we developed a conceptual framework grounded in distributive justice principles to evaluate the fairness of ranked recommendations. Our results have led to efficient algorithms that can be implemented in practice, serving as a stepping stone towards principled approaches to fairness in recommender systems.

In addition to fairness for producers, we also address fairness for users with a similar approach based on concave welfare functions and economic principles of redistribution. There is room for improvement: most work on fairness considers static user behavior, whereas real-world recommender systems have an impact on user preferences and habits. In future work, we intend to incorporate more complex models of the dynamics of recommender systems.

References

[1] Moulin. Fair division and collective welfare. MIT Press, 2004.

[2] Do, Corbett-Davies, Atif, & Usunier. Two-sided fairness in rankings via Lorenz dominance. NeurIPS 2021.

[3] Do & Usunier. Optimizing generalized Gini indices for fairness in rankings. SIGIR 2022.

[4] Do, Dohmatob, Pirotta, Lazaric, & Usunier. Contextual bandits with concave rewards and an application to fair ranking. ICLR 2023.
