Fair allocation of exposure in recommender systems

August 24, 2023

🔬 Research Summary by Virginie Do and Nicolas Usunier

Virginie Do is a former PhD student at Meta AI (Facebook AI Research) and PSL University.

Nicolas Usunier is a research scientist at Meta AI (Facebook AI Research).

[Papers on which this research summary is based are in the References section]


Overview: Within the domain of recommender systems, algorithmic decisions regarding content exposure carry significant ethical implications, potentially marginalizing minority or disadvantaged content producers. In a series of works [2,3,4], we propose to define the fairness of ranked recommendations based on principles from economic fair division. Following these principles, we introduce new recommendation algorithms and show that they can distribute exposure more fairly among content producers while preserving the quality of recommendations for users.


Introduction

Motivation

Machine learning algorithms are widely used in the recommender systems that drive marketplaces, streaming, and social networking platforms. Their main purpose is to provide users with personalized recommendations by predicting their preferences and sorting available content according to these predictions. However, by selecting content from some producers over others, recommendation algorithms decide who is visible and who is not. These decisions have real ethical and social implications, such as the risks of overlooking minority or disadvantaged groups when suggesting profiles to employers or the problems of over-representation of certain opinions on social networks. Our work aims to develop recommendation algorithms that limit exposure bias, taking into account both users and content producers. 

Context

We consider a classical model of the recommendation problem in which the system observes users in sequential sessions and must choose K items (e.g., videos) to recommend from a set of items created by producers (e.g., video creators). The traditional solution comprises two steps: 1) Estimation: predicting a preference score of the current user for each item, using a model learned from the history of interactions; 2) Ranking: sorting the items by their estimated scores and recommending the ordered list (or ranking) of the K best. This ranking step can produce “superstar” or “winner-take-all” effects, where certain groups of producers capture all the exposure even when their scores are only slightly higher. In addition, biases in the estimated preferences, such as stereotypes learned by the model, can be amplified by the ranking step.
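
For intuition, here is a minimal sketch of this two-step score-then-rank pipeline; the random scores, array sizes, and value of K are illustrative assumptions, not details from the papers:

```python
import numpy as np

# Toy version of the standard pipeline: score items, then recommend the top K.
rng = np.random.default_rng(0)
n_items, K = 1000, 10

# 1) Estimation: predicted preference scores for the current user
#    (in practice produced by a model learned from interaction histories).
scores = rng.normal(size=n_items)

# 2) Ranking: recommend the K items with the highest estimated scores.
top_k = np.argsort(-scores)[:K]

# A tiny score advantage is enough to enter the top K in every session,
# which is how a small set of producers can capture nearly all the exposure.
print("recommended items:", top_k)
```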

Key insights

Fair recommendation as fair allocation

To limit these exposure biases, we draw on the economic theory of fair division [1]. We formalize recommendation as a fair division problem in which the scarce resource to be distributed among content producers is the available exposure [2]. The decision-maker must consider the interests, or “utility,” of both users and content producers. We assume that these utilities can be defined as follows:

  • Users want high-quality rankings;
  • Content producers want high exposure.

In this formal framework, fairness consists of simultaneously following two distributive principles (a toy illustration follows this list):

  • (I) Pareto efficiency: increase utilities whenever doing so harms no one;
  • (II) Transfer principle: when trade-offs are necessary, give priority to the worst-off.
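
As a toy illustration of these two principles, consider the hypothetical producer utility profiles below; the numbers are made up for illustration and do not come from the papers:

```python
import numpy as np

# Hypothetical utility profiles for three producers, used only to
# illustrate the two distributive principles above.
baseline = np.array([4.0, 1.0, 2.0])

# (I) Pareto efficiency: a profile that raises someone's utility while
#     harming no one should be preferred to the baseline.
pareto_improvement = np.array([4.0, 1.5, 2.0])
assert np.all(pareto_improvement >= baseline) and np.any(pareto_improvement > baseline)

# (II) Transfer principle: moving utility from a better-off producer to a
#     worse-off one (total unchanged) should also be preferred.
after_transfer = np.array([3.5, 1.5, 2.0])
assert np.isclose(after_transfer.sum(), baseline.sum())
assert after_transfer.min() > baseline.min()
```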

Maximizing concave welfare functions

To generate rankings that fairly allocate exposure to content producers while preserving the quality of recommendations from the users’ perspective, we propose to maximize functions that account for the welfare of both users and producers. Rather than maximizing a classical measure of ranking quality alone, we add to the objective a concave function of the exposure given to each producer in the rankings. Concavity allows exposure to be redistributed from the most visible producers to the less visible ones because it encodes the property of diminishing returns: an additional view counts more for a producer with ten views than for a producer with ten million views [2].
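
To make the diminishing-returns property concrete, here is a small sketch; the specific choice ψ(x) = √x is an assumption for illustration, as the papers work with general families of concave functions:

```python
import numpy as np

# Concave welfare of exposure: the marginal value of one extra view
# shrinks as a producer's total exposure grows.
def psi(exposure):
    return np.sqrt(exposure)  # illustrative concave function (an assumption)

small, huge = 10, 10_000_000
print(psi(small + 1) - psi(small))  # ~0.154   -> one extra view matters a lot
print(psi(huge + 1) - psi(huge))    # ~0.00016 -> one extra view barely matters
```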

Following these principles, we propose new global recommendation objectives f that trade off a classical measure of ranking performance for users (the Discounted Cumulated Gain, DCG) against a concave function that redistributes exposure between producers. We consider two types of concave welfare functions from the economic literature: Gini welfare functions [3] and additive functions [2]. These welfare functions are related to well-known inequality measures such as the Gini index. Maximizing them reduces inequality in exposure among content producers without sacrificing total utility.
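
Below is a hedged sketch of an objective of this general shape: user-side ranking quality (DCG) plus a weighted concave welfare of producer exposure. The additive combination, the weight lam, and the square-root welfare are illustrative assumptions, not the exact objectives defined in [2] or [3]:

```python
import numpy as np

def dcg(relevance_in_rank_order):
    # Discounted Cumulated Gain (DCG) of one user's ranked list.
    ranks = np.arange(1, len(relevance_in_rank_order) + 1)
    return np.sum(relevance_in_rank_order / np.log2(ranks + 1))

def producer_welfare(exposures, psi=np.sqrt):
    # Concave welfare of the exposure received by each producer.
    return np.sum(psi(exposures))

def objective_f(relevance_in_rank_order, exposures, lam=1.0):
    # lam trades off user-side ranking quality against producer-side welfare.
    return dcg(relevance_in_rank_order) + lam * producer_welfare(exposures)

# Example with made-up numbers: relevance of 5 recommended items (in rank
# order) and the exposure counts of 4 producers.
print(objective_f(np.array([3.0, 2.0, 3.0, 0.0, 1.0]), np.array([120.0, 5.0, 0.0, 40.0])))
```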

Using convex optimization techniques, we develop algorithms that maximize recommendation objectives of this form in various settings. In the online setting, where users are observed in consecutive sessions, our algorithm estimates the current user’s preferences, applies a bonus to items that have received little exposure in previous sessions, and ranks the items according to these modified scores. The resulting ranking maximizes an approximation of f [4].
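
A minimal sketch of this online scheme, under stated assumptions, is shown below. The square-root-derived bonus and the logarithmic position weights are stand-ins for illustration; the actual bonus in [4] is derived from the chosen concave welfare term:

```python
import numpy as np

def recommend_session(session_scores, cumulative_exposure, K=10, lam=1.0, eps=1e-6):
    # Bonus is larger for items with little past exposure: here it is the
    # derivative of the (assumed) concave welfare psi(x) = sqrt(x).
    bonus = lam / (2.0 * np.sqrt(cumulative_exposure + eps))
    modified_scores = session_scores + bonus

    # Rank by the modified scores and recommend the top K,
    # exactly as a standard sort-based recommender would.
    ranking = np.argsort(-modified_scores)[:K]

    # Track exposure with simple position-dependent weights (an assumption).
    position_weights = 1.0 / np.log2(np.arange(2, K + 2))
    cumulative_exposure[ranking] += position_weights
    return ranking, cumulative_exposure
```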

Results 

The first two results are theoretical. First, we show that all rankings generated by maximizing a function of the form f simultaneously satisfy the distributive fairness properties (I)-(II). Second, we show that the implementation of our algorithm has a computational cost equivalent to the cost of sorting: it is, therefore, as efficient as traditional ranking algorithms. 

Finally, we confirm these results experimentally. In simulations on music recommendation data, we show that our algorithm can trade off the quality of recommendations for users against the inequality of exposure between producers. We compare it to a recent method that enforces hard fairness constraints on the exposure given to each producer; that method strongly degrades recommendation quality for users when reducing inequality among producers. In contrast, by varying the relative weight given to user welfare and item welfare in the objective f, our algorithm reduces exposure inequalities at little cost to recommendation quality.

Between the lines

In this work, we crafted a conceptual framework based on distributive justice principles to evaluate the fairness of ranked recommendations. Our results have led to efficient algorithms that can be implemented in practice, serving as a stepping stone toward principled approaches to fairness in recommender systems.

In addition to fairness for producers, we also address fairness for users with a similar approach based on concave welfare functions and economic principles of redistribution. There is room for improvement: most work on fairness considers static user behavior, whereas real-world recommender systems have an impact on user preferences and habits. In future work, we intend to incorporate more complex models of the dynamics of recommender systems.

References

[1] Moulin. Fair division and collective welfare. MIT Press. 2004.

[2] Do, Corbett-Davies, Atif, & Usunier. Two-sided fairness in rankings via Lorenz dominance. NeurIPS 2021.

[3] Do, Dohmatob, Pirotta, Lazaric, & Usunier. Contextual bandits with concave rewards and an application to fair ranking. ICLR 2023.

[4] Do & Usunier. Optimizing generalized Gini indices for fairness in rankings. SIGIR 2022.

