🔬 Research Summary by Virginie Do and Nicolas Usunier
Virginie Do is a former PhD student at Meta AI (Facebook AI Research) and PSL University.
Nicolas Usunier is a research scientist at Meta AI (Facebook AI Research).
[Papers on which this research summary is based are in the References section]
Overview: Within the domain of recommender systems, algorithmic decisions regarding content exposure carry significant ethical implications, potentially marginalizing minority or disadvantaged content producers. In a series of works [2,3,4], we propose to define the fairness of ranked recommendations based on principles from economic fair division. Following these principles, we introduce new recommendation algorithms and show that they can distribute exposure more fairly among content producers while preserving the quality of recommendations for users.
Machine learning algorithms are widely used in the recommender systems that drive marketplaces, streaming, and social networking platforms. Their main purpose is to provide users with personalized recommendations by predicting their preferences and sorting available content according to these predictions. However, by selecting content from some producers over others, recommendation algorithms decide who is visible and who is not. These decisions have real ethical and social implications, such as the risks of overlooking minority or disadvantaged groups when suggesting profiles to employers or the problems of over-representation of certain opinions on social networks. Our work aims to develop recommendation algorithms that limit exposure bias, taking into account both users and content producers.
We consider a classical model of the recommendation problem where the system observes users in sequential sessions and must choose K items (videos) to recommend from a set of items created by producers (video creators). The traditional solution comprises two steps: 1) Estimation: predicting a preference score for the current user for each item, based on a history of interactions via a learning model; 2) Ranking: ranking the items by their estimated scores and recommending the ordered list (or ranking) of the K best. This ranking step can produce “superstar” or “winner-take-all” effects, where certain groups of producers capture all the exposure, even with slightly higher scores. In addition, biases in estimated preferences due to learning stereotypes can be amplified by ranking.
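The two-step pipeline above can be sketched as follows. The scores and item indices are purely illustrative; the point is that a deterministic sort hands the top slot to whichever item has even a marginally higher estimated score, producing the winner-take-all effect.

```python
import numpy as np

def recommend_top_k(scores: np.ndarray, k: int) -> np.ndarray:
    """Classical ranking step: sort items by estimated preference score
    (descending) and return the indices of the K best."""
    order = np.argsort(-scores)  # negate for descending order
    return order[:k]

# Hypothetical estimated scores for 5 items: item 2 barely beats item 0,
# yet it captures the top position in every session.
scores = np.array([0.90, 0.40, 0.91, 0.10, 0.55])
print(recommend_top_k(scores, k=3))  # -> [2 0 4]
```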
Fair recommendation as fair allocation
To limit these exposure biases, we draw on the economic theory of fair division. We formalize recommendation as a fair division problem in which the scarce resource to be distributed among content producers is the available exposure. The decision-maker must consider the interests or “utility” of both users and item producers. We assume that the utilities of users and items can be defined as follows:
- Users want high-quality rankings.
- Content producers want high exposure.
In this formal framework, fairness consists of simultaneously following two distributive principles:
- I/ Pareto efficiency: increase the utility of some participants whenever doing so harms no one,
- II/ Transfer principle: otherwise, give priority to the worst-off.
Maximizing concave welfare functions
To generate rankings that fairly allocate exposure to content producers while preserving the quality of recommendations from the users’ perspective, we propose to maximize functions that account for the welfare of both users and producers. Instead of maximizing a classical measure of ranking quality alone, we add to the objective a concave function of the exposure given to each producer in the rankings. Concavity allows exposure to be redistributed from the most visible producers to the less visible ones, because it encodes the property of diminishing returns: an additional view counts more for a producer with ten views than for a producer with 10 million views.
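The diminishing-returns property is easy to check numerically. Here `sqrt` stands in as an illustrative concave function (the papers consider general concave welfare functions, not this specific choice):

```python
import math

def marginal_gain(views: int, phi=math.sqrt) -> float:
    """Welfare gained from one extra view under a concave function phi
    (sqrt is an illustrative choice): phi(views + 1) - phi(views)."""
    return phi(views + 1) - phi(views)

# An extra view matters far more to a little-seen producer than to a superstar.
print(marginal_gain(10))          # roughly 0.15
print(marginal_gain(10_000_000))  # roughly 0.00016
```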
Following these principles, we propose new global recommendation objectives f that trade off a classical measure of ranking performance for users (the Discounted Cumulative Gain) against a concave function that redistributes exposure between producers. We consider two types of concave welfare functions from the economic literature: Gini welfare functions and additive functions. These welfare functions are related to well-known inequality measures such as the Gini index. Maximizing them reduces inequality in exposure among content producers without destroying total utility.
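A minimal sketch of such an objective, using the additive form: a DCG term for users plus a concave welfare of producer exposure. The weight `alpha` and the choice `phi = sqrt` are illustrative assumptions, not the papers’ exact parametrization.

```python
import numpy as np

def dcg(relevances: np.ndarray) -> float:
    """Discounted Cumulative Gain of relevances given in rank order."""
    positions = np.arange(1, len(relevances) + 1)
    return float(np.sum(relevances / np.log2(positions + 1)))

def objective_f(relevances, exposure, alpha=0.5, phi=np.sqrt) -> float:
    """Two-sided objective: user-side ranking quality (DCG) plus an additive
    concave welfare of producer exposure. alpha trades off the two sides."""
    user_welfare = dcg(np.asarray(relevances, dtype=float))
    producer_welfare = float(np.sum(phi(np.asarray(exposure, dtype=float))))
    return (1.0 - alpha) * user_welfare + alpha * producer_welfare
```

Varying `alpha` between 0 (users only) and 1 (producers only) traces out the trade-off between recommendation quality and exposure equality.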
Using convex optimization techniques, we develop algorithms that maximize recommendation objectives of this form in various settings. In the online setting, where users are observed in consecutive sessions, our algorithm estimates the current user’s preferences, applies a bonus to items that have received little exposure in previous sessions, and ranks items according to this modified score. The resulting ranking maximizes an approximation of f.
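One session of this online step can be sketched as follows. The bonus here is the gradient of a concave welfare of accumulated exposure (sqrt again, an illustrative choice), so under-exposed items receive larger boosts; the mixing weight `alpha` is likewise an assumption for illustration.

```python
import numpy as np

def online_ranking_step(pred_scores, exposure, k, alpha=0.5):
    """One session (a sketch): add to each item's predicted score a bonus
    equal to the gradient of a concave welfare of its accumulated exposure,
    then sort. Items with little past exposure get larger bonuses."""
    bonus = 1.0 / (2.0 * np.sqrt(exposure + 1.0))  # d/de of sqrt(e + 1)
    modified = (1.0 - alpha) * pred_scores + alpha * bonus
    return np.argsort(-modified)[:k]

# Two items with equal predicted scores: the one with less past exposure
# now ranks first, redistributing visibility across sessions.
scores = np.array([0.5, 0.5])
exposure = np.array([10.0, 0.0])
print(online_ranking_step(scores, exposure, k=2))  # -> [1 0]
```

Because the per-session computation is a score adjustment followed by a sort, its cost matches that of ordinary ranking.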
The first two results are theoretical. First, we show that all rankings generated by maximizing a function of the form f simultaneously satisfy the distributive fairness properties (I)-(II). Second, we show that the implementation of our algorithm has a computational cost equivalent to the cost of sorting: it is, therefore, as efficient as traditional ranking algorithms.
Finally, we confirm these results experimentally. In simulations on music recommendation data, we show that our algorithm can balance the quality of recommendations for users against the inequality of exposure between producers. We compare it to a recent method that enforces hard fairness constraints on the exposure given to each producer. That method strongly degrades recommendation quality for users as it reduces inequality among producers. In contrast, by varying the relative weight given to user welfare and item welfare in the objective f, our algorithm reduces exposure inequalities at a low cost in recommendation quality.
Between the lines
In this work, we crafted a conceptual framework using distributive justice principles to evaluate ranked recommendation fairness. Our results have led to the development of efficient algorithms that can be practically implemented, serving as a stepping stone towards developing principled approaches to fairness in recommender systems.
In addition to fairness for producers, we also address fairness for users with a similar approach based on concave welfare functions and economic principles of redistribution. There is room for improvement: most work on fairness considers static user behavior, whereas real-world recommender systems have an impact on user preferences and habits. In future work, we intend to incorporate more complex models of the dynamics of recommender systems.
[1] Moulin. Fair Division and Collective Welfare. MIT Press, 2004.
[2] Do, Corbett-Davies, Atif, & Usunier. Two-sided fairness in rankings via Lorenz dominance. NeurIPS 2021.
[3] Do & Usunier. Optimizing generalized Gini indices for fairness in rankings. SIGIR 2022.
[4] Do, Dohmatob, Pirotta, Lazaric, & Usunier. Contextual bandits with concave rewards and an application to fair ranking. ICLR 2023.