Modeling Content Creator Incentives on Algorithm-Curated Platforms

May 31, 2023

🔬 Research Summary by Jiri Hron, a PhD student at the University of Cambridge who worked as a student researcher at Google Brain for most of his PhD.

[Original paper by Jiri Hron, Karl Krauth, Michael I. Jordan, Niki Kilbertus, and Sarah Dean]


Overview: While content creators on online platforms compete for user attention, their exposure crucially depends on algorithmic choices made by the platform. In this paper, we formalize exposure games, a model of the incentives induced by recommender systems. We prove that seemingly innocuous algorithmic choices in modern recommenders may affect incentivized creator behaviors in significant and unexpected ways. We develop techniques to numerically find equilibria in exposure games and leverage them for pre-deployment audits of recommender systems. 


Introduction

In 2018, Jonah Peretti (CEO of BuzzFeed) raised the alarm when the Facebook News Feed started boosting junk and divisive content. In Poland, the same update caused politicians to increase negative messaging. Tailoring content to algorithms is not unique to social media. For example, some search engine optimization (SEO) professionals specialize in managing the impacts of Google Search updates. While motivations for adapting content range from economic to socio-political, they often translate into the same operative goal: exposure maximization.

We formalize a game-theoretic model of how a platform’s recommendation system shapes the incentives of content creators, which we call exposure games. By developing tools to find equilibria in exposure games, we show that subtle algorithmic choices may significantly and unexpectedly affect incentivized creator behaviors. These tools can also be used for pre-deployment audits of recommendation systems on such platforms.

Key Insights

How recommender systems induce incentives for content creators

Consider the case of content creators on YouTube and the recommender system that displays “videos to watch next.” Since the revenue of video creators is proportional to their view numbers, they are incentivized to maximize exposure, i.e., to tailor their content to be ranked highly in the “to watch next” column. In our setting, we assume there is a fixed recommender system trained on past data and a fixed population of users. This induces a demand distribution, representing the typical platform traffic over a predefined period.

We study how the algorithmic choices of the recommender system may affect the strategies of exposure-maximizing content creators. We propose an incentive-based behavior model called an exposure game, where creators compete for the finite user attention pool by tailoring content to the given algorithm. When creators act strategically, a steady state—Nash equilibrium (NE)—may be reached, with no one able to improve their exposure unilaterally. The content produced in a Nash equilibrium can thus be interpreted as what the algorithm implicitly incentivizes.

How to model an exposure game

To abstract from the specific content modality (videos, images, text, etc.), we focus on algorithms that model user preferences as an inner product of user and item embeddings (numerical vectors representing the content) and recommend items based on the estimated preference. The expected exposure of a creator is the expected number of interactions under the user demand distribution and the rankings provided by the recommender system. An exposure game consists of a finite number of creators trying to produce content to maximize their exposure. Creators choose to produce content by selecting its embedding vector, rationally adapting to the user demand distribution and the precise workings of the recommender system.
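To make the setup concrete, here is a minimal sketch (not the paper’s implementation) of how expected exposure can be computed from inner-product scores in the deterministic, top-1 case; the embedding dimension, user sample size, and creator count are all illustrative.

```python
# A minimal sketch of expected exposure from inner-product preference scores.
# User embeddings stand in for the demand distribution; creator "strategies"
# are the item embeddings they choose.
import numpy as np

rng = np.random.default_rng(0)
d = 8            # embedding dimension (illustrative)
n_users = 1000   # samples from the demand distribution (illustrative)
n_creators = 5

user_emb = rng.normal(size=(n_users, d))        # fixed: induced by past data
creator_emb = rng.normal(size=(n_creators, d))  # chosen strategically by creators

scores = user_emb @ creator_emb.T               # estimated preferences (inner products)

# With no exploration, each user is shown only the top-scoring item, so a
# creator's expected exposure is the fraction of users for whom they rank first.
top_item = scores.argmax(axis=1)
exposure = np.bincount(top_item, minlength=n_creators) / n_users
print(exposure)  # one exposure share per creator, summing to 1
```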

Common factorization-based algorithms also have a non-negative temperature parameter τ, which controls the spread of exposure probabilities over the top-scoring items. This parameter can be thought of as controlling the level of “exploration” performed by the recommender: when τ is zero, the top-ranking content is exposed with certainty; when τ is greater than zero, randomness is added such that all contents have a non-zero (albeit potentially small) probability of being exposed. No assumptions are made on how the embeddings are obtained. Thus all our results apply equally to classical matrix factorization and deep learning-based systems.
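The following sketch shows one plausible way such a temperature enters the exposure probabilities, via a softmax over the preference scores; the exact randomization rule used by any particular recommender is an assumption here.

```python
# A sketch of how a temperature tau > 0 spreads exposure probabilities over
# items via a softmax of the scores (one plausible instantiation, assumed here).
import numpy as np

def exposure_probs(scores, tau):
    """Per-user probability of exposing each item, for a (users x items) score matrix."""
    if tau == 0.0:
        # Deterministic: only the top-ranked item is exposed.
        probs = np.zeros_like(scores)
        probs[np.arange(scores.shape[0]), scores.argmax(axis=1)] = 1.0
        return probs
    z = scores / tau
    z -= z.max(axis=1, keepdims=True)           # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

scores = np.array([[2.0, 1.0, 0.5]])
for tau in (0.0, 0.5, 5.0):
    print(tau, exposure_probs(scores, tau))     # higher tau -> more uniform exposure
```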

Existence of equilibria and the effect of exploration

First, we theoretically study the existence of different types of equilibria in exposure games, where each producer is satisfied either with a single strategy vector (pure Nash equilibrium) or a distribution over strategies (mixed Nash equilibrium). Mixed strategies can be thought of as creating multiple items and distributing time or budget over them. When no equilibrium exists, creators may persistently oscillate in competition between strategies. The key results are that at least one mixed Nash equilibrium exists in every exposure game, whereas pure Nash equilibria need not exist in the τ = 0 or the τ > 0 case.

However, when we relax the concept of Nash equilibria to situations in which no player can improve their exposure by at least some fixed non-zero amount ε, the situation changes: the number and existence of such equilibria critically depend on the temperature parameter τ. For heavily exploring recommenders, all creators are incentivized to uniformly produce homogeneous content, whereas low exploration levels may lead to the non-existence of equilibria.
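To make the relaxed notion concrete, the sketch below checks whether a profile of creator embeddings is an ε-equilibrium by testing a coarse set of unilateral deviations; the paper’s numerical techniques for finding equilibria are more involved, and all names and sizes here are illustrative.

```python
# An illustrative epsilon-Nash check: a profile of creator embeddings is an
# epsilon-equilibrium if no creator can raise their expected exposure by more
# than epsilon with any unilateral deviation (here, over a coarse candidate set).
import numpy as np

def expected_exposure(user_emb, creator_emb, tau):
    """Average softmax exposure per creator under the sampled user demand."""
    z = (user_emb @ creator_emb.T) / max(tau, 1e-8)
    z -= z.max(axis=1, keepdims=True)
    probs = np.exp(z)
    probs /= probs.sum(axis=1, keepdims=True)
    return probs.mean(axis=0)

def is_epsilon_ne(user_emb, creator_emb, candidate_strategies, tau, eps):
    base = expected_exposure(user_emb, creator_emb, tau)
    for i in range(creator_emb.shape[0]):
        for s in candidate_strategies:           # unilateral deviations for creator i
            deviated = creator_emb.copy()
            deviated[i] = s
            if expected_exposure(user_emb, deviated, tau)[i] > base[i] + eps:
                return False
    return True

rng = np.random.default_rng(1)
users = rng.normal(size=(500, 4))
creators = rng.normal(size=(3, 4))
cands = [rng.normal(size=4) for _ in range(50)]  # coarse grid of deviations
print(is_epsilon_ne(users, creators, cands, tau=1.0, eps=0.05))
```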

This may contradict the intuition that more exploration should lead to greater content diversity due to the higher exposure of niche content. One way to understand this result is the tension between randomization and the ability of niche creators to reach their audience: creators may be discouraged from creating niche content when the algorithm is exploring too much (τ high) and encouraged to mercilessly seek and protect their niche when the algorithm performs little exploration (τ low). When the algorithm captures user preferences well, exploration is typically thought of as having a negative impact on the user experience through an immediate reduction in the quality of service. However, the above results show secondary long-term effects.

Pre-deployment audits of strategic creator incentives

We also demonstrate how to utilize exposure games for pre-deployment audits of different rating models on real-world datasets. On data from MovieLens and LastFM, all creators cluster at the same strategy as τ grows. On the MovieLens data, we can corroborate that there is an incentive to target content toward male users, presumably because 71% of users are male. In our pre-deployment audits, we can also analyze whether a given algorithm (de)incentivizes content by a particular creator group, which can help limit future harm and discrimination.
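As a rough illustration of this kind of audit question, the sketch below compares the exposure each creator receives from different user groups at a candidate set of strategies; the group labels, the softmax exposure rule, and all sizes are assumptions made for the example.

```python
# A hedged audit sketch: does a candidate set of creator strategies skew
# exposure toward one user group? Group labels (e.g., self-reported gender
# in MovieLens) are an assumed input here.
import numpy as np

def exposure_by_group(user_emb, group_labels, creator_emb, tau=1.0):
    """Average exposure probability each creator receives from each user group."""
    z = (user_emb @ creator_emb.T) / tau
    z -= z.max(axis=1, keepdims=True)
    probs = np.exp(z)
    probs /= probs.sum(axis=1, keepdims=True)
    return {g: probs[group_labels == g].mean(axis=0) for g in np.unique(group_labels)}

rng = np.random.default_rng(2)
users = rng.normal(size=(200, 4))
labels = np.array(["M"] * 142 + ["F"] * 58)        # roughly the 71/29 split noted above
creators = rng.normal(size=(3, 4))
print(exposure_by_group(users, labels, creators))  # compare per-creator exposure across groups
```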

Between the lines

From social media and streaming to Google Search, many people interact with recommender and information retrieval systems daily. While the core algorithms were developed and analyzed years ago, the socio-economic context in which they operate has received comparatively little attention in the technical computer science literature.

Our producer model has several limitations, from assuming rationality, complete information, and total control, to taking the skill set of each producer to be the same, their utility to be linear in exposure, and ignoring algorithmic diversification of recommendations. We also treat the attention pool as fixed and finite, neglecting the problematic reality of the modern attention economy, where online platforms constantly struggle to increase their user numbers and daily usage. While the formalization and study of more realistic producer models is certainly an important direction for future work, a critical hindrance to empirical evaluation is the lack of academic access to the almost exclusively privately owned platforms.

Therefore, increased transparency will be an important step toward incorporating independent pre-deployment audits as a practical addition to the algorithm auditing toolbox. We hope our research enriches the debate about online platforms’ role in our society and economy.

