Montreal AI Ethics Institute

Democratizing AI ethics literacy


Dating Through the Filters

June 7, 2021

🔬 Research summary by Karim Nader, a graduate student at the University of Texas at Austin whose research focuses on the ethics of information & technology.

[Original paper by Karim Nader]


Overview: This essay explores ethical considerations that might arise from the use of collaborative filtering algorithms on dating apps. Collaborative filtering algorithms learn from the behavior patterns of users in general to predict preferences and build recommendations for a target user. But since users on dating apps show deep racial bias in their own preferences, collaborative filtering can exacerbate biased sexual and romantic behavior. Perhaps something as intimate as sexual and romantic preference should not be the subject of algorithmic control.


Introduction

Dating apps have allowed people from vastly different backgrounds to connect and are often credited with the rise of interracial marriage in the United States. However, people of color still experience substantial harassment from other users, which can include racial generalizations or even fetishization. This bias can extend from the users to the algorithms that filter and recommend potential romantic and sexual partners. Dating app algorithms are built to predict the intimate preferences of a target user and recommend profiles to them accordingly, but biased data leads to biased recommendations.

This research establishes that the data fed to the algorithms on dating apps reflects deep racial bias and that dating apps can perpetuate this bias in their own recommendations. Further, since recommendations are extremely effective at altering user behavior, dating apps are influencing the intimate behaviors of their users. A look into the philosophy of desire further complicates the issue: intimate biases are often seen merely as personal preferences. But since users have little control over algorithmic filtering, dating apps can come between users and their romantic and sexual autonomy.

Collaborative filtering

Collaborative filtering works by predicting the behavior of one target user by comparing it to the behavior of other users around them. For example, if a majority of users who buy chips also buy salsa, the algorithm will learn to recommend salsa to anyone who buys chips. This way, filtering algorithms can build recommendations that reflect general patterns of behavior. And it turns out that they are highly effective at doing it! However, collaborative filtering has a tendency to homogenize the behavior of users on a platform without necessarily increasing utility. Moreover, studies on YouTube’s recommender system show that, through algorithmic recommendation, reasonable searches can quickly lead a user to videos that promote conspiracy theories. Algorithmic filtering can thus normalize problematic patterns of behavior through gradual technological nudges and pressures. Is the same true of dating apps? To show that, we’d have to establish that dating app users themselves are feeding the algorithm biased data through their activity.
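As an illustration only (not any actual dating-app code), the chips-and-salsa example above can be sketched as a minimal user-based collaborative filter. The interaction matrix, item labels, and `recommend` helper are all invented for this sketch:

```python
import numpy as np

# Toy interaction matrix: rows are users, columns are items
# (1 = the user bought/liked the item, 0 = no interaction).
# Items: 0 = chips, 1 = salsa, 2 = guacamole, 3 = crackers.
interactions = np.array([
    [1, 1, 0, 0],  # user 0: chips + salsa
    [1, 1, 0, 0],  # user 1: chips + salsa
    [1, 1, 1, 0],  # user 2: chips + salsa + guacamole
    [0, 0, 1, 1],  # user 3: guacamole + crackers
    [1, 0, 0, 0],  # user 4 (target): chips only
])

def recommend(target, matrix, k=2):
    """User-based collaborative filtering: score unseen items by the
    behavior of the k users most similar to the target."""
    norms = np.linalg.norm(matrix, axis=1)
    # Cosine similarity between the target and every other user.
    sims = matrix @ matrix[target] / (norms * norms[target] + 1e-9)
    sims[target] = -1.0                 # exclude the target themselves
    neighbors = np.argsort(sims)[-k:]   # k most similar users
    scores = sims[neighbors] @ matrix[neighbors]
    scores[matrix[target] > 0] = -1.0   # only recommend unseen items
    return int(np.argmax(scores))

# The target bought chips; their nearest neighbors also bought salsa,
# so salsa (item 1) comes out on top.
print(recommend(4, interactions))  # → 1
```

The target user never expressed any interest in salsa; the recommendation comes entirely from what similar users did. That is precisely the mechanism at issue: the more uniform the surrounding behavior, the more strongly the filter pushes every new user toward it.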

Race and online dating

Christian Rudder, a co-founder of OkCupid, explains that match scores (OkCupid’s compatibility scores, which are calculated by an algorithm) are the best way to predict a user’s race. In other words, the match scores of users of different races show patterns distinct enough that we can identify the race of a profile simply by seeing which profiles the algorithm believes are a good match for it. Again, algorithms learn from user data, so what kind of data is leading to this kind of racial algorithmic bias on dating apps? Well, it turns out that dating app users show distinct patterns of preference when it comes to race. Several empirical studies confirm these trends: users on online dating platforms seem to segregate themselves based on race and so prefer people of their own race. Most users exclude people of color from consideration, except those of their own race, and generally show a preference for white men and women. People of color are more likely to include the profiles of white users for consideration, but white people are not as likely to include the profiles of people of color. Since correlations lead to recommendations, users on dating apps will be recommended to other users of their own race and will receive more recommendations for white users.

Shaping sexual and romantic preferences 

Now, we’ve established that the algorithms behind dating apps can exacerbate racial bias. However, it is not clear whether this is a problem that needs to be addressed. Surely the Spotify algorithm favors some artists over others, but when it comes to personal taste like music, bias is simply a preference. Sexual and romantic biases might similarly be simple preferences. However, sexual and romantic biases reflect larger patterns of discrimination and exclusion that are grounded in a history of racism and fetishization. And so, there might be some justification for raising a moral objection to the use of collaborative filtering on dating apps. After all, recommendations can and do change the behavior and preferences of users. Studies show that if two people are told they are a good match, they will act as if they are, regardless of whether they are truly compatible with each other. In any case, the issue might be that users have absolutely no control over the filtering that determines who they see on dating apps. Explicitly stated preferences are sometimes overridden by algorithmic predictions. Using collaborative data in the context of dating apps seems to undermine extremely personal sexual and romantic desires that should not be ā€˜predicted’ by an algorithm.

Between the lines

Most of the research on dating platforms has focused on dating websites that allow users to browse through a collection of profiles with little to no algorithmic intervention. However, dating platforms have evolved substantially and algorithmic suggestions play a powerful role in the experience of dating app users. This research brings attention to the reach of algorithmic bias on platforms that researchers often overlook. 

While people of color anecdotally report lower success rates and occasional harassment and fetishization, those concerns are not taken seriously because personal romantic preferences are seen as outside the realm of moral evaluation. Philosophers and moral experts need to pay closer attention to biases that evade ethical scrutiny in this way.

While this research is an important step towards bringing race, romance, and attraction into discussions of algorithmic bias, it is merely a conceptual, philosophical, and ethical analysis of the question; more empirical work is needed to understand the algorithms behind dating apps and the experiences of users on those platforms.

