Montreal AI Ethics Institute

Democratizing AI ethics literacy


The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms (Research Summary)

February 21, 2021

🔬 Research summary contributed by Dr. Iga Kozlowska (@kozlowska_iga), a sociologist working on Microsoft’s Ethics & Society team where she’s tasked with guiding responsible AI innovation.

✍️ This is part 5 of the ongoing Sociology of AI Ethics series; read previous entries here.

[Link to original paper + authors at the bottom]


Overview: Bucher explores the spaces where humans and algorithms meet. Using Facebook as a case study, she examines users’ thoughts and feelings about how the Facebook algorithm affects them in their daily lives. She concludes that, even though users don’t know exactly how the algorithm works, they imagine how it works. The algorithm, even if indirectly, not only produces emotions (often negative) but also alters online behaviour, with users exerting social power back onto the algorithm in a human-algorithm feedback loop.


Facebook seems to think I’m “pregnant, single, broke and should lose weight.” These are the kinds of comments that Bucher uncovers as she reaches out to 25 ordinary Facebook users who have tweeted about their dissatisfaction or confusion over Facebook’s news feed algorithm. 

In popular imagination and in public discourse, we often think of algorithms as objective and capable of accurately reflecting reality. Because we don’t associate algorithms with emotions, we tend to underestimate the affective power of algorithms on people’s social lives and experiences. While Facebook algorithms are not necessarily designed to make users feel one way or another (except when they are: see Facebook’s emotion contagion experiment), they certainly have the power to produce emotional reactions and even alter behaviour. 

How Facebook Makes People Feel

Bucher summarizes several ways in which Facebook users experience negative, confusing, or disconcerting situations when interacting with algorithms. Users readily admit they don’t understand the inner workings of the algorithm, as no one outside of Facebook does. However, not understanding how something works doesn’t preclude us from experiencing its effects. Bucher discovers the following themes:

  • Dealing with algorithmically constructed identity profiles that are unflattering or do not comport with how users see themselves
  • Creepy moments when people feel like their privacy is violated
  • Frustration and anxiety when posts don’t do well
  • “Cruel” moments when unwanted memories from the past are resurfaced in feeds

In response to some of these unpleasant experiences, savvy Facebook users try to “play Facebook’s game” (see my last post summarizing research on “gaming” SEO) by adjusting content (wording and images), timing of posts, and forms of interaction with friends’ content. Facebook’s “game” consists of explicit and implicit rules (much like Google’s SEO guidelines), and if you play the game, over time, you get better and are more likely to “win.” In fact, this is not too far from how social norms function in the real world — there are spoken and unspoken cultural norms that we are socialized into at an early age and we are rewarded for playing by the rules and penalized for breaking them.

The fact that there is a game you have to play to get rewards out of using Facebook is not inherently good or bad. However, it is something that the company needs to recognize and address. The platform is not just an open, free space for organic human interaction, as Facebook sometimes likes to argue to avoid accountability; rather, it is a highly structured and circumscribed website with features that encourage and enable some outcomes and discourage and forestall others. Engineers need to take this seriously if for no other reason than the fact that interaction with the platform and its algorithms does cause patterned changes in user behaviour that feed back into Facebook’s machine learning algorithms to unknown effect.
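To make the feedback-loop concern concrete, here is a deliberately toy simulation (the function name, scoring rule, and numbers are my own illustration, not Facebook’s actual system): exposure is allocated disproportionately to higher-scoring posts, the resulting clicks raise scores further, and so small initial differences get amplified over successive rounds.

```python
def simulate_feedback(scores: list[float], rounds: int = 10,
                      learning_rate: float = 0.5) -> list[float]:
    """Toy ranking feedback loop: exposure goes disproportionately
    (here, proportional to score squared) to already-popular items,
    and clicks earned through exposure raise the score in turn."""
    scores = list(scores)
    for _ in range(rounds):
        weights = [s * s for s in scores]          # winner-take-most exposure
        total = sum(weights)
        exposure = [w / total for w in weights]    # share of the feed
        # Clicks are assumed proportional to exposure and feed back in.
        scores = [s + learning_rate * e for s, e in zip(scores, exposure)]
    return scores
```

Starting from nearly equal scores such as 1.0 and 1.1, the relative gap widens every round: a rich-get-richer dynamic that illustrates the kind of patterned, self-reinforcing change described above.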

Let’s Talk About Feelings

What can machine learning developers and product designers do with Bucher’s findings?   

First, consider the social power your AI-based feature or product will have over the user (and indirect stakeholders). Consider the good, the bad, and the ugly. In particular, think about the emotional and psychological effects that the algorithm may produce in different contexts as humans interact with it. These can be more obvious harms like attention hijacking, gaslighting, or reputation damage, but can also include things like confusion, anxiety, and harm to self-esteem or positive self-identity. In tech, we don’t talk about feelings because we like to focus on what we can easily measure. That gives us the false comfort that we’re being objective and unbiased, as well as efficient and effective. Bringing feelings back in during the design phase of algorithmic systems is, however, critical to designing experiences that are human-centred.

Second, consider how users will imagine that your algorithm works, even if you know that it doesn’t actually work that way. To the extent you can, aim for transparency and balancing the information asymmetry, but consider the agency that people will ascribe to the algorithm. You know that the news feed algorithm doesn’t “think” that a given user is overweight, lonely, or sad. But since people tend to anthropomorphize machines and algorithms, what effect, nonetheless, might that have on someone? In other words, people know that machines don’t think, feel or judge, but they can still have emotional responses to interactions with machines that are similar to those that are generated when interacting with other humans. 

Third, when in doubt, give the user more control rather than less. How can your algorithm and the features within which it’s embedded produce a user experience that puts the human back in the driver’s seat? Maybe it’s tweaking the UI wording. Maybe it’s giving the user a simple option to turn a feature on or off. Maybe it’s using other automated machine learning techniques to improve the experience. Always optimize for direct human well-being, rather than indirect measures of human satisfaction like usage metrics that can be misleading. 
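As a sketch of the “simple option to turn a feature on or off” suggestion, the ranking function below honours a user preference to fall back to a plain chronological feed. The `Post` and `UserPrefs` types are hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    text: str
    created_at: datetime
    predicted_engagement: float  # model score in [0, 1]

@dataclass
class UserPrefs:
    algorithmic_feed: bool = True  # the user-facing on/off switch

def rank_feed(posts: list[Post], prefs: UserPrefs) -> list[Post]:
    """Rank the feed, honouring the user's choice of ordering."""
    if prefs.algorithmic_feed:
        # Model-driven ordering: highest predicted engagement first.
        return sorted(posts, key=lambda p: p.predicted_engagement,
                      reverse=True)
    # User opted out: plain reverse-chronological order.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)
```

The design point is that the opt-out path is a first-class code path, not a buried setting: the user’s choice, not the model, decides which ordering runs.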

Fourth, consider how the emotional and behavioural changes users undergo as they interact with your algorithm will impact that algorithm’s continued performance. How might the algorithm encourage feedback loops that stray over time from your intended outcomes? How will you monitor that? What benchmarks will you use to flag issues and make appropriate tweaks to the algorithm in response? What kind of feedback can you seek from users on how they feel about your product?
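One minimal way to operationalize the monitoring questions above is to track a behavioural metric against a launch-time baseline and flag drift beyond a chosen threshold. The metric, threshold, and function name here are assumptions for illustration, not a prescribed benchmark:

```python
import statistics

def engagement_drift(baseline: list[float], current: list[float],
                     threshold: float = 0.1) -> tuple[float, bool]:
    """Compare mean engagement in the current window against the
    launch baseline. Returns (absolute shift, whether it exceeds
    the threshold and should trigger a review of the algorithm)."""
    shift = abs(statistics.mean(current) - statistics.mean(baseline))
    return shift, shift > threshold
```

In practice the flagged cases would feed a human review, per the article’s point that the appropriate response is a deliberate tweak, not an automatic one.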

Designing for Feelings

The more algorithmic systems proliferate in our social world, the more social power they will exert on relationships, identity, the self, and, yes, our feelings. Designing for things that are not easily measured is challenging because it’s hard to tell when you’re successful. But not designing for affect causes real human harm, not to mention negative front-page news stories. A/B testing (responsibly!), focus groups, and interview-based probing during design phases are all good methods of discovering potential emotional impacts of your product before release. Likewise, designing feedback channels for customers as they engage with your product is a good idea.
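For the “A/B testing (responsibly!)” suggestion, a standard two-proportion z-test can compare, say, the rate of negative user feedback between two variants. This is a generic statistical sketch, not a method from Bucher’s paper:

```python
import math

def two_proportion_z(x_a: int, n_a: int, x_b: int, n_b: int) -> float:
    """z-statistic for the difference between two proportions,
    e.g. negative-feedback events x out of n sessions per variant."""
    p_a, p_b = x_a / n_a, x_b / n_b
    p_pool = (x_a + x_b) / (n_a + n_b)           # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se
```

With 50 negative reports out of 1,000 sessions for variant A versus 80 out of 1,000 for variant B, |z| ≈ 2.7, exceeding the conventional 1.96 cutoff at the 5% level, so the difference in how the variants make users feel is unlikely to be noise.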

Human-centred algorithmic design must be guided by a measure of the user’s holistic well-being. This must include psychological, emotional, and social health. With algorithmic systems proliferating deep into our social lives, Bucher encourages us to pay attention to the affective power of these systems. From there, it’s up to all of us to decide how we want algorithms to make us feel.


Original paper by Taina Bucher: https://doi.org/10.1080/1369118X.2016.1154086

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.



© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.