Montreal AI Ethics Institute

Democratizing AI ethics literacy


The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms (Research Summary)

February 21, 2021

🔬 Research summary contributed by Dr. Iga Kozlowska (@kozlowska_iga), a sociologist working on Microsoft’s Ethics & Society team where she’s tasked with guiding responsible AI innovation.

✍️ This is part 5 of the ongoing Sociology of AI Ethics series; read previous entries here.

[Link to original paper + authors at the bottom]


Overview: Bucher explores the spaces where humans and algorithms meet. Using Facebook as a case study, she examines users’ thoughts and feelings about how the Facebook algorithm affects their daily lives. She concludes that, even though users don’t know exactly how the algorithm works, they imagine how it works. The algorithm, even if indirectly, produces emotions (often negative) and alters online behaviour; users, in turn, exert social power back onto the algorithm in a human-algorithm interaction feedback loop.


Facebook seems to think I’m “pregnant, single, broke and should lose weight.” These are the kinds of comments that Bucher uncovers as she reaches out to 25 ordinary Facebook users who have tweeted about their dissatisfaction or confusion over Facebook’s news feed algorithm. 

In popular imagination and in public discourse, we often think of algorithms as objective and capable of accurately reflecting reality. Because we don’t associate algorithms with emotions, we tend to underestimate the affective power of algorithms on people’s social lives and experiences. While Facebook algorithms are not necessarily designed to make users feel one way or another (except when they are: see Facebook’s emotion contagion experiment), they certainly have the power to produce emotional reactions and even alter behaviour. 

How Facebook Makes People Feel

Bucher summarizes several ways in which Facebook users experience negative, confusing, or disconcerting situations when interacting with algorithms. Users readily admit they don’t understand the inner workings of the algorithm, as no one outside of Facebook does. However, not understanding how something works doesn’t preclude us from experiencing its effects. Bucher discovers the following themes:

  • Dealing with algorithmically-built profiling identities that are not flattering or do not comport with how users see themselves
  • Creepy moments when people feel like their privacy is violated
  • Frustration and anxiety when posts don’t do well
  • “Cruel” moments when unwanted memories from the past are resurfaced in feeds

In response to some of these unpleasant experiences, savvy Facebook users try to “play Facebook’s game” (see my last post summarizing research on “gaming” SEO) by adjusting content (wording and images), timing of posts, and forms of interaction with friends’ content. Facebook’s “game” consists of explicit and implicit rules (much like Google’s SEO guidelines), and if you play the game, over time, you get better and are more likely to “win.” In fact, this is not too far from how social norms function in the real world — there are spoken and unspoken cultural norms that we are socialized into at an early age and we are rewarded for playing by the rules and penalized for breaking them.

The fact that there is a game you have to play to get rewards out of using Facebook is not inherently good or bad. However, it is something that the company needs to recognize and address. The platform is not just an open, free space for organic human interaction, as Facebook sometimes likes to argue to avoid accountability; rather, it is a highly structured and circumscribed website with features that encourage and enable some outcomes and discourage and forestall others. Engineers need to take this seriously if for no other reason than the fact that interaction with the platform and its algorithms does cause patterned changes in user behaviour that feed back into Facebook’s machine learning algorithms to unknown effect.

Let’s Talk About Feelings

What can machine learning developers and product designers do with Bucher’s findings?   

First, consider the social power your AI-based feature or product will have over the user (and indirect stakeholders). Consider the good, the bad, and the ugly. In particular, think about the emotional and psychological effects that the algorithm may produce in different contexts as humans interact with it. These can be more obvious harms like attention hijacking, gaslighting, or reputation damage, but can also include things like confusion, anxiety, and harm to self-esteem or positive self-identity. In tech, we don’t talk about feelings because we like to focus on what we can easily measure. That gives us the false comfort that we’re being objective and unbiased, as well as efficient and effective. Bringing feelings back in during the design phase of algorithmic systems is critical, however, to designing experiences that are human-centred. 

Second, consider how users will imagine that your algorithm works, even if you know that it doesn’t actually work that way. To the extent you can, aim for transparency and for reducing the information asymmetry, but also consider the agency that people will ascribe to the algorithm. You know that the news feed algorithm doesn’t “think” that a given user is overweight, lonely, or sad. But since people tend to anthropomorphize machines and algorithms, what effect, nonetheless, might that have on someone? In other words, people know that machines don’t think, feel, or judge, but they can still have emotional responses to interactions with machines that are similar to those generated when interacting with other humans. 

Third, when in doubt, give the user more control rather than less. How can your algorithm and the features within which it’s embedded produce a user experience that puts the human back in the driver’s seat? Maybe it’s tweaking the UI wording. Maybe it’s giving the user a simple option to turn a feature on or off. Maybe it’s using other automated machine learning techniques to improve the experience. Always optimize for direct human well-being, rather than indirect measures of human satisfaction like usage metrics that can be misleading. 

Fourth, consider how the emotional and behavioural changes that users undergo in response to interacting with your algorithm will impact that algorithm’s continued performance. How might the algorithm encourage feedback loops that stray over time from your intended outcomes? How will you monitor that? What benchmarks will you use to flag issues and make appropriate tweaks to the algorithm in response? What kind of feedback can you seek from users on how they feel about your product? 
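Bucher’s paper doesn’t prescribe metrics, but one common way to operationalize the monitoring questions above is a distribution-drift check: compare how some user-facing quantity (say, the share of posts in each engagement band) is distributed today against a baseline from launch. The sketch below uses the Population Stability Index (PSI); the bucket choices, numbers, and the 0.2 alert threshold are illustrative assumptions, not anything from the paper.

```python
import math

def psi(baseline, current, eps=1e-6):
    """Population Stability Index between two bucketed distributions.

    `baseline` and `current` are counts per bucket (e.g. posts per
    engagement band). PSI near 0 means the distribution is stable; a
    common rule of thumb treats PSI > 0.2 as drift worth investigating.
    """
    total_b = sum(baseline)
    total_c = sum(current)
    score = 0.0
    for b, c in zip(baseline, current):
        pb = b / total_b + eps  # eps guards against empty buckets
        pc = c / total_c + eps
        score += (pc - pb) * math.log(pc / pb)
    return score

# Illustrative numbers: shares of posts in low/medium/high engagement bands.
baseline = [500, 300, 200]   # distribution at launch
current  = [300, 300, 400]   # after users start "playing the game"

drift = psi(baseline, current)
if drift > 0.2:  # illustrative threshold; with these numbers PSI is ~0.24
    print(f"PSI={drift:.3f}: engagement distribution has drifted; review the loop")
```

A periodic check like this won’t tell you *why* behaviour shifted, but it flags when the human-algorithm feedback loop has moved the system away from its launch-time assumptions, which is the cue to go ask users how they feel.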

Designing for Feelings

The more algorithmic systems proliferate in our social world, the more social power they will exert on relationships, identity, the self, and yes, our feelings. Designing for things that are not easily measured is challenging because it’s hard to tell when you’re successful. But not designing for affect causes real human harm, not to mention negative front-page news stories. A/B testing (responsibly!), focus groups, and interview-based probing during the design phase are all good methods for discovering potential emotional impacts of your product before release. Likewise, designing feedback channels for customers as they engage with your product is a good idea.

Human-centred algorithmic design must be guided by a measure of the user’s holistic well-being. This must include psychological, emotional, and social health. With algorithmic systems proliferating deep into our social lives, Bucher encourages us to pay attention to the affective power of these systems. From there, it’s up to all of us to decide how we want algorithms to make us feel.


Original paper by Taina Bucher: https://doi.org/10.1080/1369118X.2016.1154086

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


Founded in 2018, the Montreal AI Ethics Institute (MAIEI) is an international non-profit organization equipping citizens concerned about artificial intelligence and its impact on society to take action.

  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.