🔬 Research summary contributed by Dr. Iga Kozlowska (@kozlowska_iga), a sociologist working on Microsoft’s Ethics & Society team where she’s tasked with guiding responsible AI innovation.
✍️ This is part 5 of the ongoing Sociology of AI Ethics series; read previous entries here.
[Link to original paper + authors at the bottom]
Overview: Bucher explores the spaces where humans and algorithms meet. Using Facebook as a case study, she examines platform users’ thoughts and feelings about how the Facebook algorithm impacts their daily lives. She concludes that, despite not knowing exactly how the algorithm works, users imagine how it works. The algorithm, even if indirectly, not only produces emotions (often negative) but also alters online behaviour, and that altered behaviour in turn exerts social power back onto the algorithm in a human-algorithm interaction feedback loop.
Facebook seems to think I’m “pregnant, single, broke and should lose weight.” These are the kinds of comments that Bucher uncovers as she reaches out to 25 ordinary Facebook users who have tweeted about their dissatisfaction or confusion over Facebook’s news feed algorithm.
In popular imagination and in public discourse, we often think of algorithms as objective and capable of accurately reflecting reality. Because we don’t associate algorithms with emotions, we tend to underestimate the affective power of algorithms on people’s social lives and experiences. While Facebook algorithms are not necessarily designed to make users feel one way or another (except when they are: see Facebook’s emotion contagion experiment), they certainly have the power to produce emotional reactions and even alter behaviour.
How Facebook Makes People Feel
Bucher summarizes several ways in which Facebook users experience negative, confusing, or disconcerting situations when interacting with algorithms. Users readily admit they don’t understand the inner workings of the algorithm, as no one outside of Facebook does. However, not understanding how something works doesn’t preclude us from experiencing its effects. Bucher discovers the following themes:
- Dealing with algorithmically built profiling identities that are unflattering or do not comport with how users see themselves
- Creepy moments when people feel like their privacy is violated
- Frustration and anxiety when posts don’t do well
- “Cruel” moments when unwanted memories from the past are resurfaced in feeds
In response to some of these unpleasant experiences, savvy Facebook users try to “play Facebook’s game” (see my last post summarizing research on “gaming” SEO) by adjusting content (wording and images), timing of posts, and forms of interaction with friends’ content. Facebook’s “game” consists of explicit and implicit rules (much like Google’s SEO guidelines), and if you play the game, over time, you get better and are more likely to “win.” In fact, this is not too far from how social norms function in the real world — there are spoken and unspoken cultural norms that we are socialized into at an early age and we are rewarded for playing by the rules and penalized for breaking them.
The fact that there is a game you have to play to get rewards out of using Facebook is not inherently good or bad. However, it is something that the company needs to recognize and address. The platform is not just an open, free space for organic human interaction, as Facebook sometimes likes to argue to avoid accountability; rather, it is a highly structured and circumscribed website with features that encourage and enable some outcomes and discourage and forestall others. Engineers need to take this seriously if for no other reason than that interaction with the platform and its algorithms does cause patterned changes in user behaviour that feed back into Facebook’s machine learning algorithms to unknown effect.
Let’s Talk About Feelings
What can machine learning developers and product designers do with Bucher’s findings?
First, consider the social power your AI-based feature or product will have on the user (and on indirect stakeholders). Consider the good, the bad, and the ugly. In particular, think about the emotional and psychological effects that the algorithm may produce in different contexts as humans interact with it. These can be more obvious harms like attention hijacking, gaslighting, or reputation damage, but they can also include things like confusion, anxiety, and harm to self-esteem or positive self-identity. In tech, we don’t talk about feelings because we like to focus on what we can easily measure. That gives us the false comfort that we’re being objective and unbiased, as well as efficient and effective. Bringing feelings back in during the design phase of algorithmic systems is critical, however, to designing experiences that are human-centred.
Second, consider how users will imagine that your algorithm works, even if you know that it doesn’t actually work that way. To the extent you can, aim for transparency and for reducing the information asymmetry, but also consider the agency that people will ascribe to the algorithm. You know that the news feed algorithm doesn’t “think” that a given user is overweight, lonely, or sad. But since people tend to anthropomorphize machines and algorithms, what effect might that perception nonetheless have on someone? In other words, people know that machines don’t think, feel, or judge, but they can still have emotional responses to interactions with machines that are similar to those generated when interacting with other humans.
Third, when in doubt, give the user more control rather than less. How can your algorithm, and the features within which it’s embedded, produce a user experience that puts the human back in the driver’s seat? Maybe it’s tweaking the UI wording. Maybe it’s giving the user a simple option to turn a feature on or off. Maybe it’s using other automated machine learning techniques to improve the experience. Always optimize for direct measures of human well-being rather than for indirect measures of human satisfaction, such as usage metrics, which can be misleading.
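To make the on/off idea concrete, here is a minimal sketch of what a user-controlled opt-out of algorithmic ranking could look like. The class and function names are hypothetical, and the fallback to a reverse-chronological feed is an assumption for illustration, not how Facebook’s news feed actually works.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List


@dataclass
class Post:
    author: str
    text: str
    created_at: datetime


@dataclass
class UserPreferences:
    # Hypothetical per-user setting: False means "skip algorithmic ranking entirely".
    algorithmic_feed_enabled: bool = True


def rank_algorithmically(posts: List[Post]) -> List[Post]:
    """Stand-in for a learned ranking model; the real model is not specified here."""
    return posts


def build_feed(posts: List[Post], prefs: UserPreferences) -> List[Post]:
    """Put the human back in the driver's seat: honour an explicit opt-out of ranking."""
    if not prefs.algorithmic_feed_enabled:
        # Simple, predictable fallback: reverse-chronological order.
        return sorted(posts, key=lambda p: p.created_at, reverse=True)
    return rank_algorithmically(posts)
```

The design point is that the fallback behaviour is simple and predictable, so the user can understand exactly what turning the feature off does.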
Fourth, consider how the emotional or behavioural changes that humans undergo in response to interacting with your algorithm will affect the algorithm’s continued performance. How might the algorithm encourage feedback loops that stray over time from your intended outcomes? How will you monitor that? What benchmarks will you use to flag issues and make appropriate tweaks to the algorithm in response? What kind of feedback can you seek from users on how they feel about your product?
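One hedged sketch of what such monitoring could look like is below. The metric (a self-reported sentiment score) and the 10% drift threshold are illustrative assumptions, not benchmarks proposed by Bucher or by any particular platform.

```python
from statistics import mean
from typing import Sequence


def drift_exceeds_benchmark(
    baseline: Sequence[float],
    current: Sequence[float],
    max_relative_drift: float = 0.10,  # illustrative threshold: flag a >10% change
) -> bool:
    """Flag when a monitored metric (e.g. self-reported sentiment, opt-out rates)
    drifts further from its baseline than the agreed benchmark allows."""
    baseline_mean = mean(baseline)
    current_mean = mean(current)
    if baseline_mean == 0:
        return current_mean != 0
    relative_drift = abs(current_mean - baseline_mean) / abs(baseline_mean)
    return relative_drift > max_relative_drift


# Example: weekly averages of a user-reported "how did this feed make you feel?" score (1-5)
baseline_scores = [3.8, 3.9, 3.7, 3.8]
current_scores = [3.2, 3.1, 3.3, 3.0]
if drift_exceeds_benchmark(baseline_scores, current_scores):
    print("Metric drifted past benchmark - review the algorithm's feedback loop.")
```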
Designing for Feelings
The more algorithmic systems proliferate in our social world, the more social power they will exert on relationships, identity, the self, and, yes, our feelings. Designing for things that are not easily measured is challenging because it’s hard to tell when you’re successful. But not designing for affect causes real human harm, not to mention negative front-page news stories. A/B testing (responsibly!), focus groups, and interview-based probing during design phases are all good methods of discovering potential emotional impacts of your product before release. Likewise, designing feedback channels for customers to use as they engage with your product is a good idea.
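As one possible shape for such a feedback channel, here is a small sketch (all names hypothetical) of an in-product prompt that records how a user felt about an algorithmically driven experience, together with the context needed to act on it later.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List


@dataclass
class FeelingReport:
    user_id: str
    surface: str        # e.g. "news_feed", "memories"
    feeling: str        # e.g. "fine", "confused", "upset"
    comment: str
    reported_at: datetime


class FeedbackChannel:
    """Collects structured 'how did this make you feel?' reports for later review."""

    def __init__(self) -> None:
        self._reports: List[FeelingReport] = []

    def submit(self, user_id: str, surface: str, feeling: str, comment: str = "") -> None:
        self._reports.append(
            FeelingReport(user_id, surface, feeling, comment, datetime.now(timezone.utc))
        )

    def reports_for(self, surface: str) -> List[FeelingReport]:
        return [r for r in self._reports if r.surface == surface]


# Usage: a user flags that resurfaced memories felt "cruel"
channel = FeedbackChannel()
channel.submit("user_123", "memories", "upset", "Please stop showing me this photo.")
print(len(channel.reports_for("memories")))
```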
Human-centred algorithmic design must be guided by a measure of the user’s holistic well-being. This must include psychological, emotional, and social health. With algorithmic systems proliferating deep into our social lives, Bucher encourages us to pay attention to the affective power of these systems. From there, it’s up to all of us to decide how we want algorithms to make us feel.
Original paper by Taina Bucher: https://doi.org/10.1080/1369118X.2016.1154086