📚 Book summary by Dr. Iga Kozlowska (@kozlowska_iga), a sociologist working on Microsoft’s Ethics & Society team where she guides responsible AI innovation.
✍️ This is part 8 of the ongoing Sociology of AI Ethics series; read previous entries here.
[Original book by Jenny L. Davis]
Overview: Davis’s book is an accessible and succinct treatment of technology as a journey through social power and politics, and it serves as an invaluable guidebook for tech analysts, designers, and builders willing to venture down that path. Davis traces the idea of “affordance” through its intellectual history to show how, despite its multifarious meanings and (mis)uses, it remains a useful concept. She introduces the “mechanisms and conditions framework” of affordances, which specifies how technologies reflect and shape human social behaviour, giving us a transferable tool for critical analysis and intentional design.
“…if left unchecked, technologies will arc toward privilege and normality.”
Davis revives “affordance” as a useful conceptual tool to help us build better technologies and systematically evaluate existing ones. She updates it with her mechanisms and conditions framework to help us get the most out of the concept, which is particularly significant in a world in which digital and AI technologies have grown more nuanced and complex than anything we’ve seen to date.
The mechanisms and conditions framework of affordances
Davis defines affordances as mediators between the features of technologies and the social outcomes of those technologies—how the technologies “push, pull, enable, and constrain” for socially situated subjects. Although the concept of affordance is widely used across disciplines, it remains beleaguered by two problems: binary application and presumed universal subjects. Davis distils these problems in the early part of the book and spends the rest of the work addressing them.
The central contribution of How Artifacts Afford is Davis’s transformation of a singular concept (affordance) into an operational model, called the mechanisms and conditions framework. This model shifts the orienting question from what technologies afford to how technologies afford, for whom, and under what circumstances, providing an accessible vocabulary overlaid with a critical lens.
First, we must look at the mechanisms through which affordances operate. This part of the framework addresses the problem of binary application. Rather than presenting technologies as either affording some action/outcome or not, the mechanisms of affordance lay out a porous continuum by which technologies request, demand, encourage, discourage, refuse and allow.
Technologies request or demand that people do things. The former is a polite “please and thank you”; the latter is a “you’ve got no choice!” When a user seeks to act, technologies respond by encouraging, discouraging, or refusing that behaviour. Finally, allow describes a neutral situation in which the user is not pressured to do one thing or another. These mechanisms operate on the individual, interpersonal, and cultural-structural levels. This means that the technology exerts influence on the end user interacting with it. But it also shapes relationships between people and, on an even more macro scale, has the ability to (re)make cultural norms, practices, and institutions.
The second part of the model, the conditions of affordance, deals with the problem of presumed universal subjects, asking: for whom and under what circumstances will social outcomes take shape? This part of the framework focuses on the user and their social context. Davis groups the conditions into three broad, interrelated dimensions: perception, dexterity, and cultural and institutional legitimacy. How users interact with a technology depends on how they perceive it, which varies with their skill level and technological literacy. Dexterity refers to the subject’s ability to effectively manipulate the functions of an object or technology and is a spectrum rather than a binary of ability and disability. Finally, cultural and institutional legitimacy refers to the structural position of a user within institutions of power, whether that be family, the workplace, education, law, or culture. In other words, a subject’s authority or position of power will vary depending on the cultural and institutional context within which they are using a given technology, and that position will shape how they interact with it. Based on these conditions, a technology may request something of one person while demanding it of another; it may encourage access for one group while refusing it to another.
As a holistic model of both analysis and design, the mechanisms and conditions framework grounds complex human-technology relations, accounts for social and structural context, and systematically addresses the politics and power of sociotechnical systems.
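To make this concrete for practitioners, here is a minimal sketch, entirely our own illustration rather than anything Davis formalizes, of how a design team might encode a mechanisms-and-conditions audit. All names (`Mechanism`, `Subject`, `audit`, the "face_unlock" feature) are hypothetical.

```python
# Illustrative only: one way a team might record a mechanisms-and-conditions
# audit. All names here are hypothetical, not an API from Davis's book.
from dataclasses import dataclass
from enum import Enum, auto

class Mechanism(Enum):
    REQUEST = auto()     # polite bid: "please do this"
    DEMAND = auto()      # "you've got no choice!"
    ENCOURAGE = auto()   # facilitates a line of action
    DISCOURAGE = auto()  # makes a line of action costly
    REFUSE = auto()      # blocks a line of action outright
    ALLOW = auto()       # neutral: neither pushes nor pulls

@dataclass
class Subject:
    """Conditions of affordance for one socially situated user."""
    perception: str  # does the user perceive the feature? e.g. "aware"/"unaware"
    dexterity: str   # can the user operate it effectively? e.g. "high"/"low"
    legitimacy: str  # structural position, e.g. "manager"/"subordinate"

def audit(feature: str, subject: Subject) -> Mechanism:
    """Toy rules showing how one feature can afford differently per subject."""
    if feature == "face_unlock":  # hypothetical feature
        if subject.perception == "unaware" or subject.dexterity == "low":
            return Mechanism.REFUSE   # effectively unusable for this subject
        if subject.legitimacy == "subordinate":
            return Mechanism.DEMAND   # workplace policy leaves no real choice
        return Mechanism.REQUEST      # others may opt in politely
    return Mechanism.ALLOW
```

Filling in such rules, feature by feature and subject group by subject group, surfaces exactly the question the framework asks: how does this technology afford, for whom, and under what circumstances?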
What does this mean for AI developers?
Davis’s framework is applicable to all technologies, digital and analog. However, it is especially important for AI-based technologies because automated prediction, classification, and pattern recognition through dynamic learning are often multivalent and indeterminate. In other words, a single AI system can be used and interpreted in many ways. Algorithmic inputs and outputs change dynamically, impacting user behaviour unpredictably. Moreover, interpretability is often a struggle for average users of AI systems and even for machine learning experts themselves. Because of these features of AI, we need to pay special attention to how systems will interact with people and objects in the social world in which they are deployed. Davis’s framework gives us a clear and methodical way to address these complexities when evaluating existing AI systems, when readjusting those systems, and when building new ones.
Although the framework serves as a method on its own, Davis highlights how the mechanisms and conditions can also operate alongside existing methods, augmenting familiar practices. She gives a few examples of these complementary methods that practitioners can use to better understand how their product is going to work, for whom, and under what conditions. For example, she mentions interface analysis and critical technocultural discourse analysis, through which analysts can examine affordances from the perspectives of the designer and of users with different positionalities and backgrounds, centralizing in particular marginalized and underrepresented groups.
The walkthrough method might be particularly useful for AI-based apps. The reviewer walks through the various stages of app use: “registration and entry; everyday use; and suspension, closure, and leaving.” The method pays particular attention to the “cultural and political underpinnings” of the UI and to the app’s “environment of expected use,” that is, the cultural beliefs, norms, and practices that make up the user’s environment and will (or won’t) lead to the designer’s expected use(s).
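As one rough way to operationalize this, a review team could turn the quoted stages into a checklist template. This is a sketch under our own assumptions: the three stage names follow the quote above, while the review prompts are illustrative inventions.

```python
# Hypothetical walkthrough checklist for an AI-based app. The three stage
# names follow the quoted stages; the review prompts are our own additions.
WALKTHROUGH_STAGES = {
    "registration and entry": [
        "What data does sign-up demand, and what does it merely request?",
        "Which defaults encode the 'environment of expected use'?",
    ],
    "everyday use": [
        "Whose behaviour do model outputs encourage or discourage?",
        "Which users are effectively refused by errors or edge cases?",
    ],
    "suspension, closure, and leaving": [
        "Does the app discourage account deletion or data export?",
        "What happens to a departing user's data and model contributions?",
    ],
}

def print_review_sheet(stages: dict) -> None:
    """Render the checklist so a reviewer can walk each stage in order."""
    for stage, prompts in stages.items():
        print(f"== {stage} ==")
        for prompt in prompts:
            print(f" - {prompt}")

print_review_sheet(WALKTHROUGH_STAGES)
```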
Values reflection is another suitable technique for thinking about AI systems via affordances. The focus here is on excavating the values baked into the system’s design, whether or not the designers were initially conscious of them. It also means reflecting on the values that individual users and society as a whole espouse, to see whether there is a mismatch between those values and the designers’ and their company’s values. This matters because our values shape whether we experience a technology as beneficial or harmful. If I have a different stack rank of values than you (because of my background or culture), then what I experience as beneficial you might experience as harmful, or vice versa.
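To illustrate the “stack rank” point with a toy calculation of our own (not from the book), one can count how often two ranked value lists disagree on the relative order of a pair of values; the value names below are hypothetical.

```python
# Toy illustration, not from the book: quantify how far apart two people's
# value "stack ranks" are by counting pairwise order disagreements
# (a normalized Kendall-tau-style distance: 0 = identical, 1 = fully reversed).
from itertools import combinations

def rank_disagreement(ranking_a: list, ranking_b: list) -> float:
    """Both rankings must contain exactly the same values."""
    pos_a = {value: i for i, value in enumerate(ranking_a)}
    pos_b = {value: i for i, value in enumerate(ranking_b)}
    pairs = list(combinations(ranking_a, 2))
    flipped = sum(
        (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) < 0 for x, y in pairs
    )
    return flipped / len(pairs)

designer = ["efficiency", "privacy", "safety", "autonomy"]  # hypothetical
user = ["privacy", "autonomy", "safety", "efficiency"]      # hypothetical
print(rank_disagreement(designer, user))  # ~0.67: a substantial value mismatch
```

A score near 1 would flag a design whose built-in priorities its users largely reject, which is exactly the mismatch values reflection aims to surface before release.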
Regardless of which method(s) you consider when designing and evaluating your AI system, the important part is that you follow the general themes that Davis outlines as common to all of these complementary methods and to her mechanisms and conditions framework:
- centralize political dynamics
- give voice to marginal populations and groups
- maintain a reflexive orientation
- assume multiplicity of meaning, experience, and outcome
- treat materiality as consequential but not determinative
Why bother?
Most developers are in a hurry to get their feature or product out the door, and these exercises may seem like they’ll slow you down. However, the mechanisms and conditions framework can streamline planning, evaluation, and cycles of design, thus saving time overall. And, more importantly, developers can make better, more socially just products. Here’s why that matters:
- AI is powerful and will continue to harm people if it is not built responsibly
- By centralizing historically marginalized communities in the design process, you will build a product that earns broader adoption
- By designing with values in mind, you will be able to prevent harm and increase benefit, making your product that much more attractive, driving higher usage
- By designing for inclusion and social justice, you will find new AI applications, new user needs, and new non-AI opportunities, thus fueling broader innovation
- By consistently releasing responsible products and services, your brand will stand out in the market as a reliable and attractive choice
Bad design used to make technology unintuitive or irritating. Now, bad AI design can mean unjust criminal sentencing, inaccurate medical diagnoses, or unfair hiring and firing. We all have a moral imperative to do better. Jenny Davis’s book can help guide us on this important journey.