🔬 Research summary by Dr. Iga Kozlowska (@kozlowska_iga), a sociologist working on Microsoft’s Ethics & Society team where she guides responsible AI innovation.
✍️ This is part 10 of the ongoing Sociology of AI Ethics series; read previous entries here.
[Original paper by John Tomlinson]
Overview: The sociology of speed considers how people experience temporality in different contexts and how humans make meaning out of the social construct of time. John Tomlinson argues that we’ve moved from a culture of “mechanical speed” that dominated the 19th century as the Western world industrialized to “telemediated speed” at the turn of this century. This “immediacy” is an accelerated experience of time where information, goods, and people can be accessed immediately and effortlessly anywhere, anytime. While this is not categorically undesirable, Tomlinson considers how this imperative limits our imaginaries of the good life.
Introduction
We live in a one-click culture. Our digitally mediated world has, in the last two decades, all but obliterated the time between “now” and “then” and the space between “here” and “there.” We are accustomed to interacting with things and people directly and immediately. The “middle term” is made “redundant,” as Tomlinson puts it. The gap between desire and fulfillment has closed. With one click. Is this our version of the good life, or can we imagine something better?
Our obsession with speed is driven by what Tomlinson calls “fast capitalism,” a form of late capitalism in which information and communication technologies, aka the Internet, accelerate the imperative to consume and therefore to produce, creating a vicious cycle. It is aided and abetted by a culture that still equates speed with positive values like efficiency and productivity, while associating slowness with laziness, idleness, waste, and even dimwittedness or backwardness. The cult of efficiency, underpinned by Frederick Taylor’s scientific management of the early 20th century, still reigns supreme, particularly in the tech industry that is producing Tomlinson’s “telemediated” world. In fact, efficiency and productivity reign so supreme that they sometimes obliterate other human values like dignity, pleasure, freedom, leisure, and yes, idleness (see Bertrand Russell’s In Praise of Idleness).
While Tomlinson doesn’t address AI specifically, extending his concept of telemediated immediacy, I will argue that, in the context of AI, we need to take a step back and consider which social processes can or should be sped up through algorithmic intervention and which should not be. As we’ll see, sometimes ethical AI means slow AI.
Human dignity and work
Obviously, not all digitally mediated experiences should be decelerated. It has been a long-standing design imperative, from the telegraph to Zoom, to make the user experience smooth and seamless. We want fast connectivity. We want our YouTube videos to stream without buffering and our Google documents to load rapidly. There is no reason why checking out an e-book from my local library should take five clicks. Amazon can do it in one, and it’s the immediacy of the one click that we’ve become accustomed to and now expect, nay demand!
However, for many human experiences, such as work, where consequential decisions are made about life opportunities, we need to think twice about whether to design for speed at all costs. In Alec MacGillis’s recent book about Amazon, Fulfillment, we learn how automated surveillance systems “measure productivity” by calculating each employee’s “time off task.” These productivity scores are then used to make algorithmic suggestions about whom to retain and whom to fire. A quote from one of the company’s lawyers illustrates this:
“Amazon’s system tracks the rates of each individual associate’s productivity and automatically generates any warnings or terminations regarding quality or productivity without input from supervisors” (emphasis mine).
Hundreds of employees are fired this way. Others struggle to find enough time to use the restroom for fear of the algorithm catching them “off task.” Such scoring has the potential to remove human bias from firing decisions (though more research is needed to determine whether that is actually true), and it no doubt adds speed to the decision-making process, generating “time savings” for supervisors who no longer have to review each case manually. But what are the consequences of this type of treatment for the people involved and their communities? It’s unlikely that someone who is not given enough time to use the restroom can do their best work, to say the least. Social bonds and a sense of belonging and community at work are vital features of our social lives, and both could suffer from the knowledge that, as a worker, I can be fired at any minute by an algorithm without even the decency of human input.
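To make concrete what “automatically generates any warnings or terminations” can mean in practice, here is a minimal, hypothetical sketch of such a pipeline. Everything in it, the thresholds, the warning logic, the names, is invented for illustration; the actual system is proprietary.

```python
from dataclasses import dataclass

# Hypothetical thresholds -- invented for illustration, not the actual rules.
WARNING_THRESHOLD_MIN = 30   # daily minutes "off task" that trigger a warning
TERMINATION_WARNINGS = 3     # accumulated warnings that trigger termination

@dataclass
class Associate:
    name: str
    warnings: int = 0

def evaluate_day(associate: Associate, off_task_minutes: float) -> str:
    """Decide an action from tracker data alone; note that no human
    judgment appears anywhere in this function."""
    if off_task_minutes > WARNING_THRESHOLD_MIN:
        associate.warnings += 1
        if associate.warnings >= TERMINATION_WARNINGS:
            return "terminate"  # issued "without input from supervisors"
        return "warn"
    return "no action"

worker = Associate("A. Smith")
for minutes in [12, 45, 38, 50]:  # four days of tracker readings
    print(evaluate_day(worker, minutes))
# -> no action, warn, warn, terminate
```

The point of the sketch is how little code it takes to remove human discretion from a consequential decision; the sociological question is what that removal does to the people on the receiving end.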
For information workers too, the immediacy demanded by digital technologies and the removal of in-person interaction due to the COVID-19 pandemic have led to “digital exhaustion.” A recent “hybrid workplace” study by Microsoft found that employees feel the burden of accelerated, always-on digital exchange. While immediate contact with teammates through email, chat, and video calls, sometimes all at once, seems efficient, effortless, and time-saving (a walk down the hall no longer required!), there are undesirable sociopsychological consequences to this kind of accelerated communication: stress, anxiety, feeling harried, inability to focus, a sense of lost control, and exhaustion from being always on and available. In the workplace, time is money, but sometimes it pays to slow down.
Designing slow AI
We’ve already seen that consequential AI-based decision-making in the workplace, such as hiring and firing, is just one social context where the consequences can be harmful enough to warrant a second look at the cost of speed to human dignity. Other scenarios include diagnosis and treatment in healthcare, bail and sentencing in the criminal justice system, policing and arrests, grading student exams and assignments, and the list goes on.
In addition to the more common concerns around fairness, accountability, and transparency, designers and developers should consider how accelerating a decision to digital speed affects all the stakeholders in that process. Designing for slowness may not be popular, but it is not a new idea (see Hallnäs & Redström 2001), and it is especially pertinent in the age of AI. The questions each AI designer should then ask are: how does automating this task speed up the social rhythm of the activity, and what are the potential benefits and harms of that acceleration for all stakeholders involved?
For example, a teacher may benefit from automated exam graders by saving the time it would have taken him to grade manually, and maybe the students benefit too, because the teacher can now invest that time in quality interaction with them. But is anything lost in that time gained? How might this rapidity eliminate the opportunity for the teacher to get to know his students’ writing styles and learn more about them through their writing? How could that affect the teacher-student relationship? Maybe the student is grateful, because the teacher has been biased against her for one reason or another and always gave her a lower grade than she deserved. Or maybe the student wonders why she should bother trying hard when the only “person” reading her paper is a machine that, by definition, cannot care about her work, her learning, or her development as a human being.
Through user research in the early design phase of the development lifecycle, these kinds of questions should come to the fore. Potential harms should be identified and mitigations considered. For example, automated decision-making systems may require a “human-in-the-loop,” so that the AI system doesn’t trigger an action immediately but instead gives a human time to interpret the results, check with other experts, make sense of them, and then confirm the next step or override the system. Requiring human intervention slows down “the experience,” but it can mitigate the harms that would result from the system making the wrong decision.
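As a minimal sketch of the pattern (the queue, names, score, and threshold below are all illustrative, not any particular product’s API), the AI files a recommendation rather than acting on it, and a person makes the final call:

```python
import queue

# A shared queue of model recommendations awaiting human review.
review_queue: queue.Queue = queue.Queue()

def model_score(case: dict) -> float:
    """Stand-in for a trained model; returns a hard-coded score for illustration."""
    return 0.87

def ai_suggest(case: dict) -> None:
    """The system files a recommendation instead of acting on it directly."""
    score = model_score(case)
    recommendation = "flag" if score > 0.8 else "pass"
    review_queue.put({"case": case, "score": score, "recommendation": recommendation})

def human_decide(approve: bool) -> str:
    """A person interprets the score, consults colleagues, then confirms
    or overrides -- the deliberately slow step the pattern is named for."""
    item = review_queue.get()
    return item["recommendation"] if approve else "overridden"

ai_suggest({"id": 42})
print(human_decide(approve=False))  # -> overridden
```

The design choice is that the model’s output is an input to a human decision, not the decision itself.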
Another example might be slowing down the user experience to elicit healthier, human-centred behaviour online. In any personal data collection scenario, for example, the user should be made aware of what is being collected, why, and what control they will have over their data. In situations with particularly important consequences for the user’s privacy, we may want to slow the experience down by putting in a “roadblock” or a “speed bump,” like a meaningful consent experience. This may require the user to read more information and click a few more buttons, but it also allows them to make a more informed decision. Similarly, in the context of social media, psychologists have documented that the online disinhibition effect sometimes makes us say or do things we wouldn’t otherwise do in person. Designers could therefore consider helpful tips in the UI, or pop-ups that help us stop and think before making a post. Meaningful human control over an AI system often requires slowing it down so that the user can pause, think, and then act. Rather than feeling out of control, the user is put back in the driver’s seat, instead of the algorithm.
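A “speed bump” can be as simple as an extra confirmation step gated on a heuristic. In this sketch, the trigger-word list, prompt text, and function names are all invented for illustration; a real system would use far more careful signals:

```python
# Invented trigger-word heuristic; purely for illustration.
HEATED_WORDS = {"stupid", "idiot", "hate", "worst"}

def needs_speed_bump(post: str) -> bool:
    """Flag posts that might benefit from a second look before sending."""
    return any(word in post.lower() for word in HEATED_WORDS)

def submit_post(post: str, confirmed: bool = False) -> str:
    """First call returns a pause prompt; the user must explicitly confirm."""
    if needs_speed_bump(post) and not confirmed:
        return "Pause: this may read more harshly than you intend. Post anyway?"
    return "posted"

print(submit_post("This is the worst take I have ever seen"))
# -> the pause prompt
print(submit_post("This is the worst take I have ever seen", confirmed=True))
# -> posted
```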
Between the lines
Published in 2007, Tomlinson’s book already feels a bit outdated and almost naively innocent, given how much our world has “sped up” even in the last decade and a half. That, however, is a testament to the strength of his thesis: applied to the latest digital technologies, like AI-based systems, it not only holds true but illustrates just how effortlessly fast capitalism has obliterated the gaps in time and space. Social processes that were once physical and manual are quickly becoming digitized and automated. As Tomlinson argues, this is not necessarily bad, but neither should speed be taken at face value as a social good. Automation does not always equal efficiency, and efficiency is not always the value we should be solving for. There are many roads to progress, and not all of them lead through efficiency. In other words, we need a more nuanced approach to AI-based automation, one that examines the social context of each application and the range of values that people want expressed and enacted through AI. This is where sociology can help.