🔬 Research summary by Connor Wright, our Partnerships Manager.
[Original paper by Maya Indira Ganesh and Emanuel Moss]
Overview: When it comes to issues in AI, we have two categories of response: resistance and refusal. Resistance is associated with Big Tech’s own efforts, but these prove futile without the refusal practices produced by external actors. Above all, a holistic approach is required for any response to succeed.
Introduction
The authors explore two categories of responses to AI harms: resistance and refusal. Resistance is associated with Big Tech’s own efforts to combat AI harms, while refusal covers efforts outside the world of Big Tech that acknowledge the limitations of those internal efforts. Here, AI doesn’t just cause harm in targeted ways (such as intentional discrimination) but also in ostensibly neutral environments, by perpetuating existing inequalities. This forms the basis for exploring a holistic approach to AI issues. Along the way, the role of knowledge as power will be explored, including the power to decide what counts as ethical. Opting out of the Big Tech environment will be taken as a key refusal practice before I conclude with the benefits of a holistic and systemic approach to AI issues.
Key Insights
A holistic approach to solving AI issues
Complications associated with fairness and bias in AI are not confined to the AI system itself. Despite this, Big Tech has usually tried to address these problems through tweaks to system design. Yet, in many cases, an AI system exacerbates an existing problem rather than being its source.
The authors mention the case of Kronos, an automated shift scheduler adopted by Starbucks. Its schedules left a single mother with an 8 am shift, meaning she had to get up at 5 am for the 3-hour journey to childcare and then to work. Here, the AI is not the sole culprit: the lack of affordable housing close to childcare centres and workplaces is also to blame.
Understanding how such automated systems work is paramount to mitigating these harms. Consequently, we return to the old maxim that knowledge is power.
Power and knowledge
To resist algorithmic harms, we have to document them. The AIAAIC repository (an open Google spreadsheet) details current AI issues, proving helpful for learning from past mistakes. Importantly, all AI issues are recorded, rather than a refined and select few. Were only a select few documented, different types of knowledge would begin to carry different values, becoming situated within the power structure of society.
To illustrate, those most involved in the AI sphere (such as Silicon Valley) would be able to shape what is public knowledge and what is not. The power to decide which information is widely accessible would render ‘public’ information little more than propaganda. Even so, Silicon Valley has enjoyed a substantial say over what counts as ‘ethical’ within the AI field.
Deciding what is ethical
The authors detail how initial investments from Silicon Valley converted notions such as fairness and ethics from attributes of the good life into central pieces of AI development. Consequently, Silicon Valley became the final arbiter of ethical dilemmas. In addition, ethics shifted from a domain involving governments and activist groups into a league dominated by a single player. In this way, the world of AI is not treated as a relational endeavour: every player is in it for themselves.
As a result, when algorithms are at fault, the industry has been shaped such that we ask how the algorithm was unethical or unfair instead of questioning whether it should have been deployed in the first place. Activist technologists therefore argue that change must come through societal or political action rather than solely through modifying systems. Without this, we fall into the broken part fallacy: we treat AI problems as individual faults, which does nothing to solve a problem that is, in fact, systemic.
In this way, “even when technical fixes are designed to mitigate harms, they fall short because the socio-technical aspects of how violence happens are not fully addressed by re-design alone.” (p. 98). Hence, refusal efforts come into play to help showcase the socio-political implications of the technology.
Varieties of refusals
For acts of refusal to be possible, it must be feasible to refuse to participate in the Big Tech environment and still hold a place in society. Hence, actions like “keeping personal and private aspects of life offline, such as…the adoption of anonymous social media accounts to speak to a smaller circle of confidants (e.g. “Small Twitter”) are small acts of refusal in this vein.” (p. 99). Through these efforts, we refuse notions of scale and connection in order to stay clear of the problems that the world of Big Tech produces. Without such possibilities, society outside the Big Tech environment remains powerless to refuse.
Between the lines
I find the idea of a holistic approach to AI systems very appealing. In some cases, the technology is clearly at fault, as with Kronos. However, the environment that allows Kronos to exacerbate an already existing social problem also requires examination. I like to think of the problem as trying to grow a plant in a desert: if we focus only on why the plant is not growing, we will just keep watering it more and more. The crux, however, is that we need to examine the environment itself to diagnose the problem fully. Perspective is crucial, and how problems arise is as worthy of consideration as the problems themselves.