Montreal AI Ethics Institute

Democratizing AI ethics literacy

Resistance and refusal to algorithmic harms: Varieties of ‘knowledge projects’

June 2, 2022

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Maya Indira Ganesh and Emanuel Moss]


Overview: When it comes to issues in AI, there are two categories of response: resistance and refusal. Resistance describes Big Tech's internal efforts, which prove futile without the refusal practices produced by external actors. Above all, a holistic approach is required for any response to succeed.


Introduction

The authors explore two categories of responses to AI harms: resistance and refusal. Resistance is associated with efforts made within Big Tech to combat AI harms, while refusal refers to efforts outside the world of Big Tech that acknowledge the limitations of those internal efforts. AI does not cause harm only in targeted ways (such as intentional discrimination); it also perpetuates existing inequalities in ostensibly neutral environments. This forms the basis for exploring a holistic approach to AI issues. Along the way, the role of knowledge as power will be explored, which includes the power to decide what counts as ethical. Opting out of the Big Tech environment will be taken as a key refusal practice before I conclude on the benefits of a holistic and systematic approach to AI issues.

Key Insights

A holistic approach to solving AI issues

Complications associated with fairness and bias within AI are not just related to the AI system itself. Despite this, Big Tech has usually tried to address these problems through tweaks to system design. Yet, in some cases, an AI system exacerbates an existing problem rather than being its source.

The authors mention the case of Kronos, an automated shift scheduler adopted by Starbucks. It scheduled a single mother for an 8 am shift, meaning she had to get up at 5 am for the three-hour journey to childcare and then to work. In this case, the AI is not the sole problem; the lack of affordable housing close to childcare centres is also blameworthy.

Understanding how such automated systems work is paramount to mitigating these harms. Consequently, we revert back to the old maxim that knowledge is power.

Power and knowledge

To resist algorithmic harms, we have to document them. The AIAAIC (an open Google spreadsheet) details current AI issues, proving helpful for learning from past mistakes. It is important to note that all AI issues are recorded, rather than a refined and select few. Were only a select few included, different types of knowledge would begin to carry different values, becoming situated within the power structure of society.

To illustrate, those most involved in the AI sphere (such as Silicon Valley) would be able to shape what is public knowledge and what is not. The power to influence which information is widely accessible would render 'public' information little more than propaganda. Nevertheless, Silicon Valley has still enjoyed a substantial say in what counts as 'ethical' within the AI field.

Deciding what is ethical

The authors detail how initial investments from Silicon Valley converted values such as fairness and ethics from attributes of the good life into central pieces of AI development. Consequently, Silicon Valley became the final arbiter of ethical dilemmas. In addition, ethics has shifted from a domain involving governments and activist groups into a league dominated by a single player. In this way, the world of AI is not treated as a relational endeavour: every player is in it for themselves.

As a result, when algorithms are at fault, the industry has been shaped such that we ask how the algorithm was unethical or unfair instead of questioning why it was deployed in the first place. Subsequently, activist technologists argue that change must come through societal or political action rather than solely through modifying systems. Otherwise, we fall into the broken-part fallacy: we start treating AI problems as individual faults, which does nothing to solve a problem that is, in fact, systemic.

In this way, “even when technical fixes are designed to mitigate harms, they fall short because the socio-technical aspects of how violence happens are not fully addressed by re-design alone.” (p. 98). Hence, refusal efforts come into play to help showcase the socio-political implications of the technology.

Varieties of refusals

For acts of refusal to be possible, it must be feasible to refuse to participate in the Big Tech environment and still hold a place in society. Hence, actions like “keeping personal and private aspects of life offline, such as…the adoption of anonymous social media accounts to speak to a smaller circle of confidants (e.g. “Small Twitter”) are small acts of refusal in this vein.” (p. 99). Through these efforts, we refuse notions of scale and connection to stay out of the problems that the world of Big Tech produces. Without such possibilities, society outside the Big Tech environment remains powerless to refuse.

Between the lines

I find the idea of a holistic approach to AI systems very appealing. In some cases, the technology is clearly at fault, as with Kronos. However, the environment that allows Kronos to exacerbate an existing social problem also requires examination. I like to think of the problem as trying to grow a plant in a desert. If we focus only on why the plant is not growing, we will just keep watering it more and more. The crux of the problem, however, lies in examining the environment itself in order to diagnose the issue fully. Perspective is crucial, and how problems arise is as worthy of consideration as the problems themselves.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
