Resistance and refusal to algorithmic harms: Varieties of ‘knowledge projects’

June 2, 2022

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Maya Indira Ganesh and Emanuel Moss]


Overview: When it comes to issues in AI, we have two categories of response: resistance and refusal. Resistance comes from within Big Tech, but it proves futile without the refusals produced by external actors. Above all, a holistic approach is required for any response to succeed.


Introduction

The authors explore two categories of responses to AI harms: resistance and refusal. Resistance is associated with Big Tech’s own efforts to combat AI harms, while refusal refers to efforts outside the world of Big Tech that acknowledge the limitations of those efforts. Here, AI doesn’t just cause harm in targeted ways (such as intentional discrimination) but also in ostensibly neutral settings by perpetuating existing inequalities. This observation forms the basis for exploring a holistic approach to AI issues. Along the way, the role of knowledge as power will be explored, including the power to decide what counts as ethical. Opting out of the Big Tech environment will be taken as a key refusal practice before I conclude with the benefits of a holistic and systematic approach to AI issues.

Key Insights

A holistic approach to solving AI issues

Complications associated with fairness and bias in AI are not confined to the AI system itself. Despite this, Big Tech has usually tried to address these problems through tweaks to system design. Yet, in some cases, the AI system exacerbates an existing problem rather than being its source.

The authors mention the case of Kronos, an automated shift scheduler adopted by Starbucks. The system scheduled a single mother for an 8 am shift, meaning she had to get up at 5 am for the 3-hour journey to childcare and then to work. In this case, the AI is not the sole problem: the lack of affordable housing close to childcare centres and workplaces is also to blame.

Understanding how such automated systems work is paramount to mitigating these harms. Consequently, we return to the old maxim that knowledge is power.

Power and knowledge

To resist algorithmic harms, we have to document them. The AIAAIC repository (an open Google spreadsheet tracking AI, algorithmic, and automation incidents and controversies) details current AI issues, and it proves helpful for learning from past mistakes. Importantly, all AI issues are recorded rather than a refined and select few. Were only some recorded, different types of knowledge would begin to carry different values, becoming situated within the power structure of society.

To illustrate, those most involved in the AI sphere (such as Silicon Valley) would be able to shape what is public knowledge and what is not. The power to influence which information is widely accessible risks reducing ‘public’ information to propaganda. Nevertheless, Silicon Valley has enjoyed a substantial say over what counts as ‘ethical’ within the AI field.

Deciding what is ethical

The authors detail how early investments from Silicon Valley converted values such as fairness and ethics from attributes of the good life into central pieces of AI development. Consequently, Silicon Valley became the final arbiter of ethical dilemmas. In addition, ethics shifted from a domain involving governments and activist groups into a league dominated by a single player. In this way, the world of AI is not treated as a relational endeavour: every player is in it for themselves.

As a result, when algorithms are at fault, the industry has been shaped such that we ask how the algorithm was unethical or unfair instead of questioning whether it should have been deployed in the first place. Subsequently, activist technologists argue that change must come through societal or political action rather than solely through modifying systems. Without this, we fall into the broken-part fallacy: we treat AI problems as individual faults, which does nothing to solve a problem that is, in fact, systemic.

In this way, “even when technical fixes are designed to mitigate harms, they fall short because the socio-technical aspects of how violence happens are not fully addressed by re-design alone.” (p. 98). Hence, refusal efforts come into play to help showcase the socio-political implications of the technology.

Varieties of refusals

For acts of refusal to be possible, it must be feasible to refuse to participate in the Big Tech environment and still hold a place in society. Hence, “keeping personal and private aspects of life offline, such as…the adoption of anonymous social media accounts to speak to a smaller circle of confidants (e.g. “Small Twitter”) are small acts of refusal in this vein” (p. 99). Through these efforts, we refuse notions of scale and connection in order to stay out of the problems that the world of Big Tech produces. Without such possibilities, society outside the Big Tech environment remains powerless to refuse.

Between the lines

I find the idea of a holistic approach to AI systems very appealing. In some cases, the technology is clearly at fault, as with Kronos. However, the environment in which Kronos can exacerbate an already existing social problem also requires examination. I like to think of the problem as trying to grow a plant in a desert: if we focus only on why the plant is not growing, we will just keep watering it more and more, when the crux of the matter is that we need to examine the environment itself to diagnose the problem fully. Perspective is crucial, and how problems arise is as worthy of consideration as the problems themselves.
