Montreal AI Ethics Institute


Democratizing AI ethics literacy


Resistance and refusal to algorithmic harms: Varieties of ‘knowledge projects’

June 2, 2022

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Maya Indira Ganesh and Emanuel Moss]


Overview: When it comes to AI harms, there are two categories of response: resistance and refusal. Resistance describes efforts from within Big Tech, which prove futile without the refusal practices of external actors. Above all, a holistic approach is required for any response to succeed.


Introduction

The authors explore two categories of responses to AI harms: resistance and refusal. Resistance is associated with efforts made within Big Tech to combat AI harms, while refusal covers efforts outside Big Tech that acknowledge the limits of those internal efforts. Here, AI causes harm not only in targeted ways (such as intentional discrimination) but also in seemingly neutral settings, by perpetuating existing inequalities. This forms the basis for exploring a holistic approach to AI issues. The summary then turns to the role of knowledge as power, which includes deciding what counts as ethical, and takes opting out of the Big Tech environment as a key refusal practice, before concluding on the benefits of a holistic, systemic approach to AI issues.

Key Insights

A holistic approach to solving AI issues

Complications associated with fairness and bias in AI are not confined to the AI system itself. Despite this, Big Tech has usually tried to address these problems through tweaks to system design. Yet in some cases an AI system exacerbates an existing problem rather than being its source.

The authors mention the case of Kronos, an automated shift scheduler adopted by Starbucks, which scheduled a single mother for an 8 am shift, forcing her to get up at 5 am for a three-hour journey to childcare and then to work. In this case, the AI is not the central problem; the lack of affordable housing close to childcare centres is also blameworthy.
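The dynamic can be sketched in a toy model. This is purely illustrative and not how Kronos works: the `assign_shift` function, the worker names, and the `respect_constraints` option are all hypothetical, invented to show how a scheduler that optimizes only for store coverage stays blind to constraints (like a childcare commute) that live outside the system.

```python
def assign_shift(shifts, workers, respect_constraints=False):
    """Greedily assign each shift start time to the first available worker.

    shifts: list of shift start hours (24h clock)
    workers: dict of name -> earliest feasible start hour, reflecting
             outside-the-system constraints like commute and childcare
    """
    assignments = {}
    for start in sorted(shifts):
        for name, earliest in workers.items():
            if name in assignments.values():
                continue  # worker already has a shift
            if respect_constraints and start < earliest:
                continue  # skip starts this worker cannot reasonably make
            assignments[start] = name
            break
    return assignments

workers = {"single_parent": 10, "other_worker": 7}
shifts = [8, 12]

# Coverage-only view: the 8 am shift goes to the first worker listed,
# regardless of their circumstances.
print(assign_shift(shifts, workers))
# {8: 'single_parent', 12: 'other_worker'}

# Once the commute constraint is made visible to the scheduler,
# the assignment flips.
print(assign_shift(shifts, workers, respect_constraints=True))
# {8: 'other_worker', 12: 'single_parent'}
```

The point of the sketch is the authors': the "fix" is not a cleverer greedy loop but making the scheduler see constraints that are social, not technical, in origin.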

Understanding how such automated systems work is paramount to mitigating these harms. Consequently, we revert back to the old maxim that knowledge is power.

Power and knowledge

To resist algorithmic harms, we have to document them. The AIAAIC repository (an open Google spreadsheet) details AI incidents and controversies, helping us learn from past mistakes. Notably, it records all AI issues rather than a refined, select few. Were only a select few recorded, different types of knowledge would begin to carry different values, becoming situated within the power structure of society.

To illustrate, those most involved in the AI sphere (such as Silicon Valley) would be able to shape what is public knowledge and what is not. The power to decide what information is widely accessible would reduce ‘public’ information to propaganda. Even so, Silicon Valley has enjoyed a substantial say in what counts as ‘ethical’ within the AI field.

Deciding what is ethical

The authors detail how early investments from Silicon Valley converted values such as fairness and ethics from attributes of the good life into central pieces of AI development. Consequently, Silicon Valley became the final arbiter of ethical dilemmas. Ethics has shifted from a domain involving governments and activist groups into a league dominated by a single player. In this way, the world of AI is not treated as a relational endeavour; every player is in it for themselves.

As a result, when algorithms are at fault, the industry has been shaped such that we ask how the algorithm was unethical or unfair instead of questioning whether it should have been there in the first place. Activist technologists therefore argue that change must come through social or political action rather than solely through modifying systems. Otherwise, we fall into the broken-part fallacy: treating AI problems as individual defects, which does nothing to solve a problem that is, in fact, systemic.

In this way, “even when technical fixes are designed to mitigate harms, they fall short because the socio-technical aspects of how violence happens are not fully addressed by re-design alone.” (p. 98). Hence, refusal efforts come into play to help showcase the socio-political implications of the technology.

Varieties of refusals

For acts of refusal to be possible, it must be feasible to refuse to participate in the Big Tech environment and still hold a place in society. Hence, actions like “keeping personal and private aspects of life offline, such as…the adoption of anonymous social media accounts to speak to a smaller circle of confidants (e.g. “Small Twitter”) are small acts of refusal in this vein.” (p. 99). Through these efforts, we refuse notions of scale and connection to stay out of the problems that the world of Big Tech produces. Without such possibilities, society outside the Big Tech environment remains powerless to refuse.

Between the lines

I find the idea of a holistic approach to AI systems very appealing. In some cases the technology itself is at fault, as with Kronos, but the environment in which Kronos can exacerbate an already existing social problem also requires examination. I like to think of the problem as trying to grow a plant in a desert: if we focus only on why the plant is not growing, we will just keep watering it more and more, when the crux of the problem lies in the environment around it. Perspective is crucial, and how problems arise is as worthy of consideration as the problems themselves.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
