AI supply chains make it easy to disavow ethical accountability

August 9, 2023

🔬 Research Summary by David Gray Widder, an incoming Postdoctoral Fellow at the Digital Life Initiative at Cornell Tech, studying how AI creators think about the downstream impact of what they create. You can engage with him on Mastodon or Twitter.

[Original paper by David Gray Widder and Dawn Nafus]


Overview: AI system components are often produced in different organizational contexts. For example, we might not know how upstream datasets were collected or how downstream users may use our system. In our recent article, Dawn Nafus and I show how software supply chains make AI ethics work difficult, and how they lead people to disavow accountability for ethical questions that require scrutiny of upstream components or downstream use.


Introduction

OpenAI has been in the news lately: it has disavowed accountability for the low pay and poor working conditions of the data labelers who filter traumatic content from ChatGPT because it had outsourced this work to a subcontractor. This echoes past debates on ethics in supply chains: Nike at first disavowed the sweatshop working conditions in which its shoes were produced because it had outsourced production to subcontractors.

Our research shows how dislocated organizational contexts in AI supply chains make it easy to disavow accountability for ethical scrutiny of both upstream components (e.g., is this dataset biased?) and downstream uses (e.g., how could people use my system for harm?). This is based on interviews with 27 global AI practitioners, ranging from data labelers, academic researchers, and framework builders to model builders and those building end-user systems, in which we asked them what ethical issues they saw as within their ability and responsibility to address.

Key Insights

Doing AI ethics well requires you to know a lot. For example, if you are using existing datasets or pretrained models, you want to know whether they exhibit bias, which can be challenging to answer if you didn’t collect the data or train the model. You might also want to know how people will use your system, so you can think about what bias would even mean in that context, or how people may misuse it.

However, like many software systems, AI is often built on the idea of “modularity”: we reuse standard components and create our systems to be modular so that no single person needs to understand how each and every piece works (implementation), only how to use it (interface). This manages complexity and helps us build large systems without one person or organization needing to know the innards of, or control, each module in the supply chain. 
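To make the interface/implementation split concrete, here is a minimal sketch (our illustration, not from the paper) of how a downstream developer typically reuses a pretrained model: the Hugging Face transformers pipeline exposes a one-line interface, while the model’s training data, labeling practices, and architecture stay hidden upstream. The task and example text are purely illustrative.

```python
# Minimal sketch (illustrative, not from the paper): reusing a pretrained
# model through its interface, with no visibility into its implementation.
from transformers import pipeline

# One call gives us a working classifier. The interface hides how the
# underlying model was trained, what data it saw, and who labeled it.
classifier = pipeline("sentiment-analysis")

print(classifier("This loan application looks risky."))
# We can inspect outputs, but answering "is this model biased?" requires
# knowledge of upstream data collection and training we never observed.
```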

However, we show in the paper that the ethos of modularity has important ethical drawbacks: it makes it harder to know the answers to the upstream (dataset bias?) and downstream (context? misuse?) questions that matter when thinking about ethics, and therefore harder to feel responsible for doing this kind of work. In short: the supply chain nature of AI systems leads people to disavow responsibility for thinking about ethics because it fractures the knowledge that would be needed to do so. We show various ways that AI supply chains inflect this disavowal, including through the division of labor they enable, the rush to scale, and status hierarchies between ethics tasks and the “real work” of building systems.

How can we proceed? We draw on Lucy Suchman’s notion of located accountability, which suggests that to build responsible systems, we must understand their creation as an “entry into the networks of working relations.” This means looking for the links that remain in the AI supply chain, and our participants point to possibilities for action in the logic of “customer centricity,” in marketing, in kinships and friendships, and in opportunities for soft resistance.

We conclude by suggesting possible ways forward: (1) we could work within modularity by better delineating who is responsible for which ethical tasks in AI supply chains, to better distribute ethical accountability; (2) we could strengthen interfaces in the AI supply chain so that it is easier for the many partial perspectives that ethics work requires to meet; and finally, (3) we could imagine radically different modes of solving problems that do not depend on modular supply chains.

Between the lines

Right now, we’re seeing enormous debate about how to regulate AI systems. However, I am worried that too often, this debate talks about “AI” as if it is created by one monolithic actor, often a company releasing an end-user system. We can build more robust regulation if we find more places to locate accountability, scrutinizing not only final systems but also the components used to create them. Richmond Wong and I explored policy levers higher upstream in AI supply chains in a two-page follow-up piece. Blair Attard-Frost and I showed how the related concept of Value Chains can help integrate and situate AI ethics work in a recent preprint. Others have also explored accountability in AI supply chains, and these ideas have begun to filter into resources for EU policymakers and regulators. Let’s hope this continues!

