
AI supply chains make it easy to disavow ethical accountability

August 9, 2023

🔬 Research Summary by David Gray Widder, an incoming Postdoctoral Fellow at the Digital Life Initiative at Cornell Tech, studying how AI creators think about the downstream impact of what they create. You can engage with him on Mastodon or Twitter.

[Original paper by David Gray Widder and Dawn Nafus]


Overview: AI system components are often produced in different organizational contexts. For example, we might not know how upstream datasets were collected or how downstream users may use our system. In our recent article, Dawn Nafus and I show how software supply chains make AI ethics work difficult and lead people to disavow accountability for ethical questions that require scrutiny of upstream components or downstream use.


Introduction

OpenAI has been in the news lately: it has disavowed accountability for the low pay and poor working conditions of the data labelers who filter traumatic content from ChatGPT because it had outsourced this work to a subcontractor. This echoes past debates about ethics in supply chains: Nike at first disavowed the sweatshop conditions in which its shoes were produced because it had outsourced manufacturing to subcontractors.

Our research shows how dislocated organizational contexts in AI supply chains make it easy to disavow accountability for ethical scrutiny of both upstream components (e.g., is this dataset biased?) and downstream uses (e.g., how could people use my system for harm?). Our findings are based on interviews with 27 AI practitioners around the world, from data labelers, academic researchers, and framework builders to model builders and those building end-user systems, whom we asked what ethical issues they saw as within their ability and responsibility to address.

Key Insights

Doing AI ethics well requires you to know a lot. For example, if you are using existing datasets or pretrained models, you want to know whether they exhibit bias, which can be hard to answer if you didn’t collect the data or train the model. You also want to know how people will use your system, so you can reason about what bias would even mean in that context and anticipate how people may misuse it.

However, like many software systems, AI is often built on the idea of “modularity”: we reuse standard components and create our systems to be modular so that no single person needs to understand how each and every piece works (implementation), only how to use it (interface). This manages complexity and helps us build large systems without one person or organization needing to know the innards of, or control, each module in the supply chain. 
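To make the interface/implementation distinction concrete, here is a minimal, self-contained Python sketch (all names, such as `SentimentModel`, are hypothetical illustrations, not from our paper): the downstream builder sees only the `predict()` interface, while everything about how the component was produced, including the provenance of its training data, stays hidden behind the module boundary.

```python
# Minimal sketch of modularity in an AI supply chain (hypothetical names).
# The consumer depends only on the interface; the implementation -- and the
# provenance of its data -- is invisible from this side of the boundary.

from abc import ABC, abstractmethod


class SentimentModel(ABC):
    """Interface: all a downstream builder needs to know."""

    @abstractmethod
    def predict(self, text: str) -> str:
        """Return 'positive' or 'negative' for the given text."""


class PretrainedSentimentModel(SentimentModel):
    """Implementation: produced upstream, possibly in another organization.

    How was the word list below collected? Is it biased? The interface
    gives the downstream user no way to ask.
    """

    _POSITIVE_WORDS = {"good", "great", "excellent"}  # provenance unknown

    def predict(self, text: str) -> str:
        words = set(text.lower().split())
        return "positive" if words & self._POSITIVE_WORDS else "negative"


def build_review_screener(model: SentimentModel):
    """Downstream system: uses the model purely through its interface."""
    def screen(review: str) -> bool:
        return model.predict(review) == "positive"
    return screen


if __name__ == "__main__":
    screener = build_review_screener(PretrainedSentimentModel())
    print(screener("This product is great"))   # True
    print(screener("Utterly disappointing"))   # False
```

The same boundary that makes the component easy to reuse is what makes the upstream bias question unanswerable from the consumer’s side.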

However, we show in the paper that the ethos of modularity has important ethical drawbacks: it makes it harder to answer the upstream questions (is the dataset biased?) and downstream questions (in what context will the system be used? how might it be misused?) that matter when thinking about ethics, and therefore harder to feel responsible for doing this kind of work. In short, the supply-chain nature of AI systems leads people to disavow responsibility for thinking about ethics because it fractures the knowledge that would be needed to do so. We show various ways that AI supply chains inflect this disavowal: through the division of labor, the rush to scale, and status hierarchies that separate ethics tasks from the “real work” of building systems.

How can we proceed? We draw on Lucy Suchman’s notion of located accountability, which suggests that to build responsible systems, we must understand their creation as an “entry into the networks of working relations.” This means looking for the links that may remain in the AI supply chain, and our participants point to possibilities for action in the logic of “customer centricity,” in marketing, in kinships and friendships, and in opportunities for soft resistance.

We conclude by suggesting possible ways forward: (1) we could work within modularity by better delineating who is responsible for which ethical tasks in AI supply chains, so that ethical accountability is better distributed; (2) we could strengthen interfaces in the AI supply chain so that the many partial perspectives ethics work requires can more easily meet; and (3) we could imagine radically different modes of solving problems that do not depend on modular supply chains at all.

Between the lines

Right now, we’re seeing enormous debate about how to regulate AI systems. However, I worry that too often this debate talks about “AI” as if it were created by one monolithic actor, often a company releasing an end-user system. We can build more robust regulation if we find more places to locate accountability, scrutinizing not only final systems but also the components used to create them. Richmond Wong and I explored policy levers further upstream in AI supply chains in a two-page follow-up piece. Blair Attard-Frost and I showed how the related concept of value chains can help integrate and situate AI ethics work in a recent preprint. Others have also explored accountability in AI supply chains, and these ideas have begun to filter into resources for EU policymakers and regulators. Let’s hope this continues!

