Montreal AI Ethics Institute

Democratizing AI ethics literacy


Enough With “Human-AI Collaboration”

July 30, 2023

🔬 Research Summary by Advait Sarkar, an affiliate lecturer at the University of Cambridge and an honorary lecturer at UCL.

[Original paper by Advait Sarkar]


Overview: The term “human-AI collaboration” is misleading and inaccurate. It erases the labor of AI producers and obscures the often exploitative relationship between AI producers and consumers. Instead of viewing AI as a collaborator, we should view it as a tool or an instrument. This is more accurate and ultimately fairer to the humans who create and use AI.


Introduction

The term “human-AI collaboration” is everywhere these days. We hear it in the tech industry, in the media, and even in our everyday conversations. But what does it really mean? Does the “labor” we attribute to AI really represent work done by the machine?

The term “human-AI collaboration” obscures the often exploitative relationship between AI producers and consumers. To do what they do, AI systems require careful training and annotation by thousands of data workers. The majority of this AI labor is done by people in the Global South, who are often paid very little for their work. The term “collaboration” implies that AI producers and consumers are working together on equal footing when in reality, the relationship is often one-sided and unfair.

This is a cautionary tale about the dangers of misusing language. The term “human-AI collaboration” is a convenient metaphor, but it is also a dangerous one. It can lead us to overlook the real-world implications of our interactions with AI systems.

We need to be careful about how we talk about AI. The language we use shapes the way we think about AI, and the way we think about AI shapes the way we structure our societal divisions of labor, credit, attribution, and reward. If we want to use AI in a fair and equitable way, we need to start by using accurate and honest language.

Key Insights

Behind AI, a hidden workforce toils away, largely overlooked and underappreciated: the data labelers, the unsung heroes behind the scenes of artificial intelligence. As we marvel at the magic of AI, we often forget that it is not just the work of programmers and users but also the expertise of data labelers that fuels these intelligent systems.

AI systems learn from carefully labeled training data provided by human experts. But these data labelers, often from the Global South, are like ghost workers, hidden from view by those who seek to maintain their illusions of scalability and technological “magic.”

In an industry where the demand for labeled data is exploding, the grueling work of data annotation falls on the shoulders of workers in countries such as Kenya, India, and the Philippines who earn less than $30 a week, their creativity and skills reduced to mere datasets. Rather than being acknowledged as knowledge partners, they are treated as disposable commodities, subject to surveillance and punitive measures intended to improve data quality.

The consequences of this oversight are far-reaching. Despite promises of social inclusion and mobility, the reality is a dead-end for these workers, with little room for advancement beyond the hierarchical structures of the industry.

The results of labor have often been separated from their source, with far-reaching consequences. The commodities that drove the era of colonialism (sugar, coffee, cocoa, cotton, and tobacco) were packaged and sanitized for European consumers, distancing the products from their horrifying origins. In artificial intelligence, we see this labor distancing at play again, as AI systems replay human judgments made on the other side of the world.

The middlemen of AI software development profit from labor arbitrage, echoing past exploitations. As AI promises productivity, we must heed the call to agitate against labor distancing, recognizing the value and agency of those whose knowledge drives the machines.

Synthetic or open datasets won’t magically solve AI’s labor exploitation problem. While generating synthetic data or using freely accessible information from the internet seems like a remedy, it’s not foolproof. Synthetic datasets work best when there are clear mathematical models, like in computer graphics, but this is not the case for every scenario. Similarly, automatic feedback is efficient in constrained settings like games but lacks the flexibility needed for broader applications. Human-labeled data remains crucial for accurately representing desired behaviors.

Open data, while seemingly harmless, conceals its own exploitation. Social media platforms profit from users who unwittingly become digital laborers, providing content without realizing they’re the product. The issue extends beyond labor, as these platforms capture behavioral information, anticipating and influencing future interests.

Using data from underrepresented communities can diversify AI, but it raises ethical questions of consent and representation.

Is AI truly a collaborator or just a tool? AI, after all, is not just any ordinary hammer or scalpel; it can write stories, generate stunning images, and perform tasks unimaginable just a few short years ago.

Instead, some call AI a “supertool” or a “cognitive extender.” Yet our desire to elevate AI to a higher status than other tools may stem from our own pride and human exceptionalism. We tend to value intelligence that resembles our own, overlooking the unique capabilities of AI and other forms of intelligence in the animal kingdom. The question remains: does AI truly understand the tasks we collaborate on? For now, the answer is no. AI lacks the rich sensorimotor context that humans have for acquiring language and meaning. But the media, academia, and industry have a vested interest in presenting AI as something more than a tool.

It’s time to challenge this narrative and recognize that AI is indeed a tool, not a collaborator or partner. Let’s empower users without ignoring the labor behind AI and acknowledge that human-AI collaboration is, in fact, human-human collaboration distanced and disguised. The path to a fair and equitable future in AI begins with recognizing the true nature of this powerful tool.

Between the lines

The metaphor of human-AI collaboration suggests that we work together with AI systems as equals. In reality, the apparent intelligence of AI systems depends on the labor of countless data annotators.

The article raises some important questions that warrant further research. For example, how can we ensure that AI systems are used in a way that is fair and equitable? How can we credit the humans who work behind the scenes to create AI systems? And how can we ensure that AI systems do not degrade the craftsmanship inherent in knowledge work?

The article suggests that we must be more careful about how we talk about human-AI collaboration. We need to be aware of the power imbalances that exist and find ways to ensure that AI systems are used in a way that benefits everyone.

Here are some additional directions for further research:

  • How do people from different cultures and backgrounds understand the metaphor of human-AI collaboration?
  • How does the metaphor of human-AI collaboration affect how people interact with AI systems?

By paying attention to the metaphors we use with AI systems, we can design systems that are more inclusive and beneficial to everyone.



© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.