
Algorithmic Domination in the Gig Economy

March 23, 2022

šŸ”¬ Research summary by Connor Wright, our Partnerships Manager.

[Original paper by James Muldoon and Paul Raekstad]


Overview: Despite being seen as a solution to human bias, algorithms can foster a relationship of domination and a power structure between bosses and workers. We explore how this is expressed within the gig economy and the precarious situation it creates for its on-demand workers.


Introduction

Algorithms were initially employed as a solution to human bias. However, increasing reports question whether they can avoid this trap. Studying the effect of ā€œalgorithmic dominationā€ (p. 2) on our freedom, the authors focus on the ride-hailing and food delivery sectors of the gig economy (built around contractors receiving short-term tasks). For the on-demand workers in these sectors, algorithmic domination means being subjected to a power over which they have no control. Rather than a new form of power, algorithmic domination amplifies existing power structures. I will now explore how this amplification is expressed and how it affects the workers involved.

Key Insights

Utilising a republican framework, domination through algorithms is interpreted as being ā€œsubject to uncontrolled powerā€ (p. 4). Algorithms facilitate this domination in three ways:

  1. The control of the flow of information 

Bosses can withhold key information from workers. Most apps examined in the literature covered are unidirectional, meaning workers cannot communicate directly with the company or negotiate parts of their contract as they would with human management. Even within the limited communication available to them, workers are left second-guessing whether a message comes from an algorithm or a human agent.

With less human engagement involved, workers are less likely to have the chance to negotiate any potential compromises. The unidirectional nature of the apps covered also means that workers do not have access even to their own data, let alone the relevant information needed to inform any work-related decisions they make. Hence, the disparities in ā€œknowledge and computational capacities exceed certain standard employer/employee relationshipsā€ (p. 7). Consequently, workers are at the informational behest of those in charge.

  2. Performance assessment

Through algorithms, bosses can assess workers’ performance in a round-the-clock, surveillance-like fashion. Because this assessment is automated, the effects of customer reviews on these workers are profound.

  3. Who carries out the task and when

Workers are subjected to leadership decisions on who carries out a task and when it gets done. The dynamic pricing used within ride-hailing and food delivery apps means that workers cannot contest the remuneration offered for different tasks. As a result, prices fluctuating at peak times allow the company to control even when people work.

In sum, these contexts give rise to ā€œalgorithmic managementā€ (p. 6), a more pervasive, continuous and intense form of management.

Reflecting on algorithmic domination

It is worth noting that algorithmic domination is not only about the algorithm, but about how it perpetuates the already present asymmetric power relations between bosses and workers. Services such as Uber and Deliveroo see themselves as championing ideals of flexible and autonomous labour, yet the algorithm relies on the ā€œtechnology, social relationships and economic institutionsā€ (p. 7) within these entities to fully take hold. Hence, the bosses’ decision to opt in to or out of sustaining such a system becomes the pivotal moral moment.

While prolonging this situation, companies have vowed to ā€˜do better’. Yet, without advocating for systemic change, such promises amount ā€œto a claim about their intentions to use their uncontrolled power in better ways, not to actions that would remove their uncontrolled powerā€ (p. 13). Higher pay and better conditions do not eliminate the power relations at play but merely obscure them. Hence, absent a change to the rules of the game, ā€œwe should be sceptical of voluntary pledges by tech companiesā€ (p. 8).

Between the lines

I found this text both illuminating and frightening at the same time. While the ā€œalgorithmic panopticonā€ (p. 13) is not an inevitable fate, the precarious employment conditions are very tangible. Removing the human element from management takes away workers’ ability to fight for themselves: they lose the opportunity to negotiate with a fellow human being and instead bang their heads against an automated brick wall. Hence, while there is a superficial sense of liberty in the gig economy, the invisible algorithmic chains persist.

