Montreal AI Ethics Institute

Democratizing AI ethics literacy

Research Summary: Risk Shifts in the Gig Economy: The Normative Case for an Insurance Scheme against the Effects of Precarious Work

September 13, 2020

Research summary contributed by Anne Boily. She is a doctoral student at the Université de Montréal, in the Department of Political Science. Her research interests are the ethics of AI and its dialogue with policymakers.

*Author & link to original paper at the bottom.


Mini-summary: Authors Bieber and Moggia examine the gig economy from a political philosophy perspective. The notion of “risk shifting” is central to their analysis and remains relatively unexplored in the discipline’s research on labour. The gig economy refers to the practice of hiring workers for specific, temporary tasks, so that the supply of labour tracks demand. The authors’ central thesis is critical of the gig economy’s deleterious effects on workers: risk is shifted onto them and becomes a personal burden.

They propose a policy framework for policymakers, the “Principle of Inverse Coverage” (PIC), that would reduce these risks and compensate workers, and by extension society, for the disadvantages they suffer, without prohibiting outright the way the gig economy operates; that mode of operation is not entirely detrimental, but its harmful sides are never entirely erased. This policy would stabilize gig workers’ working conditions and, by restoring their agency, allow them to plan for the future. Compared to a universal basic income (UBI), the PIC does not spread firms’ risks across the whole population, yet it still compensates workers for the risks to which they are exposed.

Full summary:

Authors Bieber and Moggia examine the gig economy from a political philosophy perspective. The notion of “risk shifting” is central to their analysis and remains relatively unexplored in the discipline’s research on labour. The article is also written from the point of view of workers themselves, especially low-skill workers, rather than from that of firms and employers.

The gig economy refers to the practice of hiring workers for specific, temporary tasks, so that the supply of labour tracks demand; think of companies like Lyft, Uber, or even Google (pp. 1-2). The authors’ central thesis is critical of the gig economy’s deleterious effects on workers. They propose a policy framework for policymakers that would reduce these risks and compensate workers, and by extension society, for the disadvantages they suffer, without prohibiting outright the way the gig economy operates; that mode of operation is not entirely detrimental, but its harmful sides are never entirely erased (pp. 7, 15-16).

Bieber and Moggia present their argument in three sections:

1) Their diagnosis of the gig economy and its “risk shift”. This shift consists in firms offloading risk onto workers, for whom it becomes personal: overly flexible and therefore precarious working conditions, no guaranteed income, difficulty making long-term plans, no time for further training, and high levels of stress (pp. 3-5, 7-9). Moreover, disparities within the population are aggravated, which can make social solidarity harder to achieve (pp. 12-15). Firms “externalize their risk” onto third parties through five strategies, and push one another to adopt them in order to remain competitive in the market (pp. 5-6):

1. Short-term contracts
2. Flexible number of hours
3. Flexible remuneration
4. A flexible schedule
5. Less insurance coverage

2) A normative analysis of these “risk shifts”, which weaken workers and expose them to situations of domination and exploitation. Workers appear free to make their own choices, but as the market changes they are, in reality, less and less so; they depend too much on the supply of work to position themselves freely (pp. 3, 10-12).

3) A proposal to policymakers, the “Principle of Inverse Coverage” (PIC), which contains two key aspects (pp. 18-20), illustrated in a stylized sketch below:

1. “A contribution side”: the introduction of a Pigouvian tax (on the same principle as a carbon tax), which forces employers to compensate financially for the deleterious effects of their mode of operation.
2. “An expenditure side”: the proceeds of the Pigouvian tax could finance social insurance for gig workers, ensuring a more stable income stream without discouraging them from working; if their total working hours decrease, the insurance income decreases as well.
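The paper’s argument is normative and does not specify any formulas; purely to make the two sides of the PIC concrete, here is a minimal, hypothetical Python sketch in which a Pigouvian levy on a firm’s gig payouts funds an insurance top-up that scales with a worker’s hours. All rates, caps, and function names are invented for illustration and are not drawn from Bieber and Moggia.

```python
# Hypothetical illustration only: Bieber and Moggia do not give formulas.
# A stylized Pigouvian levy on gig firms funds an hours-linked insurance top-up.

def contribution(gig_payouts: list[float], levy_rate: float = 0.05) -> float:
    """Contribution side: a levy proportional to what a firm pays out for gig work."""
    return levy_rate * sum(gig_payouts)

def insurance_topup(hours_worked: float, hourly_benefit: float = 4.0,
                    capped_hours: float = 40.0) -> float:
    """Expenditure side: a benefit that scales with hours actually worked,
    so it shrinks when total working hours shrink and does not discourage work."""
    return hourly_benefit * min(hours_worked, capped_hours)

# Example: a firm paying $10,000 in gig fees contributes $500 to the fund;
# a worker who logged 25 hours this week would receive a $100 top-up.
fund = contribution([10_000.0])
print(fund)                    # 500.0
print(insurance_topup(25.0))   # 100.0
```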

This policy would stabilize gig workers’ working conditions and, by restoring their agency, allow them to plan for the future. Compared to a universal basic income (UBI), the PIC does not spread firms’ risks across the whole population, yet it still compensates workers for the risks to which they are exposed.


Original paper by Bieber, Friedemann & Jakob Moggia: https://onlinelibrary.wiley.com/doi/abs/10.1111/jopp.12233

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
