
The Artificiality of AI – Why are We Letting Machines Manage Employees?

February 22, 2021

✍️ Column by Alexandrine Royer, our Educational Program Manager.


Algorithms already heavily mediate several aspects of our daily lives, from where we decide to eat and how we get from point A to B, to what news we see and how we organize our day. As Peter Sondergaard, senior vice president at Gartner, observed, “Amazon’s algorithm keeps you buying. Netflix keeps you watching. And newer algorithmic applications like Waze keep you moving… I now have so many smart devices, that the only thing that is not smart, is me.”

Sondergaard’s statement reveals how we often take artificial intelligence to be superior to, or at the very least smarter than, human intelligence. There is a tendency in popular discussions surrounding AI to forget the first part of the term, the artificial, and to focus instead on the second, the intelligence. This is especially true with the growing integration of algorithmic management systems into workplaces that range from small, locally owned businesses to massive multinational corporations.

Despite the enthusiasm for data-driven business development and management, we must not obscure the fact that algorithmic systems are human-made. While their information-processing and quantifying capacities go beyond the mechanics of a single human brain, they have yet to match our complete range of cognitive abilities. Why, then, are we blindly trusting algorithmic management systems to be in charge?

Algorithmic management systems refer broadly to the set of technological tools and techniques used to manage a pool of employees and to enact automated or semi-automated decisions. From managing staff schedules and allocating tasks to issuing performance appraisals and even terminating employment, algorithms increasingly encroach on formerly human managerial roles. Examples of algorithmic management often concentrate on rideshare and food delivery apps (the Ubers, Deliveroos, and Foodoras of the world), though these systems are becoming ever-present across a range of businesses.

Companies such as UPS and FedEx are using algorithms to optimize workers’ daily routes. Target and Walmart rely on algorithmic systems to manage employee timetables. Domestic workers and handypersons contracted through TaskRabbit are advertised, tracked and traced by automated software. Music streaming companies such as Spotify rely on a combination of human and data-driven curation, carried out by what Tiziano Bonini and Alessandro Gandini have called “platform gatekeepers” that set the listening preferences of a global audience. As one Spotify employee confessed, “the culture of having faith in data is the first thing I learned here.”

Having an algorithm act as your boss may seem incredibly novel. Yet algorithmic management, or data-driven management more broadly, is part of a longer historical trend of “scientific management” in the name of increased productivity. At the start of the 20th century, when American industries were booming, Frederick Taylor set out to improve industrial efficiency by claiming that management could be turned into a precise science. A mechanical engineer by training, he followed factory workers around with a stopwatch, noting precisely how long it took each worker to complete a series of motions in order to calculate how tasks could be done faster and streamlined. Complex tasks that took too long were to be broken down or mechanized. Taylor had no sympathy for sluggish workers: if you failed to meet the estimated time, you would be sacked.

Taylor’s techniques, which often placed assembly line workers’ safety at risk, drew considerable backlash. They were famously satirized by Charlie Chaplin in “Modern Times”, where the dehumanizing effects of turning workers into mechanizable and disposable objects took a tragicomic form. A hundred years later, Taylorism is still thriving, and the factory stopwatch has simply been remodelled in digital form.

Algorithmic management systems are heavily criticized for their lack of transparency and accountability, and for their potential bias against employees. Predictably, working for an automated system that acts simultaneously as management and human resources is causing immense frustration and mental strain among employees. Uber drivers in cities across the globe have repeatedly resorted to protests against the company’s low wages and lack of employee support. Facing the constant threat of deactivation (essentially being fired by an algorithm), Uber drivers have criticized the company’s main selling point of “being your own boss.” In response to UberEats couriers’ complaints about plummeting wages, the company responded, “we have no manual control over how many deliveries you receive”.

Algorithmic management is frequently linked to deteriorating working conditions. Amazon warehouse workers, operating under the watchful eye of digital systems, are expected to pack at a rate of 700 items per hour, leading one employee to declare, “I am a human being, I’m not a robot.” Retail and food industry workers, interviewed by Vice, complained of exhaustion and higher rates of depression due to automated scheduling systems, which often made erratic changes. As one anonymized Target employee stated, “the software doesn’t look at the other 51 weeks in a year and know that you haven’t seen your family all year”, adding that “a human in charge of scheduling can”. 

In response to being mismanaged by machines, employees will try to manipulate the system in their favour. Researchers Mareike Möhlmann and Ola Henfridsson found that Uber drivers, feeling dehumanized by the app, frequently resorted to “gaming” the system to win better-paid rides and punish the multibillion-dollar company, providing them with a Robin Hood-esque sense of vindication. Such tactics included drivers collectively logging off at a given time to force a price surge, or cancelling UberPool once the first passenger embarked to avoid long detours. User manipulation of systems was also found among eBay sellers, many of whom felt that the platform’s evaluation procedures relied unfairly on buyers’ reviews. The purported productivity-boosting and cost-reducing benefits of algorithmic management over human supervision come at the expense of workers’ rights, workplace wellbeing and human dignity.

Allowing employees to be run by machines in this way, without any human interference, is not only morally deplorable; its economic benefits may also be overstated. It is well documented that neglecting your workforce’s mental health can have long-term adverse financial consequences. Presenteeism (workers being present but not fully functioning due to poor health) is estimated to cost US and UK companies over $90 billion and £61 billion a year, respectively.

These findings have not stopped companies from reaching new lows. In 2018, Amazon filed a patent for wristbands designed to track and nudge workers’ movements, vibrating whenever efficiency drops. IBM has also patented a system that would track employees’ pupil dilation to monitor fatigue, and dispatch drones to deliver jolts of coffee to those flagged as tired. Being human will no longer be excusable.

As technology keeps moving forward, we must stop and ask why artificially intelligent systems are given priority over human sense-making, agency and wellbeing. And in raising this question, we must remember that these systems are continuously being patented, financed, marketed and deployed by a small corporate elite that remains largely unchecked by governments and other regulatory bodies.

Regulating algorithmic management systems must go beyond a simple “human-in-the-loop” approach. It requires a profound reconsideration of why we sanction technological abuse in the chase towards an ultimately unsustainable goal of economic productivity. Governments must amend current labour laws before allowing even more of these algorithmic management systems to be put in place. If not, in this race to the bottom, the vast majority of the world’s workers will be penalized, with very few reaping the rewards.
