Montreal AI Ethics Institute
Democratizing AI ethics literacy

Research summary: Working Algorithms: Software Automation and the Future of Work

August 3, 2020

Summary contributed by Alexandrine Royer, who works at The Foundation for Genocide Education.

*Authors of full paper & link at the bottom


Mini-summary: Fears surrounding automation and human labour have long preoccupied economists, with the advance of “smart machines” threatening to trigger mass unemployment. Contrary to this perspective, there is the argument that new forms of labour and human-machine complementarity will sustain human labour. Through a 19-month participant-observation study at AllDone, a software startup in San Francisco, Shestakofsky gained first-hand insight into the relations between workers and technologies across three stages of corporate development. He documented how new complementarities continued to emerge between humans and software systems as the startup grew. Rather than relying solely on economic abstractions, daily observation can help identify to what extent software systems can operate autonomously and where they continue to require human assistance. Through the author’s micro-study, we can trace how companies, particularly startups, will need to remain dynamic and adaptive in the face of automation. Shestakofsky’s work also points to larger macro-trends in the domestic and global division of labour.

Full summary:

Predictions about automation, job loss and the future of labour are often grounded in macro-level phenomena such as labour markets, job categories and work tasks. These large-scale observations often fail to account for the lags and gaps in AI systems, which cannot fully operate without human assistance. Using the example of the startup AllDone, Shestakofsky demonstrates how human workers will not be replaced but will continue to work alongside machines. His close examination of the startup’s development reveals how AI systems continue to rely on human skills and complementary forms of emotional labour. The author also examines instances where human labour is preferred over computational labour for cost-saving and strategic reasons.

AllDone was a tech startup that sought to transform local service markets by building a website connecting buyers and sellers of small services, from drape-making to cleaning and construction jobs. In the first stage of its development, when AllDone was looking to attract users to its system, it relied on a Filipino contract team to collect information, target potential users and conduct a digital marketing campaign. This team provided what the author terms “computational labour” at a point when AllDone’s software engineers lacked the resources to design and develop an automated system. After acquiring a customer base, AllDone turned its attention to securing sellers on the website. Many sellers were small entrepreneurs or individual workers who did not understand the design and rules of the system and often voiced frustration over the lack of responses to their quotes. To bridge this knowledge gap, AllDone hired a team of customer service agents who patiently explained the system to new sellers and offered advice on how to improve their profiles. This team provided the emotional labour needed to help users adapt to the system. In its third stage, when AllDone sought to extract greater profits from its users and sellers, it relied on both emotional labour, to convince users to keep their subscriptions, and computational labour, to prevent sellers from circumventing and gaming the new rules.

The author credits AllDone’s internal dynamism for its ability to respond to the challenges facing new startups and to create new complementarities between humans and machines. The rough edges of machine systems will almost always require complementary human workers to smooth them out. The company’s limited resources meant that it could not automate every task. Even with its AI systems in place, software engineers often called on workers to quickly collect information or test out features. A human touch was also necessary to build a loyal customer base. While more forms of economic exchange will likely become technologically mediated, key emotional skills of persuasion, support and empathy remain difficult to automate. Context also matters here: AllDone’s development was dictated by the expectations of the venture capitalists who funded its growth. The author does not touch upon how this dynamism might play out in a larger, more established corporation with greater resources at its disposal.

While some new complementarities may appear between humans and machines, it also appears that old habits die hard. Shestakofsky’s study reveals how, even with new technologies and expectations of change, longstanding trends and divides in labour persist. The emotional labour of counselling potential sellers and reassuring customers was performed by the women who staffed the phone support line. Women have long been cast as support and care workers, and these perceptions are replicated in technologies and within the tech industry, as seen in the ubiquity of female voices in virtual home and phone assistants. Cheap and disposable labour is extracted from the global south, where underpaid Filipino workers are given short-term contracts to fill in where computational resources are missing. Measured against North American working and wage standards, their efforts are largely undercompensated. Outsourcing, offshoring and contracting labour to the global south will persist even in a world of software automation, with underpaid workers providing the human assistance needed to build and keep these systems running while the managerial class accrues the wealth. Rather than pushing humans out of production, tech automation may be replicating and reinforcing inequities at the domestic and global scale, as well as deepening the divide between the managerial and working classes.

A few words of caution: the startup appears to no longer be in existence, most of the observations took place in 2011, and there is a limit to how much we can extrapolate from a single observational case study. Shestakofsky’s article nonetheless provides interesting insights into how we think about human-machine interactions and the future of labour. Instead of fearing the replacement of human workers, we can turn our attention to ensuring the positive development of human-machine complementarities and preventing vulnerable people from bearing the brunt of emotional and underpaid labour.


Original paper by Benjamin Shestakofsky: https://digitalassets.lib.berkeley.edu/etd/ucb/text/Shestakofsky_berkeley_0028E_18112.pdf




© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.