Montreal AI Ethics Institute

Democratizing AI ethics literacy

Research summary: Artificial Intelligence: The Ambiguous Labor Market Impact of Automating Prediction

July 27, 2020

Summary contributed by Ryan Khurana, a graduate of the Master of Management Analytics program at UofT’s Rotman School of Management.

*Authors of original paper & link at the bottom


Mini-summary:

The automation impacts of artificial intelligence have been the subject of much discussion and debate, often hampered by a poor demarcation of the limits of AI. Agrawal, Gans, and Goldfarb have provided a framework that helps us understand where AI fits into organizations and what tasks are at risk of being automated. They argue that AI is fundamentally a prediction technology, and prediction is one of the key aspects of a decision-task, though not the only one. Things like judgement and action are also critical parts of decision-making and are not susceptible to direct automation by AIs. This does not mean, however, that they will not be affected by improved prediction, since the value of judgement and action may change as predictions become cheaper, better, and faster. Using this framework they identify four possible ways AI can affect jobs: replacing prediction tasks, replacing entire decision tasks, augmenting decision tasks, and creating new decision tasks. 

A job is highly susceptible to automation when the work is mainly one of prediction. Tasks like responding to emails done by executive assistants, case summaries done by paralegals, and demand forecasting done by operations staff are all capable of being done by AIs either now or in the near future. These jobs, however, often include non-prediction tasks as well, and redesigning workflows to emphasize the uniquely human tasks is critical for enabling workers to continue to provide value. Another possibility is judgement and action tasks being automated because predictions make them easier. The authors give the example of self-driving cars, which can process the environment, anticipate the behavior of other actors, and predict the outcomes of different maneuvers with greater accuracy and speed than a human; under these conditions the expert judgement required for good driving may become less valuable. A third possibility is that better predictions enable humans to do their jobs better, as in emergency rooms, where faster and higher-quality assessments improve medical staff's ability to prioritize patients and deliver better care. Finally, the rise of AI may create new tasks that were previously too costly or unnecessary; we already see this with the rise of data labelling. By providing this framework the authors enable a more nuanced assessment of the impacts of automation, allowing policy makers, educators, and business leaders to understand which tasks should be emphasized and which initiatives can help workers prepare for AI's impact on their jobs.

Full summary:

The automation impacts of artificial intelligence have been the subject of much discussion and debate, often hampered by a poor demarcation of the limits of AI. In this paper, Agrawal, Gans, and Goldfarb build on the framework introduced in their book Prediction Machines: that AI is fundamentally a prediction technology, and that as it improves it will lower the cost of prediction, resulting in wider use of the technology in prediction tasks. The effect this has on labor then depends on the relative importance of prediction in a given job. They identify four possibilities.

The first is that many job tasks are pure prediction. Forecasting work within operations departments, legal summary work done by paralegals, and email response work done by executive assistants can all be substituted by AIs as is, possibly with greater efficiency, threatening these jobs if they do not include other high value-add tasks. A second possibility is that while a task may have a decision component beyond prediction, that component may no longer matter once predictions are better and cheaper. They give the example of autonomous vehicles: driving is a common task that involves both prediction (what is happening in the environment around you, and the potential payoffs of each decision) and judgement (what is the right action to take given this information). The judgement component may only be important because humans cannot predict as quickly and accurately as an autonomous vehicle. Should understanding the environment and the outcome of each action become nearly instantaneous and highly accurate, picking the action with the highest expected reward under a given set of rules would leave human judgement with reduced importance. In these contexts, even full decision tasks could be substituted by AIs when predictions are fast, cheap, and accurate enough.
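The prediction/judgement decomposition described above can be made concrete with a small decision-theoretic sketch. Everything below is illustrative: the states, actions, probabilities, and payoffs are hypothetical numbers invented for this example, not taken from the paper. A model supplies the prediction (a probability distribution over states of the world), judgement supplies the payoffs for each action in each state, and the decision is simply the action with the highest expected payoff.

```python
# Illustrative sketch of a decision task split into prediction and judgement.
# All numbers are hypothetical, chosen only to make the framework concrete.

def expected_payoff(action, state_probs, payoffs):
    """Expected payoff of an action, summed over the predicted states."""
    return sum(p * payoffs[(action, state)] for state, p in state_probs.items())

# Prediction: the machine's estimate of the state of the world
# (e.g. is the road ahead clear, or is there an obstacle?).
state_probs = {"clear": 0.9, "obstacle": 0.1}

# Judgement: payoffs a human (or designer) assigns to each (action, state) pair.
payoffs = {
    ("proceed", "clear"): 1.0,
    ("proceed", "obstacle"): -100.0,  # hitting an obstacle is very costly
    ("brake", "clear"): -0.1,         # small cost of an unnecessary stop
    ("brake", "obstacle"): 0.5,
}

# Decision: pick the action with the highest expected payoff.
actions = ["proceed", "brake"]
best = max(actions, key=lambda a: expected_payoff(a, state_probs, payoffs))
print(best)  # -> brake: the large downside of hitting an obstacle dominates
```

Once the payoff table is fixed, the whole decision reduces to arithmetic over the prediction, which is the authors' point: with fast, cheap, accurate predictions, the residual "judgement" step can be codified in advance and the entire decision task automated.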

The third possibility is that AI results in greater demand for labor as expert judgement becomes an important complement to better prediction. In emergency medicine, for example, should diagnostics become better, faster, and cheaper through AI, medical staff gain a more accurate understanding of patient needs, allowing them to prioritize workloads and make targeted interventions. This increased productivity could in turn make hospitals more efficient, requiring more staff to provide even more care. Finally, the possibility exists that entirely new types of tasks are created by the advent of AI. We can already see this in the data labelling industry that has arisen to support deployed models. As prediction becomes better and cheaper, tasks that were unfeasible when prediction was poor and costly may now become worth doing.

This paper provides a valuable framework for decomposing the impacts of AI on jobs and a language for understanding the different effects it can have on workers depending on their specific circumstances. This theoretical understanding can serve as the basis for more nuanced quantifications of AI's employment impact, help policy makers and educators identify skills with stable demand, and help businesses prepare for worker transitions by identifying which employees need skills support.


Original paper by Ajay Agrawal, Joshua S. Gans, and Avi Goldfarb: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3341456 




© MONTREAL AI ETHICS INSTITUTE. All rights reserved 2024. This work is licensed under a Creative Commons Attribution 4.0 International License.