Montreal AI Ethics Institute


The Wrong Kind of AI? Artificial Intelligence and the Future of Labour Demand (Research Summary)

December 8, 2020

Summary contributed by our researcher Alexandrine Royer, who works at The Foundation for Genocide Education.

*Link to original paper + authors at the bottom.


Overview: Daron Acemoglu and Pascual Restrepo, in this exploratory paper, delineate the consequences of future unchecked automation and present new avenues for the positive social development of AI labour technologies.


Daron Acemoglu, Professor of Economics at MIT and famed author of Why Nations Fail, argues in this article, co-authored by Pascual Restrepo, that we have missed a key piece of the AI labour puzzle. By concentrating on the rise of automation, we have failed to ask whether society is investing in the “right” AI. 

Quotes from Stephen Hawking and Elon Musk presenting AI as signalling the end of humanity have fostered a popular doomsday view of the rise of labour automation. Economic debates surrounding AI tend to push human agency out of the conversation and accept that we are inevitably heading towards an unstoppable technological revolution in which humans will be rendered obsolete. For Acemoglu and Restrepo, there is still time to stop and think about the kind of AI we are developing and whether we are on the right track to creating AI with the “greatest potential for raising productivity and generating broad-based prosperity.” Part of this change in mindset involves viewing AI as a “technology platform” and recognizing that none of AI technologies’ economic and social consequences are preordained. 

In classical economics, it is understood that any advance that increases productivity will naturally lead to an increase in labour demand and, hence, wages. Automation, by replacing workers altogether, threatens to throw this cycle off course. When it comes to automation, firms need to consider whether new technologies’ productivity effect outweighs their displacement effect. The authors point to the double jeopardy of “so-so” automation, whereby systems that displace human labourers generate productivity gains only incrementally better than before while raising incomes for a select few. One example is the rise of industrial robots in cities dominated by the motor industry: their introduction largely had negative effects on workers in the lower half of the earnings distribution, while industry owners saw an increase in profits. The managerial classes, those with capital income and college degrees, are the ones who will profit from automation. 
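The productivity-versus-displacement comparison can be made concrete with a toy calculation. This is only an illustrative sketch of the intuition, not the authors’ formal model; the function name and all numbers are hypothetical:

```python
# Toy illustration of "so-so" automation: productivity gains barely
# exceed (or fall short of) the labour displaced, so labour demand falls.
# All figures are hypothetical and chosen only to convey the intuition.

def labour_demand_change(productivity_gain: float, share_displaced: float) -> float:
    """Net effect on labour demand: the productivity effect (greater
    output raises demand for workers in remaining tasks) minus the
    displacement effect (tasks taken over by machines no longer
    employ workers)."""
    return productivity_gain - share_displaced

# "So-so" automation: a small 5% productivity gain while 10% of tasks
# are displaced -- the net effect on labour demand is negative.
so_so = labour_demand_change(0.05, 0.10)

# Strongly productivity-enhancing automation: a 20% gain with the same
# 10% of tasks displaced -- the net effect is positive.
strong = labour_demand_change(0.20, 0.10)

print(so_so < 0, strong > 0)
```

Under this stylized view, the policy question the authors raise is which of the two regimes our current mix of AI investment resembles.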

In response to such trends, Acemoglu and Restrepo assert that “the future of work will be much brighter if we can mobilize more of the technologies that increase labour and ensure vigorous productivity growth.” Indeed, as the authors point out, private and public officials can decide whether we want to generate more labour automation that risks–without counterbalancing innovation–further entrenching income and social inequality. They offer suggestions on how to “reinstate AI” in ways that will lead to favourable societal outcomes, such as encouraging AI-powered classroom technologies that can help students better absorb the taught material and introduce them to new skills. Another example is in healthcare, where AI applications that aggregate medical information can assist healthcare workers in providing real-time health advice, diagnosis, and treatment. Going back to industrial robots, the authors affirm that augmented reality technologies can help human workers perform high-precision production and integrated design tasks alongside these machines. 

Acemoglu and Restrepo warn that a laissez-faire, market-based approach to AI technologies risks undermining such positive innovation in AI development, instead favouring sheer productivity growth and allowing the already rich and powerful to profit. The political pull of highly skilled tech professionals and industry leaders, if left unchecked, will continue to push AI development towards new forms of automation. Alongside these dire social consequences, missing from the authors’ argument is the environmental impact of automated AI systems, whose energy consumption can far outweigh that of the human labour they replace. With individuals across the globe experiencing record-breaking temperatures this year, the health of our planet will be another aspect to consider when reflecting on the future of automated labour. 


Original paper by Daron Acemoglu, Pascual Restrepo: http://economics.mit.edu/files/18782

