Montreal AI Ethics Institute


Democratizing AI ethics literacy

The Wrong Kind of AI? Artificial Intelligence and the Future of Labour Demand (Research Summary)

December 8, 2020

Summary contributed by our researcher Alexandrine Royer, who works at The Foundation for Genocide Education.

*Link to original paper + authors at the bottom.


Overview: Daron Acemoglu and Pascual Restrepo, in this exploratory paper, delineate the consequences of future unchecked automation and present new avenues for the positive social development of AI labour technologies.


In this article co-authored with Pascual Restrepo, Daron Acemoglu, Professor of Economics at MIT and the famed author behind Why Nations Fail, argues that we have missed a key piece in the AI labour puzzle. By concentrating on the rise of automation, we have failed to ask whether society is investing in the “right” AI.

Quotes from Stephen Hawking and Elon Musk presenting AI as signalling the end of humanity have fed a popular doomsday view of the rise of labour automation. Economic debates surrounding AI tend to push human agency out of the conversation and accept that we are inevitably heading towards an unstoppable technological revolution in which humans will be rendered obsolete. For Acemoglu and Restrepo, there is still time to stop and think about the kind of AI we are developing and whether we are on the right track to creating AI that has the “greatest potential for raising productivity and generating broad-based prosperity.” Part of this change in mindset involves viewing AI as a “technology platform” and recognizing that none of AI technologies’ economic and social consequences are preordained.

In classical economics, there is an understanding that any advance that increases productivity will naturally lead to an increase in labour demand and, hence, wages. Automation, by replacing workers altogether, threatens to throw this cycle off course. When it comes to automation, firms need to consider whether new technologies’ productivity effect outweighs their displacement effect. The authors point to the double jeopardy of “so-so” automation, whereby an algorithmic system displaces human labourers while generating only incremental productivity gains and concentrating the resulting income among a select few. One example provided is the rise of industrial robots in cities dominated by the motor industry: their introduction largely hurt workers in the lower half of the earnings distribution, while industry owners saw an increase in profits. The managerial classes, those with capital income and college degrees, are the ones who stand to profit from automation.
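To make that trade-off concrete, here is a minimal, purely illustrative numerical sketch. It is not the authors’ formal model; the simple functional form, the demand-elasticity parameter, and all figures are hypothetical, chosen only to show how “so-so” automation can shrink labour demand while strongly cost-reducing automation need not.

```python
# Purely illustrative sketch with hypothetical numbers; this is not the
# authors' formal model, just a toy calculation of the two opposing forces.

def labour_demand_change(share_automated, wage, machine_cost, demand_elasticity=1.0):
    """Rough stylized estimate of the fractional change in labour demand
    after automating `share_automated` of a firm's tasks.

    displacement effect: the automated share of work no longer uses labour.
    productivity effect: unit-cost savings (the wage vs. machine-cost gap on
    the automated tasks), scaled by a demand elasticity, expand output and
    with it the demand for labour in the remaining human tasks.
    """
    displacement = -share_automated
    cost_saving = share_automated * (wage - machine_cost) / wage
    productivity = demand_elasticity * cost_saving * (1 - share_automated)
    return displacement + productivity

# "So-so" automation: machines are barely cheaper than workers, so the
# productivity effect is too small to offset the displaced tasks.
print(labour_demand_change(0.2, wage=20.0, machine_cost=19.0))   # ~ -0.19

# Strongly cost-reducing automation: larger savings cushion the decline.
print(labour_demand_change(0.2, wage=20.0, machine_cost=5.0))    # ~ -0.08

# With more elastic product demand, the productivity effect can dominate
# and overall labour demand can even rise.
print(labour_demand_change(0.2, wage=20.0, machine_cost=5.0, demand_elasticity=2.0))  # ~ +0.04
```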

In response to such trends, Acemoglu and Restrepo assert that “the future of work will be much brighter if we can mobilize more of the technologies that increase labour demand and ensure vigorous productivity growth.” Indeed, as the authors point out, private and public officials can decide whether we want to generate more labour automation that, without counterbalancing innovation, risks further entrenching income and social inequality. They offer suggestions on how to “reinstate AI” in ways that will lead to favourable societal outcomes, such as encouraging AI-powered classroom technologies that can help students better absorb the material being taught and introduce them to new skills. Another example is in healthcare, where AI applications that aggregate medical information can assist healthcare workers in providing real-time health advice, diagnosis, and treatment. Returning to industrial robots, the authors affirm that augmented reality technologies can help human workers perform high-precision production and integrated design tasks alongside these machines.

Acemoglu and Restrepo warn that the laissez-faire, market-based approach towards AI technologies risks undermining such positive innovation in AI development, instead favouring sheer productivity growth and allowing the already rich and powerful to profit. The political pull of highly skilled tech professionals and industrial leaders, if left unchecked, will continue to push AI development towards new forms of automation. Beyond these dire social consequences, one element missing from the authors’ argument is the environmental impact of automated AI systems, whose energy consumption can far outweigh that of human workers. With individuals across the globe experiencing record-breaking temperatures this year, the health of our planet will be another aspect to consider when reflecting on the future of automated labour.


Original paper by Daron Acemoglu, Pascual Restrepo: http://economics.mit.edu/files/18782

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

