Montreal AI Ethics Institute

Democratizing AI ethics literacy


The Wrong Kind of AI? Artificial Intelligence and the Future of Labour Demand (Research Summary)

December 8, 2020

Summary contributed by our researcher Alexandrine Royer, who works at The Foundation for Genocide Education.

*Link to original paper + authors at the bottom.


Overview: In this exploratory paper, Daron Acemoglu and Pascual Restrepo delineate the consequences of unchecked automation and present new avenues for the positive social development of AI labour technologies.


Daron Acemoglu, Professor of Economics at MIT and the famed author behind Why Nations Fail, argues in this article, co-authored with Pascual Restrepo, that we have missed a key piece of the AI labour puzzle. By concentrating on the rise of automation, we have failed to ask whether society is investing in the “right” kind of AI.

Quotes from Stephen Hawking and Elon Musk presenting AI as the end of humanity have popularized a doomsday view of labour automation. Economic debates surrounding AI tend to push human agency out of the conversation and accept that we are inevitably heading towards an unstoppable technological revolution in which humans will be rendered obsolete. For Acemoglu and Restrepo, there is still time to stop and think about the kind of AI we are developing and whether we are on track to creating AI with the “greatest potential for raising productivity and generating broad-based prosperity.” Part of this change in mindset involves viewing AI as a “technology platform” and recognizing that none of AI technologies’ economic and social consequences are preordained.

In classical economics, any advance that increases productivity is understood to raise labour demand and, hence, wages. Automation, by replacing workers altogether, threatens to throw this cycle off course. When weighing automation, firms must consider whether a new technology’s productivity effect outweighs its displacement effect. The authors point to the double jeopardy of “so-so” automation, in which systems that displace human labourers generate only marginal productivity gains while raising incomes for a select few. One example is the rise of industrial robots in cities dominated by the motor industry: their introduction largely harmed workers in the lower half of the earnings distribution, while industry owners saw profits increase. The managerial classes, those with capital income and college degrees, are the ones who profit from automation.
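The productivity-versus-displacement comparison above can be sketched as a toy calculation. The numbers below are purely illustrative (not drawn from the paper): the net effect on labour demand is modelled as the productivity effect minus the displacement effect.

```python
def net_labour_demand_effect(productivity_gain: float,
                             displacement: float) -> float:
    """Net change in labour demand: productivity effect minus
    displacement effect (positive means demand for labour rises)."""
    return productivity_gain - displacement

# Transformative automation: large productivity gains offset displacement.
brilliant = net_labour_demand_effect(productivity_gain=0.30, displacement=0.20)

# "So-so" automation: displaces workers while barely raising productivity.
so_so = net_labour_demand_effect(productivity_gain=0.05, displacement=0.20)

print(f"transformative automation net effect: {brilliant:+.2f}")  # +0.10
print(f"so-so automation net effect:          {so_so:+.2f}")      # -0.15
```

Under these hypothetical numbers, “so-so” automation leaves overall labour demand lower than before, which is the double jeopardy the authors describe.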

In response to such trends, Acemoglu and Restrepo assert that “the future of work will be much brighter if we can mobilize more of the technologies that increase labour and ensure vigorous productivity growth.” Indeed, as the authors point out, private and public officials can decide whether we want to generate more labour automation that, without counterbalancing innovation, risks further entrenching income and social inequality. They offer suggestions on how to “reinstate” labour through AI in ways that lead to favourable societal outcomes, such as AI-powered classroom technologies that help students better absorb the taught material and introduce them to new skills. Another example is healthcare, where AI applications that aggregate medical information can assist healthcare workers in providing real-time health advice, diagnosis, and treatment. Returning to industrial robots, the authors affirm that augmented-reality technologies can help human workers perform high-precision production and integrated design tasks alongside these machines.

Acemoglu and Restrepo warn that a laissez-faire, market-based approach to AI technologies risks undermining such positive innovation in AI development, instead favouring sheer productivity growth and allowing the already rich and powerful to profit. The political pull of highly skilled tech professionals and industry leaders, if left unchecked, will continue to push AI development towards new forms of automation. Beyond these dire social consequences, missing from the authors’ argument is the environmental impact of automated AI systems, whose energy consumption can far outweigh that of human labour. With people across the globe experiencing record-breaking temperatures this year, the health of our planet will be another aspect to consider when reflecting on the future of automated labour.


Original paper by Daron Acemoglu, Pascual Restrepo: http://economics.mit.edu/files/18782


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.