Montreal AI Ethics Institute

Democratizing AI ethics literacy


Research summary: Changing My Mind About AI, Universal Basic Income, and the Value of Data

August 17, 2020

Summary contributed by Sneha Deo, a computer scientist (PM @ Microsoft), grassroots organizer, and musician based in Seattle, WA.

*Author of full paper & link at the bottom


As Artificial Intelligence grows more ubiquitous, policy-makers and technologists dispute what will happen to the labor landscape: it could leave behind an underemployed, impoverished working class, or it could provide a higher standard of living for all, regardless of employment status. Many now claim the latter outcome will come to pass if AI-generated wealth can support a Universal Basic Income (UBI) – an unconditional monetary allocation to every individual. In the article “Changing my Mind about AI, Universal Basic Income, and the Value of Data”, author Vi Hart examines this claim for its practicality and pitfalls.

Through this examination, the author deconstructs the belief that AI renders humans obsolete. This belief, the author notes, benefits the owners of profitable AI systems, allowing them to acquire the on-demand and data labor they need at unfairly low rates – often for less than a living wage, or for free. And although UBI is a useful introduction to wealth redistribution, it does not address the underlying dynamics of this unbalanced labor market. Calling for the fair attribution of prosperity, the author proposes an extension to UBI: a model of compensation that assigns explicit value to the human labor that keeps AI systems running.

Full summary:

Artificial Intelligence may soon become powerful enough to change the landscape of work. When it does, will it devastate the job market and widen the wealth gap, or will it lay the foundation for a technological utopia where human labor is no longer required? In the past five years, an increasingly popular idea has developed at the intersection of these seemingly opposed theories: human work may become obsolete, but AI will generate such excess wealth that redistribution in the form of a Universal Basic Income becomes possible. In the article “Changing my Mind about AI, Universal Basic Income, and the Value of Data”, author Vi Hart explores the attractive pairing of UBI and AI – long prophesied by tech industry leaders – and weighs its practicality and pitfalls.

Universal Basic Income is a program that provides every individual with a standardized, unconditional income. It has been presented as a salve for the existential problem of mass unemployment as AI replaces human workers. It could reduce financial dependence on traditional jobs, freeing individuals to pursue meaningful (rather than market-driven) skill development. And although UBI may appear costly, the relative cheapness of AI labor could generate the capital needed for redistribution.

While it might seem an ideal solution at first glance, UBI doesn’t address the most dangerous threat presented by AI: the devaluation of the human labor that makes AI programs work. 

For the past five years, the tech elite have justified the devaluation of the human worker by claiming artificial intelligence will be orders of magnitude more productive than human labor. They extend this line of reasoning by idealizing “pure” AI, which will move beyond the need for human participation altogether.

But this rhetoric is untrue: human contributions are necessary inputs for AI to make decisions. AI is only as useful as the “collective intelligence” it draws upon – human-generated data collected knowingly or unknowingly. The gig economy of producing data through online marketplaces like Amazon Mechanical Turk is unregulated and can pay less than a living wage. This is, in part, because the value of data is set by an unbalanced data market (a monopsony, in which buyers hold outsized power), as much of this data is collected for free in exchange for the use of online services.

In addition to their role in data creation, human workers participate in customer service, delivery, and other on-demand tasks under the guise of full automation. Call center workers, content moderators, and other humans invisibly fill in the “last mile” of decisions that AI systems cannot make. This illusion helps justify the artificially low value of data labor, even though that labor will generate massive wealth for corporations. 

In sum, a marketplace radically transformed by AI will likely drive workers’ perceived worth down – and UBI may not reverse the harmful results. The utopian vision of AI and UBI touted by the tech elite deflects corporations’ responsibility to pay for the data labor that is so valuable to them. The author proposes a solution that goes beyond UBI to establish “data dignity”: fair compensation for data labor in a balanced marketplace. Above all else, individuals must be recognized and valued for their data. They must be able to reason about the value of their contributions and choose whether to contribute.


Original paper by Vi Hart: https://theartofresearch.org/ai-ubi-and-data/

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

About Us


Founded in 2018, the Montreal AI Ethics Institute (MAIEI) is an international non-profit organization equipping citizens concerned about artificial intelligence and its impact on society to take action.

  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.