Montreal AI Ethics Institute


Research summary: Changing My Mind About AI, Universal Basic Income, and the Value of Data

August 17, 2020

Summary contributed by Sneha Deo, a computer scientist (PM @ Microsoft), grassroots organizer, and musician based in Seattle, WA.

*Author of full paper & link at the bottom


As Artificial Intelligence grows more ubiquitous, policy-makers and technologists dispute what will happen to work. The resulting labor landscape could leave behind an underemployed, impoverished working class, or it could support a higher standard of living for all, regardless of employment status. Many now claim the latter outcome will come to pass if AI-generated wealth can support a Universal Basic Income, an unconditional monetary allocation to every individual. In the article “Changing my Mind about AI, Universal Basic Income, and the Value of Data”, author Vi Hart examines this claim for its practicality and pitfalls.

Through this examination, the author deconstructs the belief that humans are rendered obsolete by AI. The author notes that this belief benefits the owners of profitable AI systems, allowing them to acquire the on-demand and data labor they need at unfairly low rates, often for less than a living wage or for free. And although UBI is a useful introduction to wealth redistribution, it does not address the underlying dynamics of this unbalanced labor market. Calling for the fair attribution of prosperity, the author proposes an extension to UBI: a model of compensation that assigns explicit value to the human labor that keeps AI systems running.

Full summary:

Artificial Intelligence may soon become powerful enough to change the landscape of work. When it does, will it devastate the job market and widen the wealth gap, or will it lay the foundation for a technological utopia where human labor is no longer required? Over the past five years, a potential intersection between these seemingly opposed theories has grown increasingly popular: human work may become obsolete, but AI will generate such excess wealth that redistribution in the form of a Universal Basic Income becomes possible. In the article “Changing my Mind about AI, Universal Basic Income, and the Value of Data”, author Vi Hart explores this attractive pairing of UBI and AI, long prophesied by tech industry leaders, and weighs its practicality and pitfalls.

Universal Basic Income is a program that provides every individual with a standardized, unconditional income. It has been presented as a salve for the existential problem of mass unemployment as AI replaces human workers. It could reduce financial dependence on traditional jobs, freeing individuals to pursue meaningful (rather than market-driven) skill development. And although UBI may appear costly, the relative cheapness of AI labor could generate capital for redistribution.

While it might seem an ideal solution at first glance, UBI doesn’t address the most dangerous threat presented by AI: the devaluation of the human labor that makes AI programs work. 

For the past five years, the tech elite have justified the devaluation of the human worker by claiming that artificial intelligence will be orders of magnitude more productive than manual work. They extend this line of reasoning by idealizing “pure” AI, which would eliminate the need for human participation altogether.

But this rhetoric is untrue: human contributions are necessary inputs for AI to make decisions. AI is only as useful as the “collective intelligence” it draws upon, which is human-generated data collected knowingly or unknowingly. The gig economy of producing data through online marketplaces like MTurk is unregulated and can pay less than a living wage. This is, in part, because the value of data is set by an unbalanced data market, a monopsony in which many sellers face effectively one buyer, and because much of that data is collected for free in exchange for the use of online services.
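
To make the monopsony point concrete, here is a minimal, stylized sketch in Python of the standard textbook single-buyer model. All numbers, variable names, and the linear supply curve are hypothetical assumptions for illustration only; they do not come from the article.

a, b = 2.0, 0.01      # hypothetical labor supply curve w(L) = a + b*L, in dollars per hour
mrp = 15.0            # hypothetical value (marginal revenue product) of one hour of data labor

# Competitive benchmark: competition among buyers bids the wage up to the value created.
competitive_wage = mrp
competitive_hours = (mrp - a) / b

# Monopsony: the lone buyer equates the value of an extra hour with the marginal
# cost of labor (a + 2*b*L), then pays only the lower supply-curve wage for that quantity.
monopsony_hours = (mrp - a) / (2 * b)
monopsony_wage = a + b * monopsony_hours

print(f"competitive: ${competitive_wage:.2f}/hour for {competitive_hours:.0f} hours")
print(f"monopsony:   ${monopsony_wage:.2f}/hour for {monopsony_hours:.0f} hours")

Under these made-up figures, the lone buyer pays about $8.50 for an hour of labor that creates $15 of value. That gap mirrors the devaluation of data labor which, as the summary argues, UBI alone does not address.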

In addition to their role in data creation, human workers participate in customer service, delivery, and other on-demand tasks under the guise of full automation. Call center workers, content moderators, and other humans invisibly fill in the “last mile” of decisions that AI systems cannot make. This illusion helps justify the artificially low value of data labor, even though that labor will generate massive wealth for corporations. 

In sum, a marketplace radically transformed by AI will likely drive down workers’ perceived worth, and UBI may not reverse the harmful results. The utopian vision for AI and UBI touted by the tech elite deflects corporations’ responsibility to pay for the data labor that is so valuable to them. The author proposes a solution that goes beyond UBI to establish “data dignity”: fair compensation for data labor in a balanced marketplace. Above all else, individuals must be recognized and valued for their data. They must be able to reason about the value of their contributions and choose whether to contribute.


Original paper by Vi Hart: https://theartofresearch.org/ai-ubi-and-data/

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
