Montreal AI Ethics Institute

Democratizing AI ethics literacy


Tech Futures: The threat of AI-generated code to the world’s digital infrastructure

March 2, 2026

Close-up of a cat sleeping on a computer keyboard

✍️ By Ismael Kherroubi Garcia.

Ismael is Founder & Co-lead of the Responsible Artificial Intelligence Network (RAIN), and Founder & CEO of Kairoi.


📌 Editor’s Note: This article is part of our Tech Futures series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Responsible Artificial Intelligence Network (RAIN). The series challenges mainstream AI narratives, proposing that rigorous research and science are better sources of information about AI than industry leaders. This third installment of Tech Futures describes the threat of low-quality, AI-generated code being contributed at scale to the world’s open source digital infrastructure.


Generative AI tools are enabling people to make low-quality contributions to open source projects whose maintainers are already overstretched, and often unpaid volunteers. Before delving into what this means, consider an analogy.

Cats famously sit on laptops when you’re at work; they might make it impossible for you to type, or even push a few random keys themselves. Driven by curiosity, they might get up close and swipe at anything intricate that you’re working on, especially if there’s string involved. Finally, go under a sink to do some maintenance and the cat might just curl up in the seemingly new space it had never seen before.

“Vibe contributing” to open source repositories is about as helpful as a cat folding the laundry. The term describes mostly first-time contributors who don’t understand the code they’re submitting but want to be part of whatever project they’re attempting to contribute to, using generative AI tools to vibe-code their way in. These contributions, like all others, are subject to repositories’ quality assurance processes, led by “maintainers.” For maintainers, reviewing low-quality code is especially burdensome when it is produced at scale. As one report finds, “AI accelerates output, but it also amplifies certain categories of mistakes,” with AI-generated code containing 1.7 times as many errors as human-written code.
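The arithmetic behind that amplification is worth spelling out. As a toy illustration (every number here is hypothetical except the 1.7x error multiplier cited above), if AI tools double the rate at which pull requests arrive while each one carries 1.7 times the errors, maintainers face roughly 3.4 times the errors to catch:

```python
# Toy model of maintainer review burden. All inputs are illustrative
# assumptions except the 1.7x error multiplier reported for AI-generated code.

def review_burden(prs_per_month: float, errors_per_pr: float,
                  speedup: float, error_multiplier: float) -> float:
    """Errors reaching reviewers per month, given a contribution speedup
    and an error multiplier for the new contributions."""
    return prs_per_month * speedup * errors_per_pr * error_multiplier

# A hypothetical project: 10 PRs a month, 2 errors per PR on average.
baseline = review_burden(10, 2.0, speedup=1.0, error_multiplier=1.0)  # ~20/month
with_ai = review_burden(10, 2.0, speedup=2.0, error_multiplier=1.7)   # ~68/month

print(with_ai / baseline)  # roughly 3.4
```

The point is multiplicative: faster output and higher error rates compound, so the review burden grows faster than either factor alone would suggest.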

But this is only one part of a much more complex picture. For starters, maintainers are often not paid for their work; in other words, vibe contributors are adding to the workload of volunteers. And maintainers are critical: when they are stretched and under-resourced, peer review of potential contributions suffers, and bugs get missed or deprioritized. What’s more, open source resources are already under great pressure from companies that want better services, governments imposing regulations, and systems that must adhere to high standards. The diversity of these pressures points to the critical nature of what open source code ultimately is: the world’s digital infrastructure.

A tower symbolising all modern digital infrastructure is held up by a project some random person in Nebraska has been thanklessly maintaining since 2003. A playful cat is looking closely at the project.

Caption: © 2026 Responsible Artificial Intelligence Network (RAIN) and Ismael Kherroubi Garcia, CC BY 4.0, adapted from xkcd.com (Dependency) and Ricinator on Pixabay

Towards Robust Digital Infrastructure

Open source projects range from entire operating systems (e.g., Linux) to messaging applications (e.g., Signal) and field-specific software (e.g., R for statistics and data visualization). Their variety and openness mean that many others can adapt and adopt their code (provided they respect any licensing requirements), including private and public sector organizations, and indeed other open source initiatives. The result is a complex interweaving of tools, dependencies and licences that seeps into much of the closed-source software we are exposed to day-to-day, whether we know it or not.
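That interweaving of dependencies can be sketched with a toy graph. Every package name below is hypothetical, but the pattern mirrors how adopting one open source tool quietly commits you to every project it depends on, each with its own maintainers and licence terms:

```python
# A hypothetical dependency graph: adopting "my_app"'s two direct
# dependencies transitively pulls in several more projects.
TOY_DEPS = {
    "my_app": ["web_framework", "stats_lib"],
    "web_framework": ["http_parser", "crypto_lib"],
    "stats_lib": ["linear_algebra"],
    "http_parser": [],
    "crypto_lib": ["bignum"],
    "linear_algebra": [],
    "bignum": [],
}

def transitive_deps(package: str, graph: dict) -> set:
    """Collect every package reachable from `package` via dependency edges."""
    seen = set()
    stack = [package]
    while stack:
        current = stack.pop()
        for dep in graph[current]:
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

print(sorted(transitive_deps("my_app", TOY_DEPS)))
# Two direct dependencies turn into six projects in total.
```

Real ecosystems run to hundreds or thousands of such edges, which is why a single under-maintained project deep in the graph can matter to software far above it.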

Given how competitive the computer science job market is, showing public contributions to impactful open-source projects can be a differentiating factor. Unfortunately, there is an undeniably steep learning curve to making a valuable contribution. Even if we assume a contributor writes high-quality code, it takes time and research to ensure it meets the standards of the initiative to which they want to contribute. Then there is the need to understand relevant dependencies, and to collaborate with maintainers and other contributors to test assumptions and polish ideas. All in all, it takes hard work to make valuable changes to the world’s digital infrastructure.

Getting a job in what seems to be a lucrative field is not the only incentive for low-quality contributions; there is also the overemphasis on innovation over maintenance. Indeed, when open source work is funded at all, it is usually innovation that attracts the money, not the mundane tasks of keeping digital infrastructure afloat. In this regard, the push for innovation and productivity that dominates AI narratives lends itself seamlessly to the interests of funders in the open source ecosystem.

Until 2014, Meta (then Facebook) followed the motto “move fast and break things.” To this day, the phrase describes the spirit of many developments in data science and AI; indeed, the very essence of Silicon Valley. This preference for speed over care, and quantity over quality, might be seen as overlapping with the incentive structures of the open source ecosystem, where innovation is valued above maintenance. With this, open source contributions have become ripe for the injection of AI-generated content.

However, Big Tech ideals run contrary to what open source stands for. Overwhelming maintainers with low-quality code is indicative of the tension between open source and Big Tech. Another source of tension is Big Tech co-opting the “open source” label without meeting the high standards expected by maintainers and their wider communities. And this is without mentioning the long history and diversity of open science practices that are foundational to open source. Greater awareness of open source systems, practices and histories may dissuade poor code contributions, but much wider cultural and structural changes will be needed to stop vibe coders from undermining the world’s digital infrastructure.

Image credit: Gerhard Bögner from Pixabay

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.
  • Learn more about our open access policy here.