Montreal AI Ethics Institute


The AI Junkyard: Thinking Through the Lifecycle of AI Systems

April 5, 2021

✍️ Column by Alexandrine Royer, our Educational Program Manager.


In mid-March, Netflix revealed details of how our streaming consumption patterns tally into the company’s carbon footprint. DIMPACT, a tool that calculates digital companies’ emissions, determined that one hour of streaming is roughly equivalent to running a ceiling fan for four hours in North America or six hours in Europe. While such statistics seem neither shocking nor particularly alarming, they omit what we might be doing instead of spending an additional hour on our computers.

Streaming services also do not operate in a given environment in the same way as, say, a ceiling fan or a washing machine. A more precise accounting of the environmental and social impacts of streaming would include the energy costs of charging our laptops to keep streaming, securing access to high-speed Internet, upgrading devices, discarding old hardware, and so on.

The widespread use of streaming platforms shows how a single technological change produces ripple effects that modify our habits, our energy needs and our carbon footprint, even when those effects appear minute. As many anthropologists have argued, AI systems should be viewed as socio-technical systems rather than single bounded entities. The term invites a holistic approach to understanding the social and environmental impact of having these algorithms run in our lives, and of what happens once they fall into disuse.

Our inability to see and comprehend the lines of code behind our favourite apps and platforms has fostered the view of AI systems as operating in a virtual realm of their own. The invisibility of algorithmic systems contributes to the lack of transparency around the biases integrated within, and integral to, these systems. Kept away from prying eyes are the massive data centers required to keep these systems running, along with their polluting effects, which lets us quickly forget that tech is a double-edged sword capable of both preventing and generating damage to the environment.

Discussions of our long-term reliance on tech are often tinged with techno-dystopian discourses of artificial intelligence taking over and spelling the end of the human race. Such alarmist views encourage a distorted picture of AI systems’ current capacities and of what is probable and possible. Instead, I argue for a more critical inquiry into the social and environmental effects of AI systems, one that follows each step of a system’s life cycle and how it interacts with previously existing structures along the way.

As Eric Broda has highlighted, the ML lifecycle presented by tech companies such as Microsoft and Google is typically broken into the same stages: understanding the data and project objectives, acquiring and engineering data, model development and training, and model deployment and monitoring. For Broda, the requirements of model reproducibility, traceability and verifiability tend to be omitted or underplayed in these AI/ML lifecycles. To these criteria, I would add the sustainability of AI systems: the long-term consequences of keeping these systems running and their likely expiration date within the tech space.

To better address these shortcomings, Broda suggests introducing a lifecycle catalogue, a type of “book of record” that “provides a viewport into the data science lifecycle” by allowing “data scientists to visualize, categorize, group, manage, and govern all of an enterprise’s data assets”. The lifecycle catalogue can be a valuable tool for estimating an AI system’s impacts, from the code itself to its connections with other existing systems and its deployment into our lives. It can provide visible ethical ‘checkpoints’ for legislators and citizens alike to understand the implications of each stage of the AI/ML process.
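Broda does not prescribe a concrete schema for such a catalogue, but the idea can be sketched minimally. In the illustrative sketch below, the stage names follow the lifecycle described above, and the `est_energy_kwh` and `retirement_plan` fields are hypothetical additions standing in for the sustainability checkpoints this column argues for; none of these names come from Broda’s proposal.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import List, Optional

class Stage(Enum):
    # The four stages commonly shown in vendor AI/ML lifecycles,
    # plus the end-of-life stage this column argues is missing.
    OBJECTIVES = auto()        # data understanding / project objectives
    DATA_ENGINEERING = auto()  # acquiring and engineering data
    TRAINING = auto()          # model development and training
    DEPLOYMENT = auto()        # model deployment and monitoring
    DECOMMISSION = auto()      # disposal of the model and its data

@dataclass
class CatalogueEntry:
    """One asset in a hypothetical lifecycle catalogue ('book of record')."""
    asset: str
    stage: Stage
    reproducible: bool = False                # Broda's often-omitted requirements
    traceable: bool = False
    verifiable: bool = False
    est_energy_kwh: Optional[float] = None    # sustainability checkpoint (illustrative)
    retirement_plan: Optional[str] = None     # what happens to the data at end of life

    def open_checkpoints(self) -> List[str]:
        """List the ethical 'checkpoints' still unaddressed for this asset."""
        gaps = [name for name, ok in [("reproducible", self.reproducible),
                                      ("traceable", self.traceable),
                                      ("verifiable", self.verifiable)] if not ok]
        if self.est_energy_kwh is None:
            gaps.append("energy estimate")
        if self.retirement_plan is None:
            gaps.append("retirement plan")
        return gaps

entry = CatalogueEntry(asset="recommender-model-v3", stage=Stage.DEPLOYMENT,
                       reproducible=True, traceable=True)
print(entry.open_checkpoints())  # ['verifiable', 'energy estimate', 'retirement plan']
```

The point of the sketch is only that a catalogue entry makes the gaps inspectable: a legislator or auditor can ask, per stage, which checkpoints remain open rather than trusting an opaque pipeline.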

The catalogue also pushes us to reflect on what happens at the end of a system’s life, knowing that systems cannot exist forever and that their masses of accumulated data must eventually be disposed of. Scholars such as Edina Harbinja have already pushed for legislating and regulating post-mortem privacy to protect the trails of data that individuals leave behind through a lifetime of online activity. But beyond the individual level, little is known about how companies dispose of their data (perhaps for good security reasons). Just as importantly, few reports have addressed the consequences of dismantling systems that people have come to rely on.

With the lifecycle catalogue mindset, we can return to our initial example of Netflix. Throughout its deployment stage, Netflix will have accumulated precise personal information about viewers’ series and movie preferences. If Netflix suddenly becomes unable to compete with other streaming providers and must be discontinued, it will have to dispose of the masses of data on its users. Even if the data is anonymized, individuals can often be re-identified by cross-checking it against user profiles on other public sites, such as IMDb.
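The cross-checking risk described here is what privacy researchers call a linkage attack: two datasets that each look harmless on their own are joined on shared quasi-identifiers. The toy sketch below illustrates the mechanism with invented names and ratings, matching “anonymized” profiles to public reviews on hypothetical title-and-date pairs; it is not a reconstruction of any real dataset.

```python
# Toy linkage attack: "anonymized" ratings are re-identified by matching
# rating patterns against publicly posted reviews. All data is invented.
anonymized = [
    {"user": "u1", "ratings": {("Film A", "2021-03-01"), ("Film B", "2021-03-04")}},
    {"user": "u2", "ratings": {("Film A", "2021-03-02"), ("Film C", "2021-03-05")}},
]
public_reviews = {  # e.g. reviews posted under real names on a public site
    "Alice": {("Film A", "2021-03-01"), ("Film B", "2021-03-04")},
    "Bob":   {("Film C", "2021-03-05")},
}

def link(anon, public, threshold=2):
    """Match each anonymous profile to any public profile sharing at least
    `threshold` (title, date) pairs -- no direct identifiers needed."""
    matches = {}
    for record in anon:
        for name, reviews in public.items():
            if len(record["ratings"] & reviews) >= threshold:
                matches[record["user"]] = name
    return matches

print(link(anonymized, public_reviews))  # {'u1': 'Alice'}
```

Even two overlapping (title, date) pairs suffice to unmask “u1” here, which is why deleting or releasing “anonymized” viewing data at a system’s end of life is not automatically safe.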

Alongside these privacy concerns are the environmental costs of keeping such systems running. In the continuous learning and monitoring phase, Netflix’s algorithms are tweaked to incentivize viewers to keep watching for hours on end, increasing both the company’s demand for computational power and the individual’s energy consumption. Individuals looking to improve their streaming experience will be encouraged to acquire high-speed internet and devices with better image quality. Tracking the sustainability of AI systems throughout their lifecycle will require monitoring all of these factors.

Thinking holistically about AI systems also involves thinking about their endpoint. AI as socio-technical systems can create a series of connections but also ruptures and breaks. How much do we know about AI systems that have been discarded? To frame this question otherwise, what might be resting in the AI junkyard? And what can we learn from it?


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.