Montreal AI Ethics Institute

Democratizing AI ethics literacy

Efficiency is Not Enough: A Critical Perspective of Environmentally Sustainable AI

December 14, 2023

🔬 Research Summary by Dustin Wright, a postdoc at the University of Copenhagen working on sustainable machine learning.

[Original paper by Dustin Wright, Christian Igel, Gabrielle Samuel, and Raghavendra Selvan]


Overview: The main response to the carbon emissions and environmental impacts of compute-hungry and energy-intensive machine learning (ML) has been to improve the efficiency with which ML systems operate. In this article, we present three reasons why efficiency is not enough to address the environmental impacts of ML and propose systems thinking as a way to make further progress toward this goal.


Introduction

ML in the form of deep learning has led to rapid progress in AI. Still, the success of ML systems is tied to the amount of compute they use, which can consume significant energy, produce significant carbon emissions, and increase the demand for hardware to run ML systems. This threatens to make ML unsustainable from an environmental perspective. The dominant solution touted by much of the community is to make things more efficient. Here, we describe three discrepancies between the promise of efficiency and its reality, arguing that efficiency is not enough to make ML environmentally sustainable:

  1. Compute, energy, and carbon efficiency are not the same
  2. Efficiency has unexpected effects across ML model life cycles
  3. Efficiency does not account for and can potentially worsen broad environmental impacts from hardware platforms

Systems thinking provides a lens and framework for dealing with complexity and can help us make ML environmentally sustainable.

Key Insights

Compute, energy, and carbon efficiency are not the same

Intuitively, lowering compute lowers energy consumption in kind. However, the relationship between compute and energy consumption is not so straightforward. For example, making a model smaller by removing some of its parameters, i.e., making it sparse, is seen as a way to make it more efficient. However, this can actually increase the energy consumption of that model when used on certain hardware, as calculations with sparse models can be more energy-intensive. In other words, making a model compute efficient will only sometimes make it energy efficient.
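This divergence between compute efficiency and energy efficiency can be sketched with a toy energy model. The per-operation costs below are illustrative assumptions, not measurements from the paper:

```python
# Toy energy model for a pruned ("sparse") network on hardware without
# native sparsity support. All figures are illustrative assumptions.
dense_flops = 1e9            # operations performed by the dense model
sparse_flops = 0.5e9         # pruning removes half of the operations
energy_per_dense_op = 1.0    # arbitrary energy units per dense operation
energy_per_sparse_op = 2.5   # irregular memory access costs more per op

dense_energy = dense_flops * energy_per_dense_op
sparse_energy = sparse_flops * energy_per_sparse_op

# Despite halving the compute, the sparse model consumes 25% more energy.
```

The point is not the specific numbers but the structure: halving the operation count only halves the energy if the cost per operation stays constant, which hardware often does not guarantee.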

Additionally, carbon emissions are a function of both energy consumption and carbon intensity. Carbon intensity measures how much carbon will be emitted per unit of energy consumed on a given electric grid at a particular time. This varies enormously by both time and location. Given this, making an ML model energy efficient does not necessarily reduce its carbon emissions.
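The relationship stated above can be written directly; the grid intensities in this sketch are illustrative assumptions, not figures from the paper:

```python
def carbon_emissions_g(energy_kwh: float, intensity_g_per_kwh: float) -> float:
    """Operational carbon (g CO2e) = energy consumed x grid carbon intensity."""
    return energy_kwh * intensity_g_per_kwh

# The same 100 kWh job emits very different amounts of carbon depending
# on when and where it runs (illustrative grid intensities):
hydro_heavy = carbon_emissions_g(100, 30)    # 3,000 g CO2e
coal_heavy = carbon_emissions_g(100, 700)    # 70,000 g CO2e
```

Because the second factor varies by time and location, an energy-efficient model scheduled on a carbon-intensive grid can still emit more than a less efficient one on a cleaner grid.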

Efficiency has unexpected effects across the model life cycle

There is a huge disparity between the compute used to develop an ML model and the compute used to run it in practice, i.e., deployment, where deployment can dominate by up to 9:1. In light of this, how and when to be efficient across the model life cycle is an open problem. For example, a common method for finding a good model architecture during the development phase is neural architecture search (NAS). NAS is not efficient, sometimes requiring thousands of GPU days. But NAS may be worth it if it finds an efficient architecture that reduces the cost of deployment, raising an important open question about how best to invest in efficiency.
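The trade-off between an expensive search and cheaper deployment can be framed as a break-even calculation; the search cost and per-inference energies below are hypothetical, chosen only to illustrate the shape of the question:

```python
def breakeven_inferences(search_cost_kwh: float,
                         kwh_per_inference_old: float,
                         kwh_per_inference_new: float) -> float:
    """Deployment inferences needed before a NAS search pays for itself
    in energy terms. All inputs here are hypothetical illustrations."""
    saving = kwh_per_inference_old - kwh_per_inference_new
    if saving <= 0:
        return float("inf")  # the search never pays off
    return search_cost_kwh / saving

# Illustrative: a 10,000 kWh search that finds an architecture saving
# 0.0005 kWh per inference breaks even after roughly 20 million inferences.
n = breakeven_inferences(10_000, 0.002, 0.0015)
```

Whether a given search is "worth it" then depends on expected deployment volume, which is rarely known at development time.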

Efficiency can also impact how people use ML systems. It is established that increasing the efficiency of a resource can lead to increased usage of that resource, known as the rebound effect. Rebounds can easily occur in ML. For example, a practitioner can make their model more computationally efficient, but this could lead them to run more experiments due to increased speed, resulting in more energy consumption and carbon emissions. Rebounds in terms of energy consumption have been seen at multiple companies using ML.
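The rebound described above can be sketched numerically; the figures below are assumptions for illustration, not data from the paper:

```python
# Illustrative rebound scenario: a 2x compute-efficiency gain halves the
# energy per experiment, but faster turnaround leads practitioners to run
# 3x as many experiments.
energy_per_experiment = 10.0   # kWh per experiment, before the gain
experiments_before = 100
experiments_after = 300        # usage grows once experiments are cheaper

energy_before = energy_per_experiment * experiments_before       # 1,000 kWh
energy_after = (energy_per_experiment / 2) * experiments_after   # 1,500 kWh
# Net effect: total energy rises 50% despite the efficiency gain.
```

Any usage growth larger than the efficiency gain produces a net increase, which is why per-model efficiency alone cannot guarantee lower total emissions.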

Efficiency doesn’t account for and can potentially worsen impacts from hardware platforms

The hardware platforms that power ML systems have broad environmental impacts, including:

  • Pollution and deforestation from mining raw materials.
  • Carbon emissions from manufacturing and transporting devices.
  • Water scarcity from cooling the devices on which ML systems are trained.
  • Pollution and health hazards from disposal of old hardware.

Efficiency has both positive and negative effects here. On the one hand, efficiency has helped limit the energy consumption of data centers and has helped slow the rate at which devices are replaced. But gains in energy efficiency are slowing in line with the slowing of Moore’s law, so it isn’t certain whether this trend will continue. Energy and compute efficiency also allow ML systems to run on edge devices like smart sensors and mobile phones. The number of such devices running ML systems is projected to grow rapidly in the coming years, and their environmental impact is dominated by their manufacture and disposal rather than their energy consumption. This makes it increasingly important to address these impacts, for which efficiency is, at best, a partial solution.
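A back-of-the-envelope comparison shows why embodied (manufacturing) carbon can dominate for edge devices. All figures below are illustrative assumptions, not measurements:

```python
def lifetime_carbon(embodied_g: float, avg_power_w: float,
                    hours: float, intensity_g_per_kwh: float):
    """Split a device's lifetime carbon into embodied and operational parts."""
    operational_g = (avg_power_w / 1000) * hours * intensity_g_per_kwh
    return embodied_g, operational_g

# Illustrative smart sensor: 50 kg CO2e embodied in manufacturing, drawing
# 2 W continuously for 3 years on a 400 gCO2e/kWh grid.
embodied, operational = lifetime_carbon(50_000, 2, 3 * 365 * 24, 400)
# operational comes to about 21 kg CO2e, so under these assumptions
# manufacturing accounts for the majority of lifetime emissions.
```

For low-power devices like this, energy efficiency only shrinks the smaller term; the embodied term is untouched by how efficiently the model runs.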

Beyond efficiency

The environmental impacts we described arise from many factors interacting with each other: what ML models are used, when and where they are used, how they are used over time, what hardware they are used on, and more. Efficiency can help address some impacts but cannot account for this complexity on its own. How can we go beyond efficiency?

One way is systems thinking. Systems thinking goes back to the 1960s and has been applied successfully in many fields, including engineering, computer science, management, and sustainability. It is a conceptual framework for understanding what happens when many things interact with each other, forming a system. Examples of such systems are cities, companies, and human bodies. The benefit of systems thinking is a shift from looking at individual causes and effects (for example, by taking the bus, I may reduce my carbon emissions) to thinking about what happens at the system level (for example, the transportation system produces carbon emissions, in which buses, cars, and people play some part). Systems thinking can help us view ML as a technology more holistically, including better understanding and mitigating the environmental impacts it creates, as well as its potential to be used in applications that help make other sectors sustainable.

Between the lines

It has been argued recently that AI is becoming an “infrastructure.” It is growing as a technology, and many areas of society are adopting AI tools. Therefore, it is essential to ensure that the technology is environmentally, economically, and socially sustainable. Doing this in practice will be extremely challenging, and it isn’t enough to make things efficient. We need a more holistic approach, like systems thinking, to mitigate negative impacts and encourage positive ones. Doing so will help us move towards aligning AI with sustainable development goals.

