Montreal AI Ethics Institute

Towards Environmentally Equitable AI via Geographical Load Balancing

August 2, 2023

🔬 Research Summary by Pengfei Li and Shaolei Ren

Pengfei Li is a Ph.D. candidate in computer science and engineering at the University of California, Riverside.

Shaolei Ren is an associate professor in electrical and computer engineering at the University of California, Riverside.

[Original paper by Pengfei Li, Jianyi Yang, Adam Wierman, and Shaolei Ren]


Overview: The exponentially growing demand for AI has created an enormous appetite for energy, with a correspondingly negative environmental impact. Despite recent efforts to make AI more environmentally friendly, environmental inequity — the fact that AI’s environmental footprint is disproportionately higher in certain regions than in others — has unfortunately emerged, raising social-ecological justice concerns. To achieve environmentally equitable AI, we propose equity-aware geographical load balancing (GLB) to ensure a fair distribution of AI’s environmental costs across regions.


Introduction

The success of AI relies heavily on computationally intensive calculations to learn useful information from data during training and provide insightful predictions during inference. As such, AI models are typically trained on large clusters of power-hungry servers that may each have multiple graphics processing units (GPUs) and are housed in warehouse-scale data centers. Consequently, AI imposes a huge hidden environmental cost on the communities and regions where AI models are trained and deployed. For example, thermal-based electricity generation produces local air pollutants, discharges pollution into water bodies, and generates solid wastes (possibly including hazardous wastes); elevated carbon emissions in an area may increase local ozone, particulate matter, and premature mortality; and staggering water consumption can further stress limited local freshwater resources and worsen megadroughts in regions like Arizona.

Even worse, AI’s environmental costs are often disproportionately higher in certain (sometimes marginalized) regions than others, worsening the social-ecological inequity. The AI Now Institute even compared the uneven regional distribution of AI’s environmental costs to “historical practices of settler colonialism and racial capitalism” in its 2023 Landscape report.

To support the healthy and responsible development of AI, international organizations, such as the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the Organization for Economic Cooperation and Development (OECD), have explicitly called for efforts to address AI’s environmental inequity. 

Key Insights

What is the state-of-the-art?

Equity and fairness are crucial considerations for the success of AI. Existing research in this space has predominantly focused on mitigating prediction unfairness against disadvantaged individuals and/or groups in various settings. Our work on environmental equity adds a unique dimension of fairness and greatly complements this rich body of research, contributing to holistically building equitable and socially responsible AI.

Various approaches have been explored to make AI more energy-efficient and sustainable, including computationally efficient training and inference, energy-efficient GPU and accelerator designs, and carbon-aware task scheduling, among others. In particular, existing data center workload scheduling studies focus on minimizing electricity costs, total latency, and/or the environmental footprint. But this does not mean all regions are treated equitably. As a toy example, consider two data centers and suppose that, for the next hour, one is twice as water-efficient as the other. To minimize the total water footprint for that hour, all workloads will be routed to the more water-efficient data center. However, such aggressive “exploitation” is unfair despite reducing the overall water footprint. Instead, we may want to schedule two-thirds of the workloads to the more efficient data center and one-third to the other for a more equitable distribution. Of course, the real problem is more challenging: environmental costs involve more than water footprints, and we must account for additional system constraints, such as latency requirements.
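The two-thirds/one-third split in the toy example above can be verified with a quick calculation. Assuming illustrative figures (not from the paper): data center A uses 1.0 L of water per unit of workload and data center B uses 0.5 L, so B is twice as water-efficient. Minimizing the total footprint routes everything to B; equalizing the two regional footprints instead yields the split below:

```python
# Toy example: two data centers, B twice as water-efficient as A.
# (Water-usage figures are illustrative, not from the paper.)
wue_a = 1.0   # liters of water per unit of workload at data center A
wue_b = 0.5   # liters per unit at data center B (twice as efficient)
total = 1.0   # total workload to distribute

# Minimizing the TOTAL footprint routes everything to B:
# footprints become (A: 0, B: 0.5), so B's region bears the entire cost.

# Equalizing regional footprints instead: x * wue_a == (total - x) * wue_b
x_a = total * wue_b / (wue_a + wue_b)  # share routed to A
x_b = total - x_a                      # share routed to B

print(x_a, x_b)                  # 1/3 of workloads to A, 2/3 to B
print(x_a * wue_a, x_b * wue_b)  # equal regional footprints: 1/3 L each
```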

How to define environmental equity?

Our goal is not to blindly equalize AI’s regional environmental costs, which may artificially elevate the environmental footprints in otherwise advantaged regions and provide a false sense of equity. Instead, we consider minimax fairness and aim to minimize AI’s highest regional environmental cost, thereby reducing AI’s impact on the worst-affected region. Our minimax fairness can also be easily extended to proportional equity by normalizing each regional environmental cost by the data center’s total compute capacity, since a larger data center inevitably has a larger environmental impact than a smaller one.
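The minimax criterion described above can be sketched as a small objective function, with the optional capacity normalization giving proportional equity. The function name and all numbers here are illustrative, not the paper’s implementation:

```python
def equity_objective(regional_costs, capacities=None):
    """Minimax equity metric: the highest regional environmental cost.

    With capacities given, each region's cost is normalized by its data
    center's compute capacity (proportional equity), so a large data
    center is not flagged as inequitable merely for being large.
    """
    if capacities is None:
        return max(regional_costs)
    return max(cost / cap for cost, cap in zip(regional_costs, capacities))

# Region 2 has the largest raw footprint, but after normalizing by
# capacity, region 0 is the worst affected per unit of compute.
print(equity_objective([3.0, 2.0, 6.0]))                   # 6.0
print(equity_objective([3.0, 2.0, 6.0], [1.0, 2.0, 4.0]))  # 3.0
```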

How to achieve environmentally equitable AI?

AI models can be trained and deployed in different data centers, which gives us substantial flexibility to address AI’s environmental inequity by distributing its regional environmental costs equitably. Compare this to freeway traffic: its air pollution can harm nearby communities, but rerouting traffic is difficult once the freeways are built. In contrast, we can exploit AI’s scheduling flexibility and easily redistribute AI workloads across data centers based on real-time local information, such as the current share of coal-based energy sources and the local water efficiency. By moving AI workloads from one data center to another, we also move AI’s environmental costs, making AI’s regional environmental impacts more balanced.

The key novelty is that we explicitly minimize the most significant negative environmental impacts (e.g., local impacts of water and carbon footprints) among all the data centers by optimizing which data centers we use and when. Intuitively, when a certain region already has a high environmental cost, we’ll prioritize data centers in other regions when scheduling AI workloads. We do so by adding an equity cost (i.e., maximum regional environmental cost) to the scheduling objective for AI workloads as a regularizer.
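The equity-regularized objective described above can be sketched as follows. This is a simplified single-round illustration with hypothetical costs, not the paper’s algorithm: the scheduler trades off operating cost against the maximum regional environmental cost, weighted by lam:

```python
def schedule_cost(alloc, op_cost, env_cost, lam):
    """Operating cost plus an equity regularizer (the maximum regional
    environmental cost), as sketched above. A single-round toy model.

    alloc[i]    : workload routed to data center i
    op_cost[i]  : operating cost per unit of workload at i
    env_cost[i] : environmental cost per unit of workload at i
    lam         : weight on the equity cost
    """
    operating = sum(a * c for a, c in zip(alloc, op_cost))
    equity = max(a * e for a, e in zip(alloc, env_cost))
    return operating + lam * equity

# Hypothetical numbers: DC 0 is cheaper to operate but environmentally
# costlier per unit of workload than DC 1.
op_cost, env_cost, total = [1.0, 1.2], [1.0, 0.5], 10.0

# Search all integer splits of the workload for the cheapest schedule.
for lam in (0.0, 5.0):
    best = min(
        ((x, total - x) for x in range(11)),
        key=lambda a: schedule_cost(a, op_cost, env_cost, lam),
    )
    print(lam, best)
```

With lam = 0 the scheduler sends all 10 units to the cheaper data center; with lam = 5 it shifts most of the load to the environmentally efficient one, capping the worst region’s footprint at a modest increase in operating cost.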

Nonetheless, this is challenging in practice. Naturally, AI’s environmental impacts are shaped by scheduling decisions over the long term, but when we dynamically schedule AI workloads in real time, we cannot know all future information, such as workload demands and water and carbon efficiency. We must also maintain a certain level of AI model performance and quality. To address these challenges, we can leverage machine learning predictions to estimate future water and carbon efficiency and workload demands, though the estimates will likely be noisy. We have a separate line of work on utilizing noisy machine learning predictions to improve decision quality.

Any price we pay for environmentally equitable AI?

It’s certainly not free for AI to be environmentally equitable, but the price we pay for environmental equity is rather small. For example, considering a set of 10 geographically distributed data centers, our trace-based simulations show that equity-aware GLB can significantly reduce AI’s regional disparity in terms of carbon and water footprints while only marginally increasing the operating cost. Also, geographical load balancing is a fairly mature technology that AI systems can adopt with minimal latency impact on AI inference. For AI training, the performance impact is even smaller, as training is more flexible and typically does not have deadlines as strict as inference. Additionally, we don’t have to move a single AI training job back and forth between multiple data centers; we just need to balance the AI system’s overall long-term regional environmental impacts.

Between the lines

There has been a lot of research on mitigating AI’s prediction unfairness against disadvantaged individuals and/or groups in various settings. Our work on environmental equity greatly complements this existing research on AI’s algorithmic fairness, addressing a critical concern for equitable and socially responsible AI. A simple way to apply our research in practice is to add an equity cost to the scheduling objective, or to assign a total environmental footprint target to each region, as part of how a company optimizes its AI workload management, whether the company operates its own geographically distributed data centers or relies on public clouds.

AI’s environmental cost is real but often hidden from the public. We hope our work can make the research community and the general public aware of AI’s emerging environmental inequity. When we build sustainable AI, let’s not forget about environmental equity.


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.