🔬 Research Summary by Pengfei Li and Shaolei Ren
Pengfei Li is a Ph.D. candidate in computer science and engineering at the University of California, Riverside.
Shaolei Ren is an associate professor in electrical and computer engineering at the University of California, Riverside.
[Original paper by Pengfei Li, Jianyi Yang, Adam Wierman, and Shaolei Ren]
Overview: The exponentially growing demand for AI has created an enormous appetite for energy and, with it, a significant environmental footprint. Despite recent efforts to make AI more environmentally friendly, environmental inequity — the fact that AI’s environmental footprint is disproportionately higher in some regions than in others — has unfortunately emerged, raising social-ecological justice concerns. To achieve environmentally equitable AI, we propose equity-aware geographical load balancing (GLB) to fairly distribute AI’s environmental costs across different regions.
Introduction
The success of AI relies heavily on computationally intensive calculations to learn useful information from data during training and to provide insightful predictions during inference. As such, AI models are typically trained on large clusters of power-hungry servers, each of which may have multiple graphics processing units (GPUs), housed in warehouse-scale data centers. Consequently, AI imposes a huge hidden environmental cost on the communities and regions where AI models are trained and deployed. For example, thermal-based electricity generation produces local air pollutants, discharges pollution into water bodies, and generates solid waste (possibly including hazardous waste); elevated carbon emissions in an area may increase local ozone, particulate matter, and premature mortality; and staggering water consumption can further stress limited local freshwater resources and worsen megadroughts in regions like Arizona.
Even worse, AI’s environmental costs are often disproportionately higher in certain (sometimes marginalized) regions than others, worsening the social-ecological inequity. The AI Now Institute even compared the uneven regional distribution of AI’s environmental costs to “historical practices of settler colonialism and racial capitalism” in its 2023 Landscape report.
To support the healthy and responsible development of AI, international organizations, such as the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the Organization for Economic Cooperation and Development (OECD), have explicitly called for efforts to address AI’s environmental inequity.
Key Insights
What is the state-of-the-art?
Equity and fairness are crucial considerations for the success of AI. The existing research in this space has predominantly focused on mitigating prediction unfairness against disadvantaged individuals and/or groups in various settings. Our work on environmental equity adds a unique dimension of fairness and greatly complements this rich body of research, contributing to the collective effort to build equitable and socially responsible AI.
Various approaches have been explored to make AI more energy-efficient and sustainable, including computationally efficient training and inference, energy-efficient GPU and accelerator designs, and carbon-aware task scheduling, among others. In particular, existing data center workload scheduling studies focus on minimizing electricity costs, total latency, and/or the environmental footprint. But this does not mean all regions are treated equitably. Consider two data centers as a toy example and suppose that, for the next hour, one is twice as water-efficient as the other. Minimizing the total water footprint for the next hour routes all workloads to the more water-efficient data center. However, such aggressive “exploitation” is unfair despite reducing the overall water footprint. Instead, we may want to send two-thirds of the workloads to the more efficient data center and one-third to the other, which equalizes the two water footprints (see the sketch below). Of course, the real problem is more challenging: the environmental costs go beyond water, and we must also respect additional system constraints, such as latency requirements.
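To make the arithmetic concrete, here is a minimal sketch of the toy example in Python. The water-efficiency values and the demand are illustrative numbers we picked for this sketch, not figures from the paper.

```python
# Toy example: two data centers, one twice as water-efficient as the other.
# All numbers are illustrative.
WATER_PER_UNIT = {"dc_efficient": 0.5, "dc_inefficient": 1.0}  # liters per unit of work
DEMAND = 90  # total workload units for the next hour

def water_footprints(allocation):
    """Per-data-center water footprint for a given workload allocation."""
    return {dc: load * WATER_PER_UNIT[dc] for dc, load in allocation.items()}

# Footprint-minimizing ("exploitative") schedule: everything goes to the efficient site.
greedy = {"dc_efficient": DEMAND, "dc_inefficient": 0}

# Equity-aware schedule: the 2/3 vs. 1/3 split equalizes the two water footprints.
equitable = {"dc_efficient": 2 * DEMAND / 3, "dc_inefficient": DEMAND / 3}

for name, alloc in [("greedy", greedy), ("equitable", equitable)]:
    fp = water_footprints(alloc)
    print(name, fp, "total =", sum(fp.values()), "max =", max(fp.values()))
# greedy:    total = 45.0, max = 45.0 (the entire burden falls on one region)
# equitable: total = 60.0, max = 30.0 (higher total, but the worst-off region is better off)
```

The equitable split pays a higher total water footprint in exchange for halving the burden on the worst-off region, which is exactly the trade-off equity-aware GLB manages at scale.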
How to define environmental equity?
Our goal is not to blindly equalize AI’s regional environmental cost, which may artificially elevate the environmental footprints in otherwise advantaged regions and provide a false sense of equity. Instead, we consider minimax fairness and aim to minimize AI’s highest regional environmental cost — reducing AI’s impact on the worst-affected region. Our minimax fairness can also be easily extended to proportional equity by normalizing each region’s environmental cost by its data center’s total compute capacity, since a larger data center inevitably has a larger environmental footprint than a smaller one.
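Written out in our own illustrative notation (not the paper’s): let $x_1, \dots, x_T$ denote the GLB decisions over the scheduling horizon, $e_j(x_1,\dots,x_T)$ the resulting cumulative environmental cost imposed on region $j$, and $C_j$ the total compute capacity of the data center in region $j$. Minimax fairness and its capacity-normalized (proportional) variant are then

$$
\min_{x_1,\dots,x_T}\; \max_{j}\; e_j(x_1,\dots,x_T)
\qquad\text{or}\qquad
\min_{x_1,\dots,x_T}\; \max_{j}\; \frac{e_j(x_1,\dots,x_T)}{C_j},
$$

subject to the usual system constraints (serving all the demand, capacity limits, and latency requirements).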
How to achieve environmentally equitable AI?
AI models can be trained and deployed in different data centers, which gives us substantial flexibility to address AI’s environmental inequity by equitably distributing its regional environmental costs. For example, air pollution from freeway traffic can harm nearby communities, but rerouting traffic is difficult once the freeways are built. In contrast, we can exploit AI’s scheduling flexibility and easily redistribute AI workloads across data centers based on real-time local information, such as the current share of coal-based energy sources and the local water efficiency. By moving AI workloads from one data center to another, we also move AI’s environmental costs, making AI’s regional environmental impacts more balanced.
The key novelty is that we explicitly minimize the most significant negative environmental impacts (e.g., local impacts of water and carbon footprints) among all the data centers by optimizing which data centers we use and when. Intuitively, when a certain region already has a high environmental cost, we prioritize data centers in other regions when scheduling AI workloads. We do so by adding an equity cost (i.e., the maximum regional environmental cost) to the AI workload scheduling objective as a regularizer, as sketched below.
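As a rough illustration of this regularized objective (a sketch under our own assumptions, not the paper’s algorithm), the per-time-step scheduling problem can be posed as a small linear program: minimize the usual operating cost plus a weight `mu` times an epigraph variable `z` that upper-bounds every region’s cumulative environmental cost. The function and all numbers below are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def schedule_step(demand, energy_cost, env_per_unit, env_so_far, capacity, mu=1.0):
    """Split `demand` across data centers, trading off operating cost against
    the maximum cumulative regional environmental cost (the equity regularizer).

    Decision variables: x_1..x_n (workload per data center) and z, an epigraph
    variable with env_so_far[j] + env_per_unit[j] * x_j <= z for every j.
    Objective: sum_j energy_cost[j] * x_j + mu * z.
    """
    n = len(energy_cost)
    c = np.append(energy_cost, mu)                      # objective coefficients

    # env_so_far[j] + env_per_unit[j] * x_j <= z, rewritten as A_ub @ [x, z] <= b_ub
    A_ub = np.hstack([np.diag(env_per_unit), -np.ones((n, 1))])
    b_ub = -np.asarray(env_so_far, dtype=float)

    A_eq = np.append(np.ones(n), 0.0).reshape(1, -1)    # all demand must be served
    b_eq = [demand]

    bounds = [(0, cap) for cap in capacity] + [(0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[:n]                                    # workload assigned to each data center

# Hypothetical numbers: three data centers, one hour of demand.
x = schedule_step(demand=100,
                  energy_cost=[0.10, 0.12, 0.08],       # $ per unit of work
                  env_per_unit=[1.0, 0.5, 2.0],         # e.g., liters of water per unit of work
                  env_so_far=[40.0, 10.0, 5.0],         # cumulative regional footprints so far
                  capacity=[60, 60, 60],
                  mu=0.5)
print(np.round(x, 1))
```

With `mu = 0` this reduces to ordinary cost-minimizing GLB; a larger `mu` pushes workloads away from whichever region’s cumulative footprint is already the highest.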
Nonetheless, this is challenging in practice. AI’s environmental impacts depend on scheduling decisions over the long term, but when we dynamically schedule AI workloads in real time, we can’t possibly know all future information, such as workload demands and water and carbon efficiency. We must also maintain a certain level of AI model performance and quality. We can address these challenges by leveraging machine learning predictions of future water and carbon efficiency and workload demands, but those predictions will inevitably be noisy. We have a separate line of work on utilizing noisy machine learning predictions to improve decision quality.
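Purely as an assumption-laden illustration of this online setting (reusing the hypothetical `schedule_step` from the sketch above), a controller would re-solve the per-step problem each hour with predicted, and therefore noisy, efficiencies, while the realized footprints accumulate:

```python
import numpy as np

rng = np.random.default_rng(0)
true_env = np.array([1.0, 0.5, 2.0])   # actual per-unit water intensity, unknown in advance
env_so_far = np.zeros(3)               # cumulative regional footprints

for hour in range(24):
    demand = 80 + 40 * rng.random()                                  # hourly workload (random here)
    predicted_env = true_env * (1 + 0.2 * rng.standard_normal(3))    # noisy efficiency forecast
    x = schedule_step(demand,
                      energy_cost=[0.10, 0.12, 0.08],
                      env_per_unit=np.clip(predicted_env, 0.05, None),
                      env_so_far=env_so_far,
                      capacity=[60, 60, 60],
                      mu=0.5)
    env_so_far += true_env * x          # footprints accumulate based on the true efficiencies

print(np.round(env_so_far, 1))
```

The gap between the predicted and true efficiencies is exactly where our separate line of work on exploiting noisy machine learning predictions comes into play.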
Any price we pay for environmentally equitable AI?
It’s certainly not free for AI to be environmentally equitable, but the price we pay for environmental equity is rather small. For example, considering a set of 10 geographically distributed data centers, our trace-based simulations show that equity-aware GLB can significantly reduce AI’s regional disparity in terms of carbon and water footprints while only marginally increasing the operating cost. Also, geographical load balancing is a fairly mature technology that AI systems can easily adopt with minimal latency impact on AI inference. For AI training, the performance impact is even smaller, as training is more flexible and typically doesn’t have deadlines as strict as inference. Additionally, we don’t have to move a single AI training job back and forth between multiple data centers; we just need to balance the AI system’s overall long-term regional environmental impacts.
Between the lines
There has been a lot of research on mitigating AI’s prediction unfairness against disadvantaged individuals and/or groups in various settings. Our work on environmental equity greatly complements the existing research on AI’s algorithmic fairness, addressing a critical concern for equitable and socially responsible AI. A simple way to apply our research in practice is to add an equity cost to the objective, or to assign each region a total environmental footprint target, as part of how a company optimizes its AI workload management, whether it operates its own geographically distributed data centers or relies on public clouds.
AI’s environmental cost is real but often hidden from the public. We hope our work can make the research community and the general public aware of AI’s emerging environmental inequity. When we build sustainable AI, let’s not forget about environmental equity.