Montreal AI Ethics Institute

Common but Different Futures: AI Inequity and Climate Change

January 18, 2022

🔬 Research summary by Trisha Ray, an Associate Fellow with the Centre for Security, Strategy and Technology at the Observer Research Foundation, where she works on geopolitics and emerging technologies.

[Original paper by Trisha Ray]


Overview: AI, experts say, can help “solve” climate change. At the same time, the carbon footprint of emerging technologies like AI is coming under increasing scrutiny, especially due to pressure from climate-conscious shareholders and consumers. The “Global South” faces a dual challenge: first, the social and economic benefits of AI are accruing to a privileged few countries; second, most of the efforts and narratives on AI and climate impact are being driven by the developed West, meaning that countries which do not engage with these debates early on risk being locked into rules and terms set by a small group of powerful actors. This paper proposes, among other recommendations, a revival of the CBDR principle for the AI and climate context.


Introduction

Sustainable AI is rapidly making its way into mainstream debates on AI ethics and sustainable development. Just last month, the 193 members of UNESCO adopted the Recommendation on the Ethics of Artificial Intelligence, which called on actors to “reduce the environmental impact of AI systems, including but not limited to its carbon footprint.” Similarly, major technology giants like Amazon, Microsoft, Alphabet and Facebook have announced “net zero” policies and initiatives. These gentle rumblings of change are a good sign, but they only scratch the surface: both global AI governance and climate change policy are contentious, being rooted in geopolitics, inequitable access to resources and competing interpretations of responsibility. Take the heated debate over a recent draft UNSC Resolution to integrate climate change-related risk into conflict prevention strategies. India and Russia voted against the resolution, with the Indian representative calling out the resolution both for being exclusionary in the process it would establish and for securitising climate action “to obfuscate lack of progress on critical issues under the UNFCCC process”.

“The incumbents of the digital revolution – mostly based in the US, Europe and China”, as the paper states, “have an advantage in AI R&D and deployment, in terms of data, compute infrastructure, skills, investment as well as their ability to set the terms by which other actors engage in governance and ethical debates.” The paper explores the interplay of global inequities in AI with emissions politics, zooming in specifically on geographical trends in compute demand. The paper also builds on consultations with experts in the AI and climate space. It puts forward three recommendations: better datasets for local impact assessment, establishing complementary standards for AI-linked emissions, and the application of an overhauled CBDR in the context of climate change and AI.

AI and Climate in an Unequal World

The report sets the scene with a brief recap of the history of emissions politics, describing the emergence of the principle of Common but Differentiated Responsibilities (CBDR), institutionalized in the 1997 Kyoto Protocol. Developing nations rallied behind CBDR based on the belief that developed, industrialized nations are responsible for the stock of GHGs, or historical emissions, and should therefore make financial and emissions reduction commitments based on this ground reality. However, while the principle is enshrined in the Protocol, it is not framed in this language, owing to staunch opposition from developed countries. In recent years, the idea of “just transitions” has also found purchase, including in the Paris Agreement, but it still stops short of acknowledging how the burden of risk falls disproportionately on small and developing nations in the “Global South”.

How would the concentration of AI development and capacity—technical and governance—in the Global North affect emissions, and by extension, emission politics and narratives?

Narratives around AI-led development are already marked by neocolonial patterns: the outflow of data from the developing world to a handful of tech giants in large economies, and the inflow of new and emerging technology products and services from developed countries to underdeveloped and developing ones. Sub-Saharan Africa, Latin America, the Caribbean and South and Central Asia are falling behind in AI development and use, startups, funding and skills.

Data Center Market Trends as Proxy

The paper looks at data centre market trends to identify problems in how we currently calculate the climate impact of AI. Aggregating data based on region may, for instance, be useful to identify how underserved Latin America and Sub-Saharan Africa are in terms of compute availability, but country-level insights are needed for a complete picture. The US alone accounts for a significant portion of data centres worldwide, and 39.5% of availability zones of the big four CSPs (AWS, Google, IBM and Azure). Even within the Asia Pacific region, which is the fastest growing data centre market, most of these centres are located in developed economies like South Korea, Japan, Australia and Singapore.

In addition to granular geographic insight, the paper states that quantifying emissions also requires that data centre operators be transparent about their energy use, including energy mix. A common theme in many studies on data centre emissions is the lack of information, meaning that models and estimates vary significantly. Finally, the paper questions whether the focus on efficiency in the greening of data centres is the right approach, especially when efficiency-oriented solutions are likely to lead to greater energy consumption overall.

An Equitable Model for Environmentally-Sustainable AI?

The paper concludes with three recommendations that could serve to refine ongoing efforts on sustainable AI. The first is robust and granular data to aid local impact assessment and action. The second is to work toward complementary and consistent standards, encouraging developing nations to engage early on, especially to avoid trapping themselves into new forms of emissions dumping. The final recommendation urges governments in developing and underdeveloped countries to assess their technology-led growth priorities in the context of the climate costs of AI. “While the economic growth imperative of AI is understandably the priority, not engaging in emerging debates in climate and AI risks these narratives and soon, governance processes, being shaped by contexts and terms set by a small group of powerful actors.”

Between the lines

There is no dearth of studies on the climate impact of AI, but this space is still in the early stages, with a number of different models for quantifying impact, most of them stemming from the US. Part of the problem lies in the lack of transparency from tech companies in the AI space on the lifecycle emissions of their operations (the complexity of their supply chains certainly plays a part, but should not serve as a reason to dodge responsibility). Funding is also key: could, for example, governments and industry fund the creation of Centres of Excellence (CoE) for sustainable AI, especially in countries in the “Global South”? This paper is meant to trigger debate on whether emerging narratives and processes on climate and AI are perpetuating inequities inherent to both spaces.


