Montreal AI Ethics Institute

Democratizing AI ethics literacy


Common but Different Futures: AI Inequity and Climate Change

January 18, 2022

🔬 Research summary by Trisha Ray, an Associate Fellow with the Centre for Security, Strategy and Technology at the Observer Research Foundation, where she works on geopolitics and emerging technologies.

[Original paper by Trisha Ray]


Overview: AI, experts say, can help “solve” climate change. At the same time, the carbon footprint of emerging technologies like AI is coming under increasing scrutiny, driven in part by pressure from climate-conscious shareholders and consumers. The “Global South” faces a dual challenge: first, the social and economic benefits of AI are accruing to a privileged few countries, and second, most of the efforts and narratives on AI and climate impact are being driven by the developed West. By not engaging with these debates early on, developing countries risk being locked into rules and terms set by a small group of powerful actors. This paper proposes, among other recommendations, reviving the principle of Common but Differentiated Responsibilities (CBDR) for the AI and climate context.


Introduction

Sustainable AI is rapidly making its way into mainstream debates on AI ethics and sustainable development. Just last month, the 193 member states of UNESCO adopted the Recommendation on the Ethics of Artificial Intelligence, which calls on actors to “reduce the environmental impact of AI systems, including but not limited to its carbon footprint.” Similarly, major technology giants like Amazon, Microsoft, Alphabet and Facebook have announced “net zero” policies and initiatives. These gentle rumblings of change are a good sign, but they only scratch the surface: global AI governance and climate change policy are both contentious, rooted in geopolitics, inequitable access to resources and competing interpretations of responsibility. Take the heated debate over a recent draft UNSC resolution to integrate climate change-related risk into conflict prevention strategies. India and Russia voted against the resolution, with the Indian representative calling it out both for the exclusionary process it would establish and for securitising climate action “to obfuscate lack of progress on critical issues under the UNFCCC process”.

“The incumbents of the digital revolution – mostly based in the US, Europe and China”, as the paper states, “have an advantage in AI R&D and deployment, in terms of data, compute infrastructure, skills, investment as well as their ability to set the terms by which other actors engage in governance and ethical debates.” The paper explores the interplay of global inequities in AI with emissions politics, zooming in specifically on geographical trends in compute demand, and builds on consultations with experts in the AI and climate space. It puts forward three recommendations: better datasets for local impact assessment, complementary standards for AI-linked emissions, and the application of an overhauled CBDR principle to the context of climate change and AI.

AI and Climate in an Unequal World

The report sets the scene with a brief recap of the history of emissions politics, describing the emergence of the principle of Common but Differentiated Responsibilities (CBDR), institutionalized in the 1997 Kyoto Protocol. Developing nations rallied behind CBDR on the grounds that developed, industrialized nations are responsible for the stock of greenhouse gases (GHGs), or historical emissions, and should therefore make financial and emissions-reduction commitments commensurate with that responsibility. However, while CBDR is enshrined in the Protocol, it is not framed in this language, due to staunch opposition from developed countries. In recent years, the idea of “just transitions” has also found purchase, including in the Paris Agreement, but it still stops short of acknowledging that the burden of risk falls disproportionately on small and developing nations in the “Global South”.

How would the concentration of AI development and capacity—technical and governance—in the Global North affect emissions, and by extension, emission politics and narratives?

Narratives around AI-led development are already marked by neocolonial patterns: the outflow of data from the developing world to a handful of tech giants in large economies, and the inflow of new and emerging technology products and services from developed countries to underdeveloped and developing ones. Sub-Saharan Africa, Latin America, the Caribbean and South and Central Asia are falling behind in AI development and use, startups, funding and skills.

Data Center Market Trends as Proxy

The paper looks at data centre market trends to identify problems in how we currently calculate the climate impact of AI. Aggregating data by region may, for instance, be useful for identifying how underserved Latin America and Sub-Saharan Africa are in terms of compute availability, but country-level insights are needed for a complete picture. The US alone accounts for a significant share of data centres worldwide, and 39.5% of the availability zones of the big four cloud service providers (CSPs): AWS, Google, IBM and Microsoft Azure. Even within the Asia Pacific region, the fastest-growing data centre market, most centres are located in developed economies like South Korea, Japan, Australia and Singapore.

In addition to granular geographic insight, the paper states that quantifying emissions also requires that data centre operators be transparent about their energy use, including their energy mix. A common theme across studies of data centre emissions is this lack of information, which is why models and estimates vary significantly. Finally, the paper questions whether the focus on efficiency in the greening of data centres is the right approach, especially when efficiency-oriented solutions are likely to lead to greater energy consumption overall (a rebound effect often described as the Jevons paradox).

An Equitable Model for Environmentally-Sustainable AI?

The paper concludes with three recommendations that could refine ongoing efforts on sustainable AI. The first is robust and granular data to aid local impact assessment and action. The second is to work toward complementary and consistent standards, with developing nations encouraged to engage early, especially to avoid being locked into new forms of emissions dumping. The final recommendation urges governments in developing and underdeveloped countries to assess their technology-led growth priorities against the climate costs of AI. “While the economic growth imperative of AI is understandably the priority, not engaging in emerging debates in climate and AI risks these narratives and soon, governance processes, being shaped by contexts and terms set by a small group of powerful actors.”

Between the lines

There is no dearth of studies on the climate impact of AI, but the space is still in its early stages, with a number of competing models for quantifying impact, most of them stemming from the US. Part of the problem lies in the lack of transparency from tech companies in the AI space about the lifecycle emissions of their operations (the complexity of their supply chains certainly plays a part, but should not serve as an excuse to dodge responsibility). Funding is also key: could governments and industry, for example, fund the creation of Centres of Excellence (CoEs) for sustainable AI, especially in countries in the “Global South”? This paper is meant to trigger debate on whether emerging narratives and processes on climate and AI are perpetuating inequities inherent to both spaces.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


  • © 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.