Montreal AI Ethics Institute

Democratizing AI ethics literacy


The AI Carbon Footprint and Responsibilities of AI Scientists

March 23, 2022

🔬 Research Summary by Eryn Rigley, a PhD research student at University of Southampton, specialising in the intersection of environmental and AI ethics, as well as defence & security AI ethics.

[Original paper by Guglielmo Tamburrini]


Overview: Tamburrini approaches the critical, yet underrepresented, problem of the impact of AI research on the environment. He argues that shifting the metrics by which success in AI research and development (R&D) is measured to encompass environmental impact might well provide a means of distributing responsibility for our planet’s wellbeing.


Introduction

Training a DNN-based NLP Transformer model is estimated to emit as much greenhouse gas (GHG) as five cars over their entire life cycles. Training a BERT Large NLP model produces as much GHG as a flight from San Francisco to New York. There are ways in which we are taxed or charged to neutralize the carbon footprint of our travel; but what about AI research, which clearly has a comparably significant impact? Distributed responsibility has been explored extensively with respect to the outputs of AI systems, particularly ML systems, whose often unpredictable outputs can directly impact humans. However, there are other responsibilities AI scientists should consider. In this paper, Tamburrini picks apart the responsibility of AI scientists for environmental protection.

Key Insights

The paper examines in depth the global environmental impact of training machine learning (ML) systems, specifically its carbon footprint. Tamburrini outlines the complexities of the global problem of GHG emissions in AI R&D and offers an appropriate, measured solution.

The problem

Tamburrini argues that the environmental impact of AI R&D is global. No matter your geographical distance from a particular AI system, you will be impacted by the carbon emissions that training AI systems generates. However, there are many elements to an AI system’s impact on the environment, which makes defining an AI researcher’s responsibility for reducing GHG emissions a complex problem. An interesting point Tamburrini notes is that much of the AI in use today is intangible, yet has very tangible effects on the environment. We cannot see or touch a neural network or ML algorithm, yet we live and breathe the GHG emissions that training those systems produces. As such, the environmental impact of ML systems can be a difficult problem to motivate.

Moreover, the problem of many hands arises. Because up to hundreds of people are involved in the various stages of AI development and deployment, assuming someone else is to blame can be an easy way of shirking one’s personal responsibility within a complex organization. The problem is further exacerbated by the various, sometimes disconnected and discordant, stages of an AI system’s life cycle: from conception, design, research, development, and training through deployment and reflection.

The aim of this paper is to untangle this complex web of people involved in the development of AI in order to fairly distribute the responsibility for environmental protection within AI R&D.

The solution

Currently, the metrics used to measure the success of an AI system exclude any reference to environmental impact. In a random sample of 100 relevant research papers, a resounding zero percent mentioned the carbon footprint of an AI system. Instead, metrics cover accuracy and computational resources used (without reference to the environmental impact of those resources), or measure the impact of an ML system on humans, including fairness and bias. Neither the environmental resources consumed nor the carbon footprint is considered when we measure the efficacy of AI.

This paper suggests that herein lie both the problem and the solution. Looking at the different impacts of the different stages of the R&D process can help distribute responsibility: different parties (energy suppliers, hardware builders, data scientists) play unique roles in the stages of AI R&D, so we can examine the environmental impact of each player. Rather than considering only computational efficiency in terms of outputs, we can consider the full range of influences on carbon footprint, such as the electricity supply, the hardware employed, and even the time of day chosen for training, so that each player has a recognized role and responsibility in finding a more environmentally considerate process. This, Tamburrini argues, should distribute responsibility fairly.
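The point that electricity supply, hardware, and even timing all shape the footprint can be made concrete with the kind of back-of-the-envelope estimate that carbon-tracking tools use: emissions ≈ hardware power × runtime × datacentre overhead (PUE) × grid carbon intensity. The sketch below uses hypothetical figures, not numbers from the paper:

```python
def training_co2e_kg(gpu_power_kw: float, num_gpus: int, hours: float,
                     pue: float, grid_kg_co2e_per_kwh: float) -> float:
    """Rough training-emissions estimate: energy drawn by the hardware,
    scaled by datacentre overhead (PUE), times the grid's carbon intensity."""
    energy_kwh = gpu_power_kw * num_gpus * hours * pue
    return energy_kwh * grid_kg_co2e_per_kwh

# Hypothetical run: 8 GPUs at 0.3 kW each for 100 hours, PUE of 1.5,
# on a grid emitting 0.4 kg CO2e per kWh -> 144 kg CO2e.
print(round(training_co2e_kg(0.3, 8, 100, 1.5, 0.4), 1))  # 144.0
```

Each factor maps to one of the players Tamburrini names: the grid intensity term belongs to the energy supplier (and shifts with the time of day), the power and PUE terms to hardware builders and datacentre operators, and the runtime to the data scientists choosing model size and training schedule.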

His proposed means of encouraging such responsibility is AI research competitions, which can reward a low carbon footprint in an AI system’s development as well as the accuracy of its outputs. This solution seems feasible: winning esteem and standing in the community will no doubt motivate a shift in priorities for AI scientists.

Between the lines

Tamburrini approaches the very real, yet poorly recognised, challenge of mitigating the environmental consequences of AI R&D. There is an opportunity to push this idea further. Tamburrini rightly acknowledges that GHG from AI is a global issue, but does not consider the unequal distribution of climate change effects across the globe. GHG emissions impact communities and geographical locations differently. Some countries are more negatively affected than others, already experiencing drought, flooding, and melting glaciers, and marginalized or disadvantaged groups face a changing climate more intensely than those who are privileged or who live in parts of the world less drastically affected. Tamburrini should examine the object of our responsibility in more detail: to whom or what do we owe responsibility?

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.
