Research summary: Sponge Examples: Energy-Latency Attacks on Neural Networks

August 3, 2020

Summary contributed by Camylle Lanteigne (@CamLante), who’s currently pursuing a Master’s in Public Policy at Concordia University and whose work on social robots and empathy has been featured on Vox.

Link to full paper + authors listed at the bottom.


Mini-summary: Energy use can also be turned to nefarious ends through sponge examples: attacks on an ML model that drastically increase its energy consumption during inference. Sponge examples can make an ML model’s carbon emissions skyrocket, but they can also cause more immediate harm. Increased energy consumption can significantly decrease the availability of the model, increase latency, and ultimately delay operations. More concretely, an autonomous vehicle undergoing a sponge attack may be unable to perform operations fast enough, causing the vehicle to fail to brake in time and leading to a collision. To defend against an adversary exploiting sponge examples, the authors suggest 1) a cut-off threshold, so that the total energy consumed for a single inference cannot exceed a predetermined limit, and 2) designing mission-critical real-time systems, where delays could have deadly consequences, to function properly even in worst-case performance scenarios and equipping them with a fail-safe mechanism.

Full summary:

Energy use is an important yet understudied aspect of Machine Learning (ML). For one, energy consumption can help us gauge the environmental impacts of ML. In this paper, Shumailov et al. show how energy consumption can also be exploited for nefarious purposes through sponge examples: attacks on an ML model that drastically increase its energy consumption during inference. Sponge examples can, of course, make an ML model’s carbon emissions skyrocket, but they can also cause more immediate harm. Increased energy consumption can significantly decrease the availability of the model, increase latency, and ultimately delay operations. More concretely, an autonomous vehicle undergoing a sponge attack may be unable to perform operations fast enough, causing the vehicle to fail to brake in time and leading to a collision.

Shumailov et al. propose two hypotheses as to how sponge examples can be generated. First, a sponge example can exploit how sparsely activated some hidden layers of a neural network are: when the sum of inputs to a neuron is negative, the neuron’s activation is zeroed out, and hardware or runtime optimizations can skip the corresponding computations. By crafting inputs that activate more neurons, an attacker increases the number of operations performed and, with it, the model’s energy consumption.
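
As a rough illustration of this first hypothesis, the sketch below (our own, not the authors’ code) uses PyTorch forward hooks to measure how densely the ReLU layers of a small model fire for a given input. The toy model, the `record_density` helper, and the density-as-energy-proxy framing are assumptions for illustration only.

```python
# Sketch: post-ReLU activation density as a proxy for the data-dependent work
# that sparsity-aware hardware could otherwise skip.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                      nn.Linear(256, 256), nn.ReLU(),
                      nn.Linear(256, 10))

densities = []  # fraction of non-zero activations per ReLU layer

def record_density(module, inputs, output):
    densities.append((output != 0).float().mean().item())

for layer in model:
    if isinstance(layer, nn.ReLU):
        layer.register_forward_hook(record_density)

x = torch.randn(1, 64)   # a candidate input
densities.clear()
with torch.no_grad():
    model(x)
# Higher average density => fewer skipped multiply-accumulates => more energy
# on hardware that exploits zero activations.
print(f"mean activation density: {sum(densities) / len(densities):.3f}")
```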

Second, sponge examples can soak up large amounts of energy by exploiting the energy-latency gap: “different inputs of the same size can cause a deep neural network (DNN) to draw very different amounts of time and energy” (Shumailov et al., 2020). The authors use the Transformer, an ML model that takes text as its input, as their example. The token input and output sizes (the number of individual words), as well as the size of the input and output embedding spaces, can be increased by a remote attacker with no access to the model’s configuration or hardware. These increases can yield non-linear increases in energy use; in other words, energy consumption grows much faster than linearly as the token input size, token output size, or embedding space size increase.
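
To get a feel for how cost scales with input size, here is a minimal timing sketch (ours, not from the paper) on a toy PyTorch Transformer encoder; the layer sizes and sequence lengths are arbitrary assumptions, and wall-clock latency stands in for energy.

```python
# Sketch: latency of a small Transformer encoder as the token sequence grows.
# Self-attention cost scales quadratically in sequence length, so longer
# sequences take disproportionately longer to process.
import time
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=128, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
encoder.eval()

for seq_len in (32, 128, 512):
    x = torch.randn(1, seq_len, 128)      # one "sentence" of seq_len tokens
    with torch.no_grad():
        encoder(x)                        # warm-up run
        start = time.perf_counter()
        for _ in range(10):
            encoder(x)
        elapsed = (time.perf_counter() - start) / 10
    print(f"seq_len={seq_len:4d}  mean latency={elapsed * 1e3:.1f} ms")
```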

The paper explores three threat models. First, a white box setup, where the attacker knows the model’s parameters and architecture. Second, an interactive black box setup, where the attacker does not know the parameters and architecture of the model, but can remotely measure the energy consumption and the time an operation takes to run. Third is the clueless¹ adversary setup, where the attacker has none of the information available in the two prior setups and can only transfer sponge examples to the new model without having previously interacted with it.

In the white box and interactive black box setups, an attacker can create a sponge example attack using genetic algorithms. In this context, a genetic algorithm continually selects the top 10% of inputs with the highest energy consumption, which then become the “parents” of the next “generation” of inputs. Genetic algorithms can thus help maximize the damage a sponge example attack can cause by producing inputs that consume extremely high amounts of energy.
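
A bare-bones sketch of this kind of genetic search follows; it is our simplification, not the authors’ implementation, and `measure_cost` is a hypothetical placeholder for however the attacker measures per-inference latency or energy.

```python
# Sketch: evolve inputs towards higher measured cost, keeping the most
# expensive 10% of each generation as parents for the next.
import random

def measure_cost(candidate):
    # Placeholder: a real attack would time the target model or read an
    # energy counter while it processes `candidate`.
    return sum(candidate)

def mutate(parent, rate=0.1):
    return [g + random.gauss(0, 1) if random.random() < rate else g
            for g in parent]

def evolve(pop_size=100, dim=32, generations=50):
    population = [[random.gauss(0, 1) for _ in range(dim)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by measured cost and keep the costliest 10% as parents.
        population.sort(key=measure_cost, reverse=True)
        parents = population[:pop_size // 10]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=measure_cost)

sponge_candidate = evolve()
```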

In a white box setting, an attacker can likewise launch a sponge example attack by using an L-BFGS-B algorithm to generate inputs that increase all the activation values throughout the model, forcing more operations to be undertaken and causing energy consumption to surge.
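
The following sketch approximates that idea using SciPy’s L-BFGS-B optimizer with gradients computed by PyTorch; the toy model, the box bounds, and the sum-of-absolute-activations objective are our assumptions, not the paper’s exact formulation.

```python
# Sketch: white-box activation maximisation. Maximise the total activation
# magnitude (i.e. minimise its negative) over a box-constrained input.
import numpy as np
import torch
import torch.nn as nn
from scipy.optimize import minimize

model = nn.Sequential(nn.Linear(32, 128), nn.ReLU(),
                      nn.Linear(128, 128), nn.ReLU(),
                      nn.Linear(128, 10))
model.eval()

def neg_activation_sum(x_flat):
    x = torch.tensor(x_flat, dtype=torch.float32, requires_grad=True)
    total = torch.zeros(())
    h = x
    for layer in model:
        h = layer(h)
        total = total + h.abs().sum()   # accumulate activation magnitude
    (-total).backward()                 # gradient of the objective w.r.t. x
    return float(-total.item()), x.grad.numpy().astype(np.float64)

x0 = np.zeros(32)
result = minimize(neg_activation_sum, x0, jac=True, method="L-BFGS-B",
                  bounds=[(-1.0, 1.0)] * 32)  # keep the input in a valid range
sponge_input = result.x
```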

As for the clueless adversary setup, the energy consumption of hardware (CPUs, GPUs, and ASICs) can be determined without an attacker having access to the model (through calculations or through the NVIDIA Management Library, for instance). The authors perform experiments on NLP (Natural Language Processing) tasks and Computer Vision tasks to evaluate the performance of sponge examples across models, hardware, and tasks. Shumailov et al. find that sponge examples are transferable across both hardware and models in the white box setup, in the interactive black box setup, and even in the clueless adversary setup, where performing an attack is most difficult. 
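
For instance, on NVIDIA GPUs, board power can be sampled through the NVIDIA Management Library (here via the `pynvml` bindings); the crude average-power-times-time estimate below is our own sketch, not the authors’ measurement setup.

```python
# Sketch: estimate the energy drawn by one inference call by sampling GPU
# power before and after it, without any access to the model internals.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

def estimate_energy_joules(run_inference):
    power_before = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # mW -> W
    start = time.perf_counter()
    run_inference()                       # the black-box call being measured
    elapsed = time.perf_counter() - start
    power_after = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0
    # Crude estimate: average of the two power samples times elapsed time.
    return 0.5 * (power_before + power_after) * elapsed

# Example: joules = estimate_energy_joules(lambda: model(sponge_input))
# Call pynvml.nvmlShutdown() once measurement is finished.
```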

To defend against an adversary exploiting sponge examples, the authors suggest two methods. The first is a cut-off threshold: the total amount of energy consumed for a single inference cannot exceed a predetermined limit. This could prevent sponge examples from impacting the availability of the machine learning model, though it mainly applies to scenarios where battery drainage is the primary concern.
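
A minimal sketch of such a cut-off is shown below; since per-request energy metering is often unavailable, it uses wall-clock time as a crude stand-in for energy, and the budget value is an arbitrary assumption rather than a figure from the paper.

```python
# Sketch: cap the work any single inference may consume, here proxied by
# wall-clock time rather than metered energy.
import time

TIME_BUDGET_S = 0.05   # assumed per-inference budget, tuned on normal inputs

class InferenceBudgetExceeded(RuntimeError):
    pass

def guarded_inference(run_inference):
    start = time.perf_counter()
    result = run_inference()
    if time.perf_counter() - start > TIME_BUDGET_S:
        # Flag or drop suspiciously expensive requests rather than letting a
        # sponge input quietly drain the battery or starve other requests.
        raise InferenceBudgetExceeded("per-inference budget exceeded")
    return result
```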

To address delays in real-time performance, which could have deadly consequences in autonomous vehicles or missile targeting systems, the authors believe these systems must be designed to function properly even in worst-case performance scenarios, and perhaps be equipped with a fallback mechanism for instances where these systems fail completely.
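
One simple way to sketch such a fail-safe (our illustration, not the paper’s design) is a hard deadline with a conservative fallback action:

```python
# Sketch: run the model against a hard deadline and fall back to a simple,
# predictable policy if it misses it.
from concurrent.futures import ThreadPoolExecutor, TimeoutError

DEADLINE_S = 0.1   # assumed worst-case latency the surrounding system tolerates
executor = ThreadPoolExecutor(max_workers=1)

def plan_with_fallback(run_model, conservative_action):
    future = executor.submit(run_model)
    try:
        return future.result(timeout=DEADLINE_S)
    except TimeoutError:
        # The model blew its deadline (possibly on a sponge input): take the
        # conservative action rather than waiting for its answer.
        return conservative_action
```
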
The paper ends with a call for more research on the carbon emissions of machine learning at the inference stage. Most research on this topic focuses on training large neural networks, but the authors highlight that once a model is deployed, inference happens much more frequently and at a larger scale than training.


¹: The paper refers to this setup as the “blind adversary setup”, but in an effort to use non-ableist language, I opted for “clueless adversary setup”.

Original paper by Shumailov, I., Zhao, Y., Bates, D., Papernot, N., Mullins, R., & Anderson, R.: http://arxiv.org/abs/2006.03463
