Summary contributed by Camylle Lanteigne (@CamLante), who’s currently pursuing a Master’s in Public Policy at Concordia University and whose work on social robots and empathy has been featured on Vox.
Link to full paper + authors listed at the bottom.
Mini-summary: Energy use can also be exploited for nefarious purposes through sponge examples: attacks on an ML model designed to drastically increase its energy consumption during inference. Sponge examples can make an ML model’s carbon emissions skyrocket, but they can also cause more immediate harm. Increased energy consumption can significantly decrease the availability of the model, increase latency, and ultimately delay operations. More concretely, an autonomous vehicle undergoing a sponge attack may be unable to perform operations fast enough, causing the vehicle to fail to brake in time and leading to a collision. To defend against an adversary exploiting sponge examples, the authors suggest 1) a cut-off threshold, where the total amount of energy consumed for one inference cannot exceed a predetermined limit, and 2) since delays in real-time performance could have deadly consequences in mission-critical situations, designing these systems to function properly even in worst-case performance scenarios and equipping them with a fail-safe mechanism.
Full summary:
Energy use is an important yet understudied aspect of Machine Learning (ML). For one, energy consumption can help us gauge the environmental impacts of ML. In this paper, Shumailov et al. show how energy use can also be exploited for nefarious purposes through sponge examples: attacks on an ML model designed to drastically increase its energy consumption during inference. Sponge examples can, of course, make an ML model’s carbon emissions skyrocket, but they can also cause more immediate harm. Indeed, increased energy consumption can significantly decrease the availability of the model, increase latency, and ultimately delay operations. More concretely, an autonomous vehicle undergoing a sponge attack may be unable to perform operations fast enough, causing the vehicle to fail to brake in time and leading to a collision.
Shumailov et al. propose two hypotheses as to how sponge examples can be generated. For one, a sponge example can exploit how sparsely activated some hidden layers of a neural network are when the sum of inputs to a neuron is negative. By crafting inputs that activate more neurons, an attacker increases the number of operations the model performs and, with it, its energy use.
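To make this first hypothesis more concrete, here is a minimal sketch (my own illustration, not code from the paper) that measures the fraction of post-ReLU activations that are nonzero in a hypothetical toy network; a sponge input would aim to push this fraction toward one, since sparsity-aware hardware can then skip fewer operations.

```python
# A minimal sketch (not from the paper): measure how "dense" the post-ReLU
# activations are for a given input. Sponge inputs try to push this density
# toward 1 so that sparsity-aware hardware can skip fewer operations.
import torch
import torch.nn as nn

model = nn.Sequential(  # hypothetical toy model standing in for the real target
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

def activation_density(model, x):
    """Fraction of ReLU outputs that are nonzero for input x."""
    nonzero, total = 0, 0
    h = x
    with torch.no_grad():
        for layer in model:
            h = layer(h)
            if isinstance(layer, nn.ReLU):
                nonzero += (h > 0).sum().item()
                total += h.numel()
    return nonzero / total

benign = torch.randn(1, 64)                 # stand-in for a normal input
candidate_sponge = torch.rand(1, 64) * 10   # stand-in for a crafted input
print(activation_density(model, benign), activation_density(model, candidate_sponge))
```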
Secondly, sponge examples can soak up large amounts of energy by exploiting the energy-latency gap: “different inputs of the same size can cause a deep neural network (DNN) to draw very different amounts of time and energy” (Shumailov et al., 2020). The authors use the Transformer, an ML model that takes words as its data, as their example. The token input size and the token output size (the number of individual words), as well as the size of the input and output embedding spaces, can be increased by a remote attacker with no access to the model’s configuration or hardware. These increases can yield non-linear growth in energy use; in other words, energy consumption climbs much faster than linearly as token input size, token output size, or embedding space size increase.
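As a rough illustration of this energy-latency gap (again my own sketch, not the authors’ code), the snippet below times a small PyTorch transformer encoder on increasingly long inputs; because self-attention scales quadratically with sequence length, latency, used here as a crude proxy for energy, grows much faster than the input does.

```python
# A rough sketch (not the paper's code): time a small transformer encoder
# on increasingly long inputs. Self-attention is quadratic in sequence
# length, so latency (a crude proxy for energy) grows super-linearly.
import time
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=128, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)

for seq_len in (32, 128, 512):
    x = torch.randn(1, seq_len, 128)   # (batch, tokens, embedding)
    start = time.perf_counter()
    with torch.no_grad():
        encoder(x)
    print(seq_len, round(time.perf_counter() - start, 4), "s")
```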
The paper explores three threat models. First, a white box setup, where the attacker knows the model’s parameters and architecture. Second, an interactive black box setup, where the attacker does not know the parameters and architecture of the model, but can remotely measure the energy consumed and the time needed for an operation to run. Third is the clueless¹ adversary setup, where the attacker has none of the information available in the two prior setups and can only transfer sponge examples to the target model without ever having interacted with it.
In the cases of the white box and interactive black box setups, an attacker can create a sponge example attack using genetic algorithms. In this context, a genetic algorithm continually selects the top 10% of inputs with the highest energy consumption, which become the “parents” of the next “generation” of inputs. Genetic algorithms can thus help maximize the damage a sponge example attack can cause by producing inputs that consume extremely large amounts of energy.
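Here is a hedged sketch of that search loop. The `energy_cost` function is a placeholder for whatever energy or latency measurement the attacker can actually take, and the mutation scheme and population size are illustrative rather than the authors’ exact implementation.

```python
# Illustrative sketch of the genetic search described above (not the authors'
# exact implementation). `energy_cost` is a placeholder for whatever energy
# or latency measurement the attacker has access to.
import random

def energy_cost(candidate):
    # Placeholder fitness: in practice, run the input through the target
    # model and measure energy (hardware counters) or latency.
    return sum(candidate)

def mutate(candidate):
    child = list(candidate)
    child[random.randrange(len(child))] = random.random()
    return child

def evolve(population, generations=100, keep_frac=0.1):
    for _ in range(generations):
        # Keep the top 10% of inputs by energy consumed ("parents").
        population.sort(key=energy_cost, reverse=True)
        parents = population[: max(1, int(len(population) * keep_frac))]
        # Refill the population with mutated copies of the parents.
        population = parents + [
            mutate(random.choice(parents))
            for _ in range(len(population) - len(parents))
        ]
    return max(population, key=energy_cost)

population = [[random.random() for _ in range(16)] for _ in range(50)]
sponge = evolve(population)
```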
In a white box setting, an attacker can likewise launch a sponge example attack by using the L-BFGS-B algorithm to generate inputs that increase all the activation values throughout the model, forcing more operations to be performed and causing energy consumption to surge.
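As an illustration (not the paper’s code), this objective can be framed as minimizing the negative sum of activations over a bounded input, which is exactly the kind of box-constrained problem L-BFGS-B handles; the random two-layer network below is a stand-in for the real target model.

```python
# A hedged sketch (not the paper's code): use L-BFGS-B to find a bounded
# input that maximizes the total activation value of a toy ReLU network,
# by minimizing its negative. The network here is a random stand-in.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(64, 32)), rng.normal(size=(32, 16))

def negative_total_activation(x):
    h1 = np.maximum(0, x @ W1)        # ReLU layer 1
    h2 = np.maximum(0, h1 @ W2)       # ReLU layer 2
    return -(h1.sum() + h2.sum())     # maximizing activations = minimizing the negative

x0 = rng.uniform(0, 1, size=64)       # initial guess
bounds = [(0.0, 1.0)] * 64            # keep the input in a valid range
result = minimize(negative_total_activation, x0,
                  method="L-BFGS-B", bounds=bounds)
sponge_input = result.x
```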
As for the clueless adversary setup, the energy consumption of hardware (CPUs, GPUs, and ASICs) can be determined without an attacker having access to the model (through calculations or through the NVIDIA Management Library, for instance). The authors perform experiments on NLP (Natural Language Processing) tasks and Computer Vision tasks to evaluate the performance of sponge examples across models, hardware, and tasks. Shumailov et al. find that sponge examples are transferable across both hardware and models in the white box setup, in the interactive black box setup, and even in the clueless adversary setup, where performing an attack is most difficult.
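For instance, GPU power draw can be read through the NVIDIA Management Library via the pynvml bindings; the sketch below (my own, with a deliberately crude energy estimate) shows the general idea of observing energy use without any access to the model’s internals.

```python
# A minimal sketch of observing GPU power draw through NVML (via the
# pynvml bindings); combined with wall-clock time this gives a crude
# per-inference energy estimate without access to the model internals.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

def crude_energy_estimate(run_inference):
    """Very rough joules estimate: sampled power (W) x elapsed time (s).
    A real measurement would sample power continuously during the run."""
    start = time.perf_counter()
    run_inference()                                              # operation under test
    power_watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000  # milliwatts -> watts
    elapsed = time.perf_counter() - start
    return power_watts * elapsed
```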
To defend against an adversary exploiting a sponge example, the authors suggest two methods. First, a cut-off threshold, where the total amount of energy consumed for one inference cannot exceed a predetermined limit. This could prevent sponge examples from impacting the availability of the machine learning model. This defense, however, mainly applies to scenarios where battery drainage is the primary concern.
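A minimal sketch of what such a cut-off could look like, assuming a hypothetical `measure_energy` hook that reports cumulative energy use on the platform:

```python
# A hedged sketch of the proposed cut-off defense: flag an inference whose
# measured energy exceeds a preset budget. `measure_energy` is a hypothetical
# hook for whatever per-inference energy accounting the platform provides.
ENERGY_BUDGET_JOULES = 5.0  # illustrative threshold

class EnergyBudgetExceeded(Exception):
    pass

def guarded_inference(model_fn, x, measure_energy):
    energy_before = measure_energy()
    output = model_fn(x)
    if measure_energy() - energy_before > ENERGY_BUDGET_JOULES:
        # Flag (or discard) the result so a sponge input cannot silently
        # drain the battery or monopolize the accelerator.
        raise EnergyBudgetExceeded("inference exceeded the energy threshold")
    return output
```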
To address delays in real-time performance, which could have deadly consequences in autonomous vehicles or missile targeting systems, the authors believe these systems must be designed to function properly even in worst-case performance scenarios, and perhaps be equipped with a fallback mechanism for instances where these systems fail completely.
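One way to picture such a fallback (my own illustrative sketch, not a design from the paper) is a deadline-based wrapper that returns a conservative safe action whenever the model misses its worst-case latency budget:

```python
# Illustrative sketch (not from the paper) of a deadline-based fallback:
# if the model cannot answer within its worst-case latency budget, the
# system takes a simple safe action instead of waiting.
import concurrent.futures

DEADLINE_SECONDS = 0.05  # illustrative worst-case budget
pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)

def infer_with_fallback(model_fn, x, safe_action):
    future = pool.submit(model_fn, x)
    try:
        return future.result(timeout=DEADLINE_SECONDS)
    except concurrent.futures.TimeoutError:
        # The model missed its deadline (e.g., under a sponge attack), so
        # fall back to a conservative default such as braking. Note the
        # runaway inference keeps running in the background thread, which
        # is why real systems also need preemption or isolation.
        return safe_action()
```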
The paper ends on a call for more research to be done regarding the carbon emissions of machine learning at the stage of inference. Most of the research done on this topic focuses on training large neural networks, but the authors highlight that inference is done much more frequently and on a larger scale than training once a model is deployed.
¹: The paper refers to this setup as the “blind adversary setup”, but in an effort to use non-ableist language, I opted for “clueless adversary setup”.
Original paper by Shumailov, I., Zhao, Y., Bates, D., Papernot, N., Mullins, R., & Anderson, R.: http://arxiv.org/abs/2006.03463