Summary contributed by Erick Galinkin (@ErickGalinkin), Principal AI Researcher at Rapid7
(Authors of full paper & link at the bottom)
Machine learning adoption is widespread, and in the security field, applications such as spam filtering, malware detection, and intrusion detection rely increasingly on machine learning techniques. Because these environments are naturally adversarial, defenders cannot assume that the underlying data distributions are stationary. Instead, machine learning practitioners in the security domain must adopt paradigms from cryptography and security engineering to reason about these systems in adversarial settings.
Previous work has modeled attack scenarios with min-max and Nash equilibrium formulations. However, realistic constraints are far more complex than these frameworks allow, so this work instead shows practitioners how classification performance degrades under attack. That understanding lets us design better detection algorithms for environments where attackers actively try to reduce our ability to classify examples correctly. Specifically, this work considers attacks on classifiers whose discriminant functions are not necessarily linear or convex.
To simulate attacks, two attack scenarios are considered:
- “Perfect Knowledge” – this is a conventional “white box” attack in which the attacker has perfect knowledge of the feature space, the classifier type, the trained model itself, and the training data, and can transform attack points in the test data within a distance of dₘₐₓ.
- “Limited Knowledge” – In this “grey box” attack, the adversary still knows the classifier type and the feature space but cannot directly compute the discriminant function g(x). Instead, they must learn a surrogate for g(x) from data that is not in the training set but is drawn from the same underlying distribution (see the sketch after this list).
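A minimal sketch of the limited knowledge setup, assuming the surrogate data is already available; the function name, kernel choice, and use of scikit-learn's SVC are illustrative assumptions, not the authors' exact experimental setup:

```python
# Hedged sketch: training a surrogate classifier for the limited-knowledge
# ("grey box") attack. surrogate_X / surrogate_y stand in for data drawn from
# the same distribution as the training set but disjoint from it.
from sklearn.svm import SVC

def fit_surrogate(surrogate_X, surrogate_y):
    """Fit a stand-in classifier whose decision function approximates the
    unknown discriminant g(x) of the deployed model."""
    surrogate = SVC(kernel="rbf", gamma="scale")  # assumed kernel choice
    surrogate.fit(surrogate_X, surrogate_y)
    return surrogate

# The attacker then descends surrogate.decision_function instead of the true
# g(x), hoping the resulting attack points transfer to the deployed classifier.
```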
The attacker’s strategy is to minimize the discriminant function g(x) (or its surrogate in the limited knowledge case) subject to the dₘₐₓ constraint. To overcome failure cases of plain gradient descent, such as attack points getting stuck in flat, low-density regions of the feature space, a kernel density estimator is added that penalizes candidate attack points lying in low-density regions of the target class. This component is known as “mimicry” and is parametrized by a trade-off parameter λ. When λ is 0, no mimicry is used; as λ increases, the attack sample becomes more similar to the target class, and in the case of images it may no longer be recognizable as its original class to a human observer.
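Below is a minimal sketch of this gradient-descent evasion attack with the mimicry term. It assumes the attacker can evaluate the discriminant g(x) and its gradient grad_g(x) (the paper derives analytic gradients for SVMs and neural networks); the step size, iteration count, Euclidean dₘₐₓ constraint, and kernel bandwidth are illustrative assumptions:

```python
import numpy as np

def kde_term_and_grad(x, target_X, h=1.0):
    """Unnormalized RBF kernel density estimate over target-class samples
    and its gradient with respect to x."""
    diffs = x - target_X                                   # shape (n, d)
    w = np.exp(-np.sum(diffs ** 2, axis=1) / (2 * h ** 2))
    density = w.mean()
    grad = -(w[:, None] * diffs).mean(axis=0) / h ** 2
    return density, grad

def evade(x0, g, grad_g, target_X, lam=10.0, d_max=5.0, step=0.1, iters=200):
    """Minimize F(x) = g(x) - lam * density(x) subject to ||x - x0|| <= d_max."""
    x = x0.astype(float).copy()
    for _ in range(iters):
        _, d_density = kde_term_and_grad(x, target_X)
        # descend the discriminant while climbing the target-class density
        x = x - step * (grad_g(x) - lam * d_density)
        # project back onto the d_max ball around the original attack point
        delta = x - x0
        norm = np.linalg.norm(delta)
        if norm > d_max:
            x = x0 + delta * (d_max / norm)
    return x
```

Setting lam=0 recovers the plain gradient-descent attack; larger values trade discriminant reduction for similarity to the target class.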
The first “toy” example uses MNIST handwritten digits: an image that is obviously a “3” to human observers is modified so that a support vector machine reliably misclassifies it as the target class “7”.
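An illustrative reproduction of that setup, reusing the evade() sketch above. It substitutes scikit-learn's 8x8 digits dataset for MNIST, and the labeling, kernel parameters, and attack hyperparameters are assumptions rather than the paper's exact configuration:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.svm import SVC

digits = load_digits()
mask = np.isin(digits.target, (3, 7))
X = digits.data[mask] / 16.0                    # scale pixels to [0, 1]
y = np.where(digits.target[mask] == 3, 1, -1)   # attack class "3" = +1, target "7" = -1

gamma = 0.05
clf = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X, y)

def g(x):
    # discriminant: positive -> classified as "3", negative -> classified as "7"
    return clf.decision_function(x.reshape(1, -1))[0]

def grad_g(x):
    # analytic gradient of the RBF-SVM decision function
    sv, coef = clf.support_vectors_, clf.dual_coef_[0]
    k = np.exp(-gamma * np.sum((x - sv) ** 2, axis=1))
    return -2 * gamma * np.sum((coef * k)[:, None] * (x - sv), axis=0)

x0 = X[y == 1][0]                 # a genuine "3"
sevens = X[y == -1]               # target-class samples for the mimicry term
x_adv = evade(x0, g, grad_g, sevens, lam=10.0, d_max=5.0)
print(g(x0), g(x_adv))            # discriminant value before vs. after the attack
```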
The task of discriminating between malicious and benign PDF files was also addressed, exploiting the ease of inserting new objects into a PDF file as a way of controlling dₘₐₓ (objects can be added but not easily removed). For the limited knowledge case, a surrogate dataset 20% of the size of the training data was used. Against SVMs with linear and RBF kernels, perfect knowledge and limited knowledge attacks were highly successful with and without mimicry, in as few as 5 modifications. Against the neural network classifiers, attacks without mimicry were not very successful, though perfect knowledge attacks with mimicry were highly successful.
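A hedged sketch of how the PDF setting constrains the attack: since objects can only be added, each count-valued feature may only increase, and dₘₐₓ caps the total number of insertions. The function name and the simple rescaling step (a heuristic rather than an exact projection) are assumptions for illustration:

```python
import numpy as np

def project_pdf_constraints(x, x0, d_max):
    """Clip a candidate attack point back into the feasible set for the
    insertion-only PDF attack."""
    x = np.maximum(x, x0)              # features can only grow (no removal)
    added = x - x0
    total = added.sum()
    if total > d_max:                  # too many insertions: scale them back
        x = x0 + added * (d_max / total)
    return x
```

In the evade() sketch above, this projection would replace the Euclidean-ball projection when attacking the PDF classifiers.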
The authors suggest many avenues for further research, including using the mimicry term as a search heuristic; building small but representative sets of surrogate data; and using ensemble techniques such as bagging or random subspace methods to train several classifiers.
Original paper by Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, and Fabio Roli: https://arxiv.org/abs/1708.06131