Research summary: Evasion Attacks Against Machine Learning at Test Time

June 8, 2020

Summary contributed by Erick Galinkin (@ErickGalinkin), Principal AI Researcher at Rapid7

(Authors of full paper & link at the bottom)


Machine learning adoption is widespread, and security applications such as spam filtering, malware detection, and intrusion detection increasingly rely on machine learning techniques. Because these environments are naturally adversarial, defenders cannot assume that the underlying data distributions are stationary. Instead, machine learning practitioners in the security domain must adopt paradigms from cryptography and security engineering to deal with these systems in adversarial settings.

Previously, approaches such as min-max formulations and Nash equilibria have been used to model attack scenarios. However, realistic constraints are far more complex than these frameworks allow, so this work instead examines how classification performance degrades under attack. That understanding helps us design algorithms that keep detecting what we want even when attackers actively try to reduce our ability to classify examples correctly. Specifically, the work considers attacks on classifiers whose decision functions are not necessarily linear or convex.

To simulate attacks, two strategies are undertaken:

  1. “Perfect Knowledge” – a conventional “white box” attack in which the attacker knows the feature space, the classifier type, the trained model itself, and the training data, and can transform attack points in the test data within a distance of dₘₐₓ.
  2. “Limited Knowledge” – in this “grey box” attack, the adversary still knows the classifier type and feature space but cannot directly compute the discriminant function g(x). Instead, they must estimate a surrogate function from data that is not in the training set but is drawn from the same underlying distribution (a minimal sketch of this surrogate step follows the list).
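
To make the limited-knowledge setting concrete, here is a minimal sketch of the surrogate step using scikit-learn. The synthetic data, dataset sizes, and hyperparameters are illustrative assumptions, not the authors' setup; the point is only that the attacker fits a classifier of the same type on their own data and then attacks its decision function in place of the true g(x).

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Stand-in for data the attacker collects themselves: drawn from the same
# distribution as the defender's training set, but disjoint from it
# (synthetic here; names, sizes, and hyperparameters are illustrative).
X_surrogate, y_surrogate = make_classification(n_samples=500, n_features=20,
                                               random_state=0)

# Fit a surrogate of the same classifier type; its decision_function plays
# the role of the surrogate discriminant that the attack will descend.
surrogate = SVC(kernel="rbf", gamma="scale").fit(X_surrogate, y_surrogate)
g_hat = surrogate.decision_function
```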

The attacker’s strategy is to minimize the discriminant function g(x), or the corresponding surrogate function in the limited knowledge case. To overcome failure cases of plain gradient descent, a density-estimation term is added that penalizes attack points falling in low-density regions of the target class. This “mimicry” component is weighted by a trade-off parameter λ: when λ is 0, no mimicry is used, and as λ increases, the attack sample is pushed to resemble the target class more closely. In the case of images, this can leave the attack sample no longer recognizable to humans as its original class.
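
Below is a minimal sketch of that gradient-descent strategy. The callbacks `grad_g` and `grad_density`, the step size, and the projection onto a Euclidean ball are illustrative assumptions rather than the authors' code; the idea is simply to descend g(x) while the λ-weighted density term pulls the sample toward the target class, staying within dₘₐₓ of the original point.

```python
import numpy as np

def evade(x0, grad_g, grad_density, lam=0.5, d_max=1.0, step=0.1, n_iter=100):
    """Sketch of the gradient-descent evasion strategy:
    minimize g(x) - lam * p_hat(x | target class) with ||x - x0|| <= d_max."""
    x = x0.copy()
    for _ in range(n_iter):
        # Descend the discriminant; when lam > 0 the "mimicry" term pushes the
        # sample toward dense regions of the target class.
        grad = grad_g(x) - lam * grad_density(x)
        x = x - step * grad
        # Project back onto the feasible ball of radius d_max around x0.
        diff = x - x0
        norm = np.linalg.norm(diff)
        if norm > d_max:
            x = x0 + diff * (d_max / norm)
    return x
```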

The first “toy” example uses MNIST, where an image that is obviously a “3” to human observers is reliably misclassified by a support vector machine as the target class “7”.
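
As a rough illustration of this setting, the sketch below trains a linear SVM on 3-vs-7 with scikit-learn and walks a “3” along the weight direction, which is the gradient of a linear discriminant g(x) = w·x + b. This is a simplification under stated assumptions: the paper also attacks nonlinear kernels and uses the mimicry term, and the loader, labels, and step size here are illustrative.

```python
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.svm import LinearSVC

# Load MNIST and keep only the digits 3 and 7 (the classes in the example).
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
mask = (y == "3") | (y == "7")
X, y = X[mask] / 255.0, (y[mask] == "7").astype(int)  # label 1 = target "7"

clf = LinearSVC(C=1.0, max_iter=5000).fit(X, y)

# Pick a "3" and push it up the discriminant toward the "7" side.
x = X[y == 0][0].copy()
w = clf.coef_.ravel()
step = 0.01
for _ in range(200):
    if clf.decision_function(x.reshape(1, -1))[0] > 0:  # now classified as "7"
        break
    x = np.clip(x + step * w / np.linalg.norm(w), 0.0, 1.0)  # keep pixels valid
```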

The authors also address the task of discriminating between malicious and benign PDF files, exploiting the ease of inserting new objects into a PDF file as a way of controlling dₘₐₓ. For the limited knowledge case, a surrogate dataset 20% of the size of the training data was used. Against SVMs with both linear and RBF kernels, perfect knowledge and limited knowledge attacks were highly successful with and without mimicry, in as few as 5 modifications. Against the neural network classifiers, attacks without mimicry were not very successful, though perfect knowledge attacks with mimicry were highly effective.
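
Because objects can only be inserted into a PDF (removing them risks breaking the file), each descent step has to be projected so count-style features never drop below their original values and the total number of additions stays within dₘₐₓ. A minimal, hedged sketch of such a projection follows; the trimming heuristic and function name are assumptions, not the authors' implementation.

```python
import numpy as np

def project_pdf_step(x, x0, d_max):
    """Keep a PDF count-feature vector feasible after a gradient step:
    counts are integers, can only grow (objects are only inserted, never
    removed), and at most d_max new objects may be added in total."""
    x = np.maximum(np.round(x), x0)   # never fall below the original counts
    added = x - x0
    excess = added.sum() - d_max
    while excess > 0:                 # simple heuristic: trim largest additions
        i = np.argmax(added)
        trim = min(added[i], excess)
        added[i] -= trim
        excess -= trim
    return x0 + added
```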

The authors suggest many avenues for further research, including using the mimicry term as a search heuristic; building small but representative sets of surrogate data; and using ensemble techniques such as bagging or random subspace methods to train several classifiers.


Original paper by Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, and Fabio Roli: https://arxiv.org/abs/1708.06131 

