Montreal AI Ethics Institute

Research Summary: Explaining and Harnessing Adversarial Examples

June 28, 2020

Summary contributed by Shannon Egan, Research Fellow at Building 21 and pursuing a master’s in physics at UBC.

*Author & link to original paper at the bottom.



A bemusing weakness of many supervised machine learning (ML) models, including neural networks (NNs), is adversarial examples (AEs). AEs are inputs generated by adding a small perturbation to a correctly classified input, causing the model to misclassify the resulting AE with high confidence. Goodfellow et al. propose a linear explanation of AEs, in which the vulnerability of ML models to AEs is a by-product of their linear behaviour and high-dimensional feature space. In other words, a small perturbation to an input can alter its classification because the resulting change in NN activation scales with the dimensionality of the input vector.
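To see why dimensionality matters, consider a single linear unit with weight vector w: the perturbation η = ε·sign(w) changes no input coordinate by more than ε, yet shifts the activation by ε‖w‖₁, which grows linearly with the number of inputs. The snippet below is a minimal numerical illustration of this scaling (not code from the paper); the dimensions and ε = 0.01 are arbitrary values chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (10, 1_000, 100_000):        # input dimensionality
    w = rng.normal(size=n)            # weights of one linear unit
    eta = 0.01 * np.sign(w)           # perturbation bounded by 0.01 per coordinate
    # Activation shift is w . eta = 0.01 * ||w||_1, which grows linearly in n.
    print(f"n={n:>6}  activation shift = {w @ eta:.2f}")
```

Even though no single coordinate moves by more than 0.01, the activation shift grows from under 0.1 to several hundred as the dimension increases, which is the core of the linear explanation.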

Identifying ways to handle AEs effectively is of interest for problems like image classification, where the input consists of intensity data for many thousands of pixels. A method of generating AEs called the “fast gradient sign method” badly fools a maxout network, leading to an 89.4% error rate on a perturbed MNIST test set. The authors propose an “adversarial training” scheme for NNs, in which an adversarial term is added to the loss function during training.
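For concreteness, here is a minimal sketch of the fast gradient sign method, x_adv = x + ε·sign(∇ₓ J(θ, x, y)). The PyTorch framing, the `model` and `loss_fn` names, and the assumption that inputs live in [0, 1] are illustrative choices, not details taken from the paper.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, epsilon):
    """Fast gradient sign method: x_adv = x + epsilon * sign(grad_x J(theta, x, y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)            # J(theta, x, y)
    loss.backward()                        # populates x.grad
    x_adv = x + epsilon * x.grad.sign()    # step in the direction that increases the loss
    return x_adv.clamp(0.0, 1.0).detach()  # keep inputs in the valid [0, 1] range
```

Because only the sign of the gradient is used, every input coordinate is perturbed by the same magnitude ε, which makes the attack cheap to compute with a single backward pass.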

This dramatically reduces the error rate of the same maxout network to 17.4% on AEs generated by the fast gradient sign method. The linear interpretation of adversarial examples thus suggests an approach to adversarial training that improves a model’s ability to classify AEs, and it explains properties of AE classification that the previously proposed nonlinearity and overfitting hypotheses do not.
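The adversarial objective mixes the loss on clean and FGSM-perturbed inputs, J̃(θ, x, y) = α·J(θ, x, y) + (1 − α)·J(θ, x + ε·sign(∇ₓJ), y). Below is a sketch of one training step on this objective, reusing the `fgsm_attack` helper from the previous snippet; the optimizer handling is an illustrative assumption, and ε and α should be treated as tunable hyperparameters rather than prescribed values.

```python
def adversarial_training_step(model, loss_fn, optimizer, x, y,
                              epsilon=0.25, alpha=0.5):
    """One step on the mixed objective alpha * J(x, y) + (1 - alpha) * J(x_adv, y).
    Assumes fgsm_attack from the previous sketch."""
    x_adv = fgsm_attack(model, loss_fn, x, y, epsilon)  # regenerated every step
    optimizer.zero_grad()  # clear gradients accumulated while crafting x_adv
    loss = alpha * loss_fn(model(x), y) + (1 - alpha) * loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the adversarial examples are regenerated from the current weights at every step, the model is continually trained against its own worst-case linear perturbations rather than against a fixed set of precomputed AEs.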



Original paper by Ian J. Goodfellow, Jonathan Shlens and Christian Szegedy: https://arxiv.org/abs/1412.6572

