The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks (Research Summary)

November 30, 2020

Summary contributed by our researcher Erick Galinkin (@ErickGalinkin), who’s also Principal AI Researcher at Rapid7.

*Link to original paper + authors at the bottom.


Overview: Neural networks have shown an amazing ability to learn a variety of tasks, and this sometimes leads to unintended memorization. This paper explores how generative adversarial networks may be used to recover some of these memorized examples.


Model inversion attacks are a class of attacks that abuse access to a model by attempting to infer information about its training data. Effective model inversion attacks have largely been limited to extremely simple models, such as linear regression and logistic regression, and have shown little promise against deep neural networks. However, generative adversarial networks (GANs) provide the ability to approximate these training sets.
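To make the baseline concrete, here is a minimal PyTorch-style sketch of classic input-space model inversion (the function name, image shape, and hyperparameters are illustrative, not from the paper): gradient-ascend an input until the target model is confident in the chosen class. Against deep networks this tends to yield noisy, non-semantic images, which is the gap the GAN-based attack addresses.

```python
import torch

def naive_model_inversion(target_model, target_class,
                          shape=(1, 3, 64, 64), steps=1000, lr=0.01):
    # Illustrative sketch, not the paper's method: start from a blank image
    # and gradient-ascend it so the target model assigns high confidence
    # to the chosen class.
    x = torch.zeros(shape, requires_grad=True)
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = target_model(x)
        # Loss: negative log-probability of the target class.
        loss = -torch.log_softmax(logits, dim=1)[0, target_class]
        loss.backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(0, 1)  # keep pixels in a valid range
    return x.detach()
```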

Using techniques similar to image inpainting for obscured or damaged images, the GAN creates semantically plausible pixels for the sensitive features inferred from the training data. A Wasserstein GAN is used to set up a min-max problem as the loss function, and some auxiliary knowledge about the private images (for example, a blurred or masked version) is provided to the attacker as an additional input to the generator. The generator then passes the recovered images to both the target network and a discriminator, and the loss from both of these inferences is combined to optimize the recovery, as sketched below.
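Conceptually, the recovery stage can be read as a search over the generator's latent space that balances the two losses just described. A minimal sketch, assuming a pre-trained generator G, discriminator D, and white-box access to the target classifier (latent_dim, lam, and the other hyperparameters are illustrative, not the paper's):

```python
import torch

def generative_inversion(G, D, target_model, target_class,
                         latent_dim=100, steps=1500, lr=0.02, lam=100.0):
    # Optimize a latent code z so that G(z) (i) looks realistic to the
    # pre-trained discriminator D (the "prior" loss) and (ii) is classified
    # as the target identity by the target network (the "identity" loss).
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.SGD([z], lr=lr, momentum=0.9)
    for _ in range(steps):
        opt.zero_grad()
        x = G(z)
        prior_loss = -D(x).mean()  # WGAN critic score: higher means more real
        logits = target_model(x)
        id_loss = -torch.log_softmax(logits, dim=1)[0, target_class]
        (prior_loss + lam * id_loss).backward()
        opt.step()
    return G(z).detach()  # candidate reconstruction of a private image
```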

Using facial recognition classifiers as a case study, Zhang et al. find that generative model inversion is significantly more effective than existing model inversion methods. Notably, more powerful models, with more layers and parameters, are more susceptible to the attack.

Zhang et al. also find that pre-training the GAN on auxiliary data from the training distribution significantly improves recovery of private data. However, even pre-training on similar data from a different distribution (for instance, pre-training on the PubFig83 dataset and attacking a model trained on the CelebA dataset) still outperforms existing model inversion attacks by a large margin. Some image pre-processing can further improve the accuracy of the GAN in generating target data.

Finally, Zhang et al. investigate the implications of differential privacy for recovering images. They note that differentially private facial recognition models with acceptable accuracy are very difficult to produce in the first place, due to the complexity of the task. Thus, using MNIST as a reference dataset, they find that generative model inversion can expose private information from differentially private models even under strong privacy guarantees, and that the strictness of the guarantee does not affect the ability to recover data. They suggest that this is likely because “DP, in its canonical form, only hides the presence of a single instance in the training set; it does not explicitly aim to protect attribute privacy.”
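For context, the canonical (ε, δ)-differential-privacy guarantee only bounds how much any single record can shift the model's output distribution: for any two datasets D and D′ differing in one record, and any set of outcomes S,

```latex
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta
```

A guarantee of this form hides whether any one image was in the training set, but a feature shared across an entire class (such as what a given digit or face looks like) is precisely what the attack recovers, so it falls outside the guarantee.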


Original paper by Yuheng Zhang, Ruoxi Jia, Hengzhi Pei, Wenxiao Wang, Dawn Song: https://arxiv.org/abs/1911.07135

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
