Montreal AI Ethics Institute

Democratizing AI ethics literacy


Research summary: Appendix C: Model Benefit-Risk Analysis

June 17, 2020

Summary contributed by Victoria Heath (@victoria_heath7), Communications Manager at Creative Commons.

*Author & link to original paper at the bottom


Data transparency is a key goal of the open data movement, and as different federal and municipal governments create open data policies, it’s important that they take into account the risks to individual privacy that come with sharing data publicly. In order to ensure open data privacy, open data managers and departmental data owners within governments need a standardized methodology to assess the privacy risks and benefits of a dataset. This methodology is a valuable component of building what the Future of Privacy Forum (FPF) calls a “mature open data program.” 

In their City of Seattle Open Data Risk Assessment report, the FPF presents a Model Benefit-Risk Analysis that can be utilized to evaluate datasets and determine whether or not they should be published openly. This analysis is based on work by the National Institute of Standards and Technology, the University of Washington, the Berkman Klein Center, and the City of San Francisco. There are five steps to the analysis:

  1. Evaluate the information the dataset contains

This step involves identifying whether the dataset contains direct or indirect identifiers, sensitive attributes, non-identifiable information, spatial data, or other information; assessing whether the dataset is linkable to other datasets; and analyzing the “context in which the data was obtained.”
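As an illustration of this inventory step, the field-to-category mapping below is a hypothetical sketch: the category names follow the report's terminology, but the column names and the `classify_fields` helper are invented for this example.

```python
# Hypothetical sketch of Step 1: tagging each dataset field by the kind of
# information it carries. Category names mirror the report's terminology;
# the field lists are invented examples, not drawn from the FPF analysis.

FIELD_CATEGORIES = {
    "direct_identifier": {"name", "email", "license_plate"},
    "indirect_identifier": {"zip_code", "birth_date", "gender"},
    "sensitive_attribute": {"diagnosis", "salary"},
    "spatial": {"latitude", "longitude"},
}

def classify_fields(columns):
    """Map each column to a category; default to non-identifiable."""
    tagged = {}
    for col in columns:
        tagged[col] = next(
            (cat for cat, names in FIELD_CATEGORIES.items() if col in names),
            "non_identifiable",
        )
    return tagged

print(classify_fields(["name", "zip_code", "trip_count"]))
```

An inventory like this makes the later linkability question concrete: any column tagged as an indirect identifier is a candidate for linkage against other public datasets.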

  2. Evaluate the benefits associated with releasing the dataset

This step includes considering the potential benefits and users of the dataset, including identifying whether the data fields involve aggregate data or individual records. To evaluate the potential benefits of the dataset, the evaluator selects a qualitative and quantitative value of the benefits and then selects a value for the likelihood of those benefits occurring. Those ratings are then compared to identify the overall benefits of releasing the dataset. 

  3. Evaluate the risks associated with releasing the dataset

This step includes considering the potential privacy risks and negative uses of the dataset. Foreseeable privacy risks include re-identification (and false re-identification) impacts on individuals and/or organizations; data quality and equity impacts; and public trust impacts. To evaluate the potential privacy risks of the dataset, the evaluator selects a qualitative and quantitative value of the risks and then selects a value for the likelihood of those risks occurring. Those ratings are then compared to identify the overall privacy risk of releasing the dataset.
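Steps 2 and 3 share the same structure: a magnitude rating is compared against a likelihood rating to produce an overall score. The three-level ordinal scale and the min-of-the-two combination rule below are assumptions made for illustration; the FPF worksheets define their own rating tables.

```python
# Minimal sketch of the value-times-likelihood comparison used in Steps 2
# and 3. The ordinal scale and the combination rule are assumptions for
# illustration, not the FPF model's published ratings.

LEVELS = ["low", "medium", "high"]

def overall_rating(magnitude, likelihood):
    """Combine a benefit/risk magnitude with its likelihood of occurring.

    Taking the lower of the two ordinals means a high-magnitude benefit
    that is unlikely to materialize yields only a medium overall score.
    """
    idx = min(LEVELS.index(magnitude), LEVELS.index(likelihood))
    return LEVELS[idx]

benefit = overall_rating("high", "medium")   # discounted by likelihood
risk = overall_rating("medium", "low")       # unlikely harm scores low
```

The same function applies unchanged whether the magnitude being rated is a benefit (Step 2) or a privacy risk (Step 3).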

  4. Weigh the benefits against the risks of releasing the dataset

This step includes combining the scores from Steps 2 and 3 in order to determine whether to a) release the dataset openly, b) release it in a limited environment, c) create formal application and oversight mechanisms before publishing the dataset, or d) keep the dataset closed unless the risk can be reduced or there are other public policy reasons to consider. In this step, it is important to weigh the level of acceptable privacy risk against the overall benefits of publishing the dataset openly and, if necessary, to determine what technical, administrative, and legal controls can be put in place to mitigate the identified risks. Technical controls include suppression, generalization/blurring, pseudonymization, aggregation, visualizations, perturbation, k-anonymity, differential privacy, and synthetic data. Administrative and legal controls include contractual provisions, access fees, data enclaves, tiered access controls, and ethical and/or disclosure review boards.
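The four outcomes above can be sketched as a decision over the benefit and risk ratings produced in Steps 2 and 3. The threshold logic and the `release_decision` helper below are hypothetical; the FPF model leaves this weighing to the evaluator's judgment rather than prescribing a formula.

```python
# Hypothetical sketch of Step 4: mapping the benefit and risk ratings to
# one of the report's four release outcomes. The threshold rules are an
# assumption for illustration, not the FPF's published decision logic.

LEVELS = ["low", "medium", "high"]

def release_decision(benefit, risk):
    b, r = LEVELS.index(benefit), LEVELS.index(risk)
    if r == 0:
        return "release openly"
    if b > r:
        return "release in a limited environment"
    if b == r:
        return "formal application and oversight before publishing"
    return "keep closed unless risk can be mitigated"

print(release_decision("high", "medium"))
```

Mitigation feeds back into this decision: applying a technical control such as suppression or aggregation may lower the risk rating enough to move a dataset from a restricted outcome toward open release.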

  5. Evaluate countervailing factors

This step includes considering any factors that may “justify releasing a dataset openly regardless of its privacy risk.” For example, if releasing the dataset is in the public interest (e.g., the salaries of elected officials), it is important to analyze the dataset “holistically” and proceed cautiously, because once a dataset is published openly it cannot be made closed again. It is also important to document the analysis, considerations, and reasoning behind publishing a dataset openly, especially if it was initially determined to remain closed. This documentation is key to building and maintaining trust in the open data program.


Original paper by the Future of Privacy Forum: fpf.org/wp-content/uploads/2018/01/FPF-Open-Data-Risk-Assessment-for-City-of-Seattle.pdf

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
