
Research summary: Appendix C: Model Benefit-Risk Analysis

June 17, 2020

Summary contributed by Victoria Heath (@victoria_heath7), Communications Manager at Creative Commons.

*Author & link to original paper at the bottom


Data transparency is a key goal of the open data movement, and as different federal and municipal governments create open data policies, it’s important that they take into account the risks to individual privacy that come with sharing data publicly. In order to ensure open data privacy, open data managers and departmental data owners within governments need a standardized methodology to assess the privacy risks and benefits of a dataset. This methodology is a valuable component of building what the Future of Privacy Forum (FPF) calls a “mature open data program.” 

In their City of Seattle Open Data Risk Assessment report, the FPF presents a Model Benefit-Risk Analysis that can be used to evaluate datasets and determine whether they should be published openly. This analysis is based on work by the National Institute of Standards and Technology, the University of Washington, the Berkman Klein Center, and the City of San Francisco. There are five steps to the analysis:

  1. Evaluate the information the dataset contains

This step involves identifying whether the dataset contains direct or indirect identifiers, sensitive attributes, non-identifiable information, spatial data, or other information; assessing whether the dataset is linkable to other datasets; and analyzing the “context in which the data was obtained.”
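As an illustration of this inventory step, here is a minimal sketch of how a dataset’s fields could be classified before release. The field names, categories, and review rule are hypothetical examples, not part of the FPF methodology:

```python
from dataclasses import dataclass
from enum import Enum

class FieldType(Enum):
    DIRECT_IDENTIFIER = "direct identifier"      # e.g. name, account number
    INDIRECT_IDENTIFIER = "indirect identifier"  # e.g. ZIP code, birth date
    SENSITIVE_ATTRIBUTE = "sensitive attribute"  # e.g. health status
    SPATIAL = "spatial data"                     # e.g. GPS coordinates
    NON_IDENTIFIABLE = "non-identifiable"

@dataclass
class FieldAssessment:
    name: str
    field_type: FieldType
    linkable_to_external_data: bool  # can it be joined with other datasets?

# Hypothetical inventory for a transit-usage dataset (illustrative only).
inventory = [
    FieldAssessment("rider_id", FieldType.DIRECT_IDENTIFIER, True),
    FieldAssessment("home_zip", FieldType.INDIRECT_IDENTIFIER, True),
    FieldAssessment("trip_count", FieldType.NON_IDENTIFIABLE, False),
]

# Flag the dataset for closer review if any field is a direct identifier
# or is linkable to external datasets.
needs_review = any(
    f.field_type == FieldType.DIRECT_IDENTIFIER or f.linkable_to_external_data
    for f in inventory
)
print(needs_review)  # True
```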

  2. Evaluate the benefits associated with releasing the dataset

This step involves considering the potential benefits and users of the dataset, including identifying whether the data fields contain aggregate data or individual records. To evaluate the potential benefits, the evaluator assigns a qualitative and quantitative value to the benefits and then a value for the likelihood of those benefits occurring. The two ratings are then compared to determine the overall benefit of releasing the dataset.
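A minimal sketch of the value-and-likelihood comparison follows, assuming simple ordinal scales; the FPF worksheet defines its own qualitative categories and compares the ratings rather than prescribing a formula. The same mechanics apply to the risk ratings in Step 3:

```python
# The ordinal scales below are illustrative assumptions, not FPF's categories.
VALUE = {"low": 1, "medium": 2, "high": 3}
LIKELIHOOD = {"unlikely": 1, "possible": 2, "likely": 3}

def overall_rating(value: str, likelihood: str) -> int:
    """Combine a value rating with its likelihood into a single score.
    Multiplying the two is one simple convention for comparing ratings."""
    return VALUE[value] * LIKELIHOOD[likelihood]

benefit_score = overall_rating("high", "likely")    # 3 * 3 = 9
risk_score = overall_rating("medium", "possible")   # 2 * 2 = 4
```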

  3. Evaluate the risks associated with releasing the dataset

This step involves considering the potential privacy risks and negative uses of the dataset. Foreseeable privacy risks include re-identification (and false re-identification) impacts on individuals and/or organizations; data quality and equity impacts; and public trust impacts. To evaluate these risks, the evaluator assigns a qualitative and quantitative value to the risks and then a value for the likelihood of those risks occurring. The two ratings are then compared to determine the overall privacy risk of releasing the dataset, using the same value-and-likelihood mechanics as the sketch under Step 2.

  4. Weigh the benefits against the risks of releasing the dataset

This step involves combining the scores from Steps 2 and 3 to determine whether to a) release the dataset openly, b) release it in a limited environment, c) create formal application and oversight mechanisms before publishing it, or d) keep it closed unless the risk can be reduced or other public policy reasons apply. Here it is important to weigh the level of acceptable privacy risk against the overall benefits of publishing the dataset openly, and to consider what technical, administrative, and legal controls can be put in place to mitigate the identified risks. Technical controls include suppression, generalization/blurring, pseudonymization, aggregation, visualizations, perturbation, k-anonymity, differential privacy, and synthetic data. Administrative and legal controls include contractual provisions, access fees, data enclaves, tiered access controls, and ethical and/or disclosure review boards.
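To make the weighing concrete, here is a minimal sketch that maps a benefit score and a risk score (as computed in the Step 2 sketch) onto the four release options. The thresholds are illustrative assumptions; the FPF worksheet leaves the actual weighing to the evaluator’s judgment:

```python
def release_decision(benefit: int, risk: int) -> str:
    """Map combined benefit and risk scores to the four Step 4 outcomes.
    The thresholds below are illustrative assumptions, not FPF's rules."""
    if risk <= 2:
        return "a) release the dataset openly"
    if benefit > risk:
        return "b) release in a limited environment"
    if benefit == risk:
        return "c) require formal application and oversight before publishing"
    return "d) keep closed unless the risk can be reduced"

print(release_decision(benefit=9, risk=4))  # "b) release in a limited environment"
```

Of the technical controls listed, differential privacy is the most formally specified. Below is a minimal sketch of the Laplace mechanism, assuming a simple count query with sensitivity 1 (numpy is used for sampling; the function name and parameters are illustrative):

```python
import numpy as np

def noisy_count(true_count: int, epsilon: float = 1.0) -> float:
    """Release a count with Laplace noise calibrated to sensitivity 1.
    Smaller epsilon gives stronger privacy at the cost of noisier output."""
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

print(noisy_count(1024, epsilon=0.5))  # e.g. 1026.7; varies per draw
```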

  5. Evaluate countervailing factors

This step involves considering any factors that may “justify releasing a dataset openly regardless of its privacy risk.” For example, if releasing the dataset is in the public interest (e.g., the salaries of elected officials), it is important to analyze the dataset “holistically” and proceed cautiously, because once a dataset is published openly it cannot be made closed again. It is also important to document the analysis, considerations, and reasoning behind publishing a dataset openly, especially if it was initially determined to remain closed. This documentation is key to building and maintaining trust in the open data program.


Original paper by the Future of Privacy Forum: fpf.org/wp-content/uploads/2018/01/FPF-Open-Data-Risk-Assessment-for-City-of-Seattle.pdf
