Montreal AI Ethics Institute

Democratizing AI ethics literacy


Mapping value sensitive design onto AI for social good principles

September 7, 2021

🔬 Research summary by Dr. Marianna Ganapini (@MariannaBergama), our Faculty Director.

[Original paper by Steven Umbrello, Ibo van de Poel]


Overview: Value sensitive design (VSD) is a method for shaping technology in accordance with our values. In this paper, the authors argue that, when applied to AI, VSD faces some specific challenges (connected to machine learning, in particular). To address these challenges, they propose modifying VSD, integrating it with a set of AI-specific principles, and ensuring that the unintended uses and consequences of AI technologies are monitored and addressed.

Introduction

How do we bridge theory and practice when it comes to following ethical principles in AI? This paper aims to answer that very question by adopting value sensitive design: a set of steps for implementing values in technological innovation. Value sensitive design potentially applies to a vast range of technologies, but when used for AI and machine learning, it inevitably faces some specific challenges. The authors propose a way to address these problems by integrating value sensitive design with other actionable frameworks.

Key Insights

  1. Value sensitive design (VSD)

Value sensitive design (VSD) is a method originally developed by researchers at the University of Washington; it lays out actionable steps for designing technology in accordance with our values. These steps are grouped into three main categories: conceptual, empirical, and technical investigations. Conceptual analysis determines the appropriate set of values (drawn from the philosophical literature and/or from stakeholders’ expectations), whereas empirical investigations may survey direct and indirect stakeholders to understand their values and needs. The third set of steps looks into the technical limitations of, and resources for, designing a technology that follows the appropriate set of values.

Unfortunately, the self-learning capabilities of AI pose some specific challenges for VSD. Notoriously, models developed through machine learning can have features that were not initially designed or foreseen, and some of these features may be opaque and thus not easily detectable. This could mean that AI systems, originally designed following VSD, “may have unintended value consequences, […] or unintentionally ‘disembody’ values embedded in their original design.” As the authors explain, this means that we need design principles specific to this kind of technology and must expand VSD to address those challenges. The question is how to do that.

  2. Solutions

The authors propose to modify VSD in the following three ways: (1) VSD should include a set of AI-specific principles (AI4SG); (2) the goal of VSD should be not only to promote outcomes that avoid harm but also to contribute to the social good overall; (3) VSD should look at the downstream consequences of adopting a certain AI system to make sure the designed values are in fact respected.

2.1 VSD & AI4SG

Let’s start with the first point. The authors propose to adopt AI-specific principles in VSD. In particular, they look at AI4SG (AI for social good) principles, which are actionable guidelines inspired by the more high-level values of “respect for human autonomy, prevention of harm, fairness, and explicability”. These are the principles:

“(i) falsifiability and incremental deployment; (ii) safeguards against the manipulation of predictors; (iii) receiver-contextualised intervention; (iv) receiver-contextualised explanation and transparent purposes; (v) privacy protection and data subject consent; (vi) situational fairness; and (vii) human-friendly semanticisation.”

The authors of the paper point out that applying these specific principles in the design of AI systems would address some of the concerns mentioned above. This is because these steps are not only more practical than the high-level values but also specific to AI, and so they are the right tools for avoiding the challenges raised by this kind of technology. These principles are, in other words, a more concrete application of the key values (e.g. beneficence) we want to see as part of the design of AI going forward.

2.2 VSD & the social good

Here’s the second issue: the goal of VSD should be not only to promote outcomes that avoid doing harm but also to contribute to the social good, and so “there must be an explicit orientation toward socially desirable ends.” To promote this, the authors recommend that VSD adopt “the Sustainable Development Goals (SDGs), proposed by the United Nations, as the best approximation of what we collectively believe to be valuable societal ends”. Again, this is a matter of complementing and enriching VSD with a set of principles that actively try to promote the social good and, as such, should be part of the design of AI systems.

2.3 VSD and downstream consequences 

Finally, ongoing monitoring is needed to address possible unintended consequences of adopting AI systems. Indeed, once deployed, AI systems may not respect the original design values. This is why VSD needs to be applied across the entire “life cycle of an AI technology”: monitoring systems and adopting the necessary design changes when needed. The authors point out that prototyping and small-scale testing could really help address unforeseen consequences.

By combining these principles and ideas, the authors embrace a framework that encompasses the following recursive loop: 

Context Analysis (e.g. societal challenges, values for stakeholders) → Value Identification (e.g. beneficence, autonomy, SDGs, case-specific values) → Design Requirements (e.g. AI4SG) → Prototyping (e.g. small-scale testing)

This proposed framework is meant to take into account the various aspects of VSD while also addressing some of its shortcomings.
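As a purely illustrative sketch (not from the paper), the recursive loop above can be modeled as an iterative process in which small-scale testing feeds unintended consequences back into the context for the next round of value identification. All function names, values, and requirement mappings below are hypothetical placeholders:

```python
# Illustrative sketch of the recursive loop: Context Analysis ->
# Value Identification -> Design Requirements -> Prototyping, with
# test findings fed back into the context. All names are hypothetical.

# A few AI4SG-style requirements, keyed by the value they operationalise.
REQUIREMENTS = {
    "privacy": "privacy protection and data subject consent",
    "fairness": "situational fairness",
    "autonomy": "receiver-contextualised intervention",
}

def identify_values(context):
    """Value Identification: stakeholder values plus baseline values."""
    return sorted(set(context["stakeholder_values"]) | {"fairness"})

def prototype_and_test(requirements, context):
    """Prototyping / small-scale testing: surface latent stakeholder
    concerns (stand-ins for unintended consequences) that the current
    design requirements do not yet address."""
    return [v for v in context.pop("latent_values", []) if v not in requirements]

def vsd_loop(context, max_rounds=5):
    for _ in range(max_rounds):
        values = identify_values(context)
        requirements = {v: REQUIREMENTS[v] for v in values if v in REQUIREMENTS}
        issues = prototype_and_test(requirements, context)
        if not issues:
            return requirements  # design converged: no unaddressed issues
        # Downstream monitoring feeds back: treat the surfaced issues as
        # explicit stakeholder values so the next round addresses them.
        context["stakeholder_values"] = sorted(
            set(context["stakeholder_values"]) | set(issues)
        )
    raise RuntimeError("Design did not converge; revisit values and requirements")
```

The point of the sketch is only the shape of the process: each pass re-derives requirements from the current values, and testing can reopen the value-identification step rather than ending the design effort at deployment.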

Between the lines

It is important to find a way to bridge theory and practice when it comes to building ethical AI systems, and this paper charts a way forward. It brings together different methods and approaches by explaining how to integrate actionable steps within the VSD framework while also making sure the social good is taken into account. Now that we have a fairly comprehensive set of high-level values, future research will need to establish more precise, actionable, and concrete steps to embody those values within AI systems, and it will need to find new ways to determine the ethically relevant, downstream consequences of the use of those systems.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


Founded in 2018, the Montreal AI Ethics Institute (MAIEI) is an international non-profit organization equipping citizens concerned about artificial intelligence and its impact on society to take action.

  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.