
Mapping value sensitive design onto AI for social good principles

September 7, 2021

🔬 Research summary by Dr. Marianna Ganapini (@MariannaBergama), our Faculty Director.

[Original paper by Steven Umbrello, Ibo van de Poel]


Overview: Value sensitive design (VSD) is a method for shaping technology in accordance with our values. In this paper, the authors argue that, when applied to AI, VSD faces some specific challenges (connected to machine learning, in particular). To address these challenges, they propose modifying VSD, integrating it with a set of AI-specific principles, and ensuring that the unintended uses and consequences of AI technologies are monitored and addressed.

Introduction

How do we bridge theory and practice when it comes to following ethical principles in AI? This paper aims to answer that very question by adopting value sensitive design: a set of steps for implementing values in technological innovation. Value sensitive design potentially applies to a vast range of technologies, but when used in AI and machine learning, it inevitably faces some specific challenges. The authors propose a way to address these problems by integrating value sensitive design with other actionable frameworks.

Key Insights

  1. Value sensitive design (VSD)

Value sensitive design (VSD) is a method originally developed by researchers at the University of Washington, and it lays out actionable steps for designing technology in accordance with our values. These steps are grouped into three main categories: conceptual, empirical, and technical investigations. Conceptual analysis determines the appropriate set of values (coming from the philosophical literature and/or from the stakeholders’ expectations), whereas empirical investigations may survey direct and indirect stakeholders to understand their values and needs. The third set of steps looks into the potential technical limitations of, and resources for, designing a technology that follows the appropriate set of values.

Unfortunately, the self-learning capabilities of AI pose some specific challenges for VSD. Notoriously, models developed through machine learning can have features that were not initially designed or foreseen, and some of these features may be opaque and thus not easily detectable. This means that AI systems originally designed following VSD “may have unintended value consequences, […] or unintentionally ‘disembody’ values embedded in their original design.” As the authors explain, we therefore need design principles specific to this kind of technology, and VSD must be expanded to address these challenges. The question is how to do that.

  2. Solutions

The authors propose to modify VSD in the following three ways: (1) VSD should include a set of AI-specific principles (AI4SG); (2) the goal of VSD should be not only to promote outcomes that avoid harm but also to contribute to the social good overall; (3) VSD should look at the downstream consequences of adopting a given AI system, to make sure the designed values are in fact respected.

2.1 VSD & AI4SG

Let’s start with the first point. The authors propose to adopt AI-specific principles in VSD. In particular, they look at the AI4SG (AI for social good) principles, which are actionable guidelines inspired by the more high-level values of “respect for human autonomy, prevention of harm, fairness, and explicability”. These are the principles:

“(i) falsifiability and incremental deployment; (ii) safeguards against the manipulation of predictors; (iii) receiver-contextualised intervention; (iv) receiver-contextualised explanation and transparent purposes; (v) privacy protection and data subject consent; (vi) situational fairness; and (vii) human-friendly semanticisation.”

The authors of the paper point out that applying these specific principles in the design of AI systems would address some of the concerns mentioned above. This is because these steps are not only more practical than the high-level values, but they are also specific to AI and so are the right tools for meeting the challenges raised by this kind of technology. These principles are, in other words, a more concrete application of the key values (e.g. beneficence) we want to see as part of the design of AI going forward.
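
To illustrate what separates actionable principles from high-level values, here is a minimal, hypothetical sketch (not from the paper) that treats the seven AI4SG principles as an explicit design-review checklist. The principle names are quoted from the paper; the review function and the signed-off set are invented for illustration.

```python
# Hypothetical checklist encoding of the AI4SG principles (names quoted
# from the paper); each principle must be signed off before a design
# moves forward.

AI4SG_PRINCIPLES = [
    "falsifiability and incremental deployment",
    "safeguards against the manipulation of predictors",
    "receiver-contextualised intervention",
    "receiver-contextualised explanation and transparent purposes",
    "privacy protection and data subject consent",
    "situational fairness",
    "human-friendly semanticisation",
]

def outstanding_principles(signed_off):
    """Return the AI4SG principles a design has not yet addressed."""
    return [p for p in AI4SG_PRINCIPLES if p not in signed_off]

# Example: a design review where only two principles have been addressed.
remaining = outstanding_principles({
    "privacy protection and data subject consent",
    "situational fairness",
})
for principle in remaining:
    print("still open:", principle)
```

In practice, of course, “signing off” on a principle like situational fairness is a substantive design judgment, not a boolean; the checklist only makes the open obligations visible.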

2.2 VSD & the social good

Here’s the second issue: VSD should not only promote outcomes that avoid doing harm but also contribute to the social good, and so “there must be an explicit orientation toward socially desirable ends.” To promote this, the authors recommend that VSD adopt “the Sustainable Development Goals (SDGs), proposed by the United Nations, as the best approximation of what we collectively believe to be valuable societal ends”. Again, this is a matter of complementing and enriching VSD with a set of principles that actively try to promote the social good and that, as such, should be part of the design of AI systems.

2.3 VSD and downstream consequences 

Finally, ongoing monitoring is needed to address possible unintended consequences of adopting AI systems. Indeed, once deployed, AI systems may fail to respect their original design values. This is why VSD needs to be applied to the entire “life cycle of an AI technology”: monitoring systems in use and adopting the necessary design changes when needed. The authors point out that prototyping and small-scale testing could really help address unforeseen consequences.
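
As a rough illustration of what such lifecycle monitoring could look like, here is a hypothetical sketch, assuming each designed value is given a measurable proxy metric and a threshold fixed at design time; the value names, metrics, and numbers are all invented for this example.

```python
# Hypothetical lifecycle monitoring (not from the paper): compare each
# designed value's deployment-time measurement against a threshold fixed
# at design time, and flag the values the system no longer respects.

from dataclasses import dataclass

@dataclass
class ValueCheck:
    value: str        # the designed value being monitored
    measured: float   # latest deployment-time measurement of its proxy metric
    threshold: float  # minimum acceptable level fixed at design time

def violated_values(checks):
    """Return the designed values the deployed system no longer respects."""
    return [c.value for c in checks if c.measured < c.threshold]

violations = violated_values([
    ValueCheck("situational fairness", measured=0.78, threshold=0.85),
    ValueCheck("privacy protection", measured=0.96, threshold=0.90),
])
if violations:
    # Feed the violations back into a new design iteration.
    print("redesign needed for:", ", ".join(violations))
```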

By combining these principles and ideas, the authors embrace a framework that encompasses the following recursive loop: 

Context Analysis (e.g. societal challenges, values for stakeholders) → Value Identification (e.g. beneficence, autonomy, SDGs, case-specific values) → Design Requirements (e.g. AI4SG) → Prototyping (e.g. small-scale testing) → back to Context Analysis

This proposed framework is meant to take into account the various aspects of VSD while also addressing some of its shortcomings.
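
The loop can also be read procedurally. The following self-contained sketch is an invented rendering of that cycle: the stub functions stand in for human design activities, and only the control flow mirrors the framework described above.

```python
# Invented, self-contained rendering of the recursive VSD/AI4SG loop; the
# stubs stand in for human design activities, not real computations.

def identify_values(challenges):
    # Value Identification: high-level values plus case-specific ones.
    return ["beneficence", "autonomy"] + challenges

def derive_requirements(values):
    # Design Requirements: translate each value into an AI4SG-style requirement.
    return [f"operationalise {v}" for v in values]

def prototype_passes(requirements, iteration):
    # Prototyping: small-scale testing; pretend tests pass on the second pass.
    return iteration >= 2

challenges = ["situational fairness"]      # Context Analysis output
for iteration in range(1, 4):              # cap the sketch at three iterations
    requirements = derive_requirements(identify_values(challenges))
    if prototype_passes(requirements, iteration):
        print(f"design accepted after {iteration} iteration(s)")
        break
    # Unintended consequences surfaced by the prototype feed back into context.
    challenges.append(f"issue surfaced by prototype {iteration}")
```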

Between the lines

It is important to find a way to bridge theory and practice when it comes to building ethical AI systems. This paper charts a way forward to address that need. It brings together different methods and approaches, explaining how to integrate actionable steps within the VSD framework while also making sure the social good is taken into account. Now that we have a fairly comprehensive set of high-level values, future research will need to establish more precise, actionable, and concrete steps to embody those values within AI systems, and it will need to find new ways to determine the ethically relevant, downstream consequences of the use of those systems.
