
Embedding Values in Artificial Intelligence (AI) Systems

September 13, 2021

šŸ”¬ Research summary by Dr. Andrea Pedeferri, instructional designer and leader in higher education (Faculty at Union College), and founder of Logica, helping learners become more efficient thinkers.

[Original paper by Ibo van de Poel]


Overview: Though there are numerous high-level normative frameworks, it is still quite unclear how, or whether, values can be implemented in AI systems. Van de Poel and Kroes (2014) provided an account of how to embed values in technology. The current article proposes to expand that view to complex AI systems and explain how values can be embedded in technological systems that are ā€œautonomous, interactive, and adaptiveā€.


Introduction

Though there are numerous high-level normative frameworks, it is still quite unclear how, or whether, those frameworks can be implemented in AI systems. Van de Poel and Kroes (2014) provided an account of how to embed values in technology in general. The current article proposes to expand that view to AI systems, which, according to the author, have five building blocks: ā€œtechnical artifacts, institutions, human agents, artificial agents, and technical normsā€. This paper is a very useful guide to understanding how values can be embedded in a complex system composed of multiple parts that interact in different ways.

Key Insights

  1. Embedding Values

Organizations such as the EU High-Level Expert Group on AI and the IEEE have provided lists of high-level ethical values and principles to implement in AI systems. Whatever your views on values might be, the paper points out that we need an account of what it means for those values to be embedded. To start, a set of values is said to be ā€˜embedded’ only if it is integrated into the system by design. That is, those who design the system should intentionally build it with a specific set of values in mind. More is needed, though, because even if a system is designed to comply with certain values, that does not mean it will actually realize those values.

So the paper proposes the following definition of ā€œembodied valuesā€: ā€œThe embodied value is the value that is both intended (by the designers) and realized if the artifact or system is properly used.ā€

Drawing on both the current paper and Van de Poel and Kroes (2014), we have the following set of useful definitions:

Designed value: any value that is intentionally part of the design of a technological system 

Realized value: any value that the (appropriate) use of the system is prone to bring about 

Embedded value: any value that is both designed and realized. Thus, a value-embedded system is a system that, because of the way it was designed, will bring about certain values (when it is properly used).
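Read set-theoretically, these three definitions amount to an intersection: a value is embedded just in case it appears in both the designed and the realized sets. Here is a minimal Python sketch of that reading (our illustration; the paper itself offers no formalism, and the example values are hypothetical):

```python
# Illustrative only: the three definitions above, modelled as sets of values.
designed = {"fairness", "privacy", "transparency"}  # intended by the designers
realized = {"fairness", "efficiency"}               # brought about by proper use

# An embedded value is any value that is both designed and realized.
embedded = designed & realized
print(embedded)  # {'fairness'}
```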

As the paper explains, this opens the door to the idea of a feedback loop: when an intended value is not realized, something has to change in the way the system is used and/or designed. Similarly, if a system is used in a way that is contrary to its intended values, a re-design might be in order. As the author points out, the practice of re-designing systems to avoid unintended consequences ā€œis particularly important in the case of AI systems, which due to the adaptive abilities of AI, may acquire system properties that were never intended or foreseen by the original designers.ā€
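That feedback loop can be sketched as a simple monitoring check. The function and names below are hypothetical, not from the paper:

```python
# Illustrative sketch of the feedback loop: compare the values a system was
# designed for with the values it actually realizes, and flag what needs work.
def review(designed: set[str], realized: set[str]) -> list[str]:
    actions = []
    unrealized = designed - realized
    if unrealized:
        # Intended values that are not realized call for a change in the
        # system's design and/or in how it is used.
        actions.append(f"re-design or adjust use to realize: {sorted(unrealized)}")
    unintended = realized - designed
    if unintended:
        # Adaptive AI systems may acquire properties never foreseen by the
        # designers; these warrant auditing and possible re-design.
        actions.append(f"audit unintended outcomes: {sorted(unintended)}")
    return actions

print(review({"fairness", "privacy"}, {"fairness", "efficiency"}))
```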

  2. Embedding Values in AI Systems

This account provides a way to understand how values can be embedded in AI by looking at both the component and the system level. More specifically, the paper understands AI systems as socio-technical systems composed not only of ā€œtechnical artifacts, human agents, and institutionsā€ but also of ā€œartificial agents and certain technical norms that regulate interactions between artificial agents and other elements of the system.ā€ To clarify, a socio-technical system is a system that depends ā€œon not only technical hardware but also human behavior and social institutions for their proper functioning (cf. Kroes et al. 2006).ā€
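As a rough data-structure sketch, such a system might be modelled as named collections of components. Only the five building blocks come from the paper; the field names and comments are our hypothetical choices:

```python
from dataclasses import dataclass, field

# Illustrative only: the five building blocks the paper attributes to AI
# systems, modelled as named collections of components.
@dataclass
class AISystem:
    technical_artifacts: list[str] = field(default_factory=list)  # hardware, software
    human_agents: list[str] = field(default_factory=list)         # designers, users, monitors
    institutions: list[str] = field(default_factory=list)         # rules, organizations
    artificial_agents: list[str] = field(default_factory=list)    # autonomous components
    technical_norms: list[str] = field(default_factory=list)      # norms regulating agent interactions
```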

To start, the paper clarifies that an AI system will be the result of both social institutions and human agents interacting to design technological artifacts in accordance with certain values. Importantly, the paper points out that those social institutions will also be embedded with values. As such, the role of humans is key: they need to monitor and evaluate the outcomes and use of both the technological artifacts and the social institutions that influence the production and design of those technological artifacts. In addition, because of how AI systems work, there will also be technical norms that regulate how artificial agents interact with humans and social institutions. As such, these norms will embed and promote certain values. 

Therefore, in conclusion, an AI system embodies a value V if and only if all five of its main components (i.e., technical artifacts, institutions, human agents, artificial agents, and technical norms) either embody or intentionally promote V. As the author rightly points out, ā€œAI systems offer unique value-embedding opportunities and constraints because they contain additional building blocks compared to traditional sociotechnical systems. While these allow new possibilities for value embedding, they also impose constraints and risks, e.g., the risk that an AI system disembodies certain values due to how it evolves. This means that for AI systems, it is crucial to monitor their realized values and to undertake continuous redesign activities.ā€
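The biconditional in this conclusion can be restated as a predicate over the five building blocks. A hedged sketch of that reading follows; for brevity we collapse ā€œembodiesā€ and ā€œintentionally promotesā€ into a single relation, and all names are ours:

```python
# Illustrative only: an AI system embodies a value V iff every one of its
# five building blocks either embodies or intentionally promotes V.
COMPONENTS = ("technical artifacts", "institutions", "human agents",
              "artificial agents", "technical norms")

def system_embodies(value: str, values_of: dict[str, set[str]]) -> bool:
    # values_of maps each building block to the values it embodies or promotes.
    return all(value in values_of.get(component, set())
               for component in COMPONENTS)

values_of = {component: {"fairness"} for component in COMPONENTS}
print(system_embodies("fairness", values_of))  # True: all five components align
print(system_embodies("privacy", values_of))   # False: no component carries it
```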

Between the lines

The paper is a very useful guide to understanding how values can be embedded in a complex system composed of multiple parts that interact in different ways. The next step is to figure out how this analysis connects to the debate on trust and trustworthy AI: given the current way we understand value-embedded AI, is it possible to build an AI we can actually trust?

