Montreal AI Ethics Institute

Democratizing AI ethics literacy


Understanding technology-induced value change: a pragmatist proposal

June 28, 2022

🔬 Research Summary by Ibo van de Poel, Antoni van Leeuwenhoek Professor in Ethics and Technology at TU Delft, who holds an ERC Advanced Grant on technological design and value change.

[Original paper by Ibo van de Poel and Olya Kudina]


Overview: The introduction of new technologies into society may sometimes lead to changes in social and moral values. For example, explainability has been articulated as a new value in response to the opaqueness of machine learning. The article offers a new theoretical account of value change based on philosophical pragmatism.


Introduction

If we want to design technologies, like artificial intelligence systems, for moral and social values, we should not just look at current values but be aware that values may change over time. When the current energy systems were designed, sustainability was not yet a major value; due to climate change, we are now confronted with the need for an energy transition. In the future, it would be desirable to design proactively for the possibility of value change rather than adapting entrenched sociotechnical systems later. Doing so requires a better theoretical understanding of why and how values may change, and that is what this article aims to offer. Building on philosophical pragmatism, it proposes to understand values as tools to address moral problems. Consequently, it suggests that when humanity is confronted with new moral problems, new values may be needed to properly address them. Examples are the environmental problems that since the 1980s have led to the articulation of sustainability as a value and, more recently, explainability in AI, which arose in response to the moral problem of opaqueness and lack of transparency.

Key Insights

A pragmatist understanding of value

The article builds on John Dewey’s philosophical pragmatism. Dewey was a prominent American thinker at the turn of the 20th century. In his writings, three meanings of values can be distinguished, namely 1) immediate value, which is a direct affective reaction to a situation, 2) value as a result of judgment, and 3) generalized values that are used as hypotheses for judging new situations. The authors propose to foreground the third meaning and understand values as ‘evaluative devices that carry over from earlier experiences and are (to some extent) shared in society’. 

So understood, values fulfill several functions in (moral) evaluation and inquiry, namely: (1) discovering what is morally salient in a situation; (2) normatively evaluating situations; (3) suggesting courses of action to resolve morally problematic situations; (4) judging whether a problematic situation has been resolved; and (5) providing moral justification.

Value change

On a pragmatist account, values can change if we are confronted with new types of moral problems that existing values cannot sufficiently address. To flesh this out, the authors zoom in on what Dewey calls inquiry. Inquiry starts with an indeterminate situation, that is, a situation that is somehow unsettling, incomplete, or felt as unpleasant. For Dewey, an indeterminate situation is not yet a morally problematic situation, as the latter already requires a certain interpretation of the situation, e.g. based on existing values. The aim of inquiry is to transform an indeterminate situation into a determinate one. Values are the intellectual tools we use in inquiry, and they may be more or less successful in helping to transform an indeterminate situation into a determinate one.

When we are confronted with a new situation, we bring existing values to it on the one hand, while on the other hand we often have a direct, immediate, unreflective valuing of that situation. If there is a tension between this direct valuing and the generalized values, it triggers a process of inquiry, and this process may result in value change. Depending on how exactly this process evolves, the authors distinguish three dynamics of value change: value dynamism, value adaptation, and value emergence.

Value dynamism

The authors speak of value dynamism when a tension between generalized values and the direct valuing of a situation is resolved through judgment, but the resolution does not carry over to new situations: the value is reinterpreted for the situation at hand, but this does not result in a societal value change.

The example that the authors give is the experimental use of Google Glass. An important generalized value in this case is informational privacy. However, people's immediate valuings ranged from declaring 'the end of privacy' to discomfort with people wearing Glass during dinner or in public (whether it was recording or not). These direct valuings were not (fully) covered by the existing value of informational privacy, and through judgment more specific notions of privacy developed, addressing, for example, spatial privacy concerns. However, these reinterpretations did not affect the general understanding of privacy as a generalized value.

Value adaptation

Value adaptation starts off similarly to value dynamism, but here the revaluation of existing values does not remain local and instead carries over to new situations. The authors cite the example of the 'right to be forgotten' in relation to the Internet. They suggest that the Internet was initially based on values like remembering and storing information forever. In practice, however, people were confronted with the negative effects of all information being kept on the Internet, such as being fired from a teaching position because of an old Facebook photo with alcohol bottles. This led to new valuings in tension with the dominant value of remembering and storing all information. Initially, such revaluations were local, but around 2014 court cases against Google made the value change more lasting and general.

Value emergence

The third dynamic discussed by the authors is value emergence. In this case, there is not so much a reinterpretation or revaluation of existing values, but rather the emergence of a new value that doesn't have a predecessor. The authors mention the value of sustainability, which emerged around the 1980s and has gradually become more prominent as a value to deal with the tension between economic development and environmental problems.

Between the lines

The article is quite abstract and philosophical in nature. The authors offer a general account of value change, even more general than technology-induced value change. They also point out how their account may add to existing theoretical accounts of value change.

The authors suggest that value change is relevant for the design of new technologies, but they do not develop these implications in detail in this article. The publication is part of a larger project that also aims to explore how value change can be accounted for when designing new technologies (see www.valuechange.eu).

When it comes to AI, value change seems a relevant phenomenon: the example of explainability as a (somewhat) new value was already mentioned, and it is well conceivable that AI will raise new moral problems in the future that require new values. A relevant question is what this would mean for the current design of AI systems. One approach would be to design such systems so that they can be adjusted at a later stage to account for new values (see also https://doi.org/10.1007/s10676-018-9461-9).


