Montreal AI Ethics Institute


Democratizing AI ethics literacy


Understanding technology-induced value change: a pragmatist proposal

June 28, 2022

🔬 Research Summary by Ibo van de Poel, Antoni van Leeuwenhoek Professor in Ethics and Technology at TU Delft, who holds an ERC Advanced Grant on technological design and value change.

[Original paper by Ibo van de Poel and Olya Kudina]


Overview: The introduction of new technologies into society may sometimes lead to changes in social and moral values. For example, explainability has been articulated as a new value in response to the opaqueness of machine learning. The article offers a new theoretical account of value change based on philosophical pragmatism.


Introduction

If we want to design technologies, like artificial intelligence systems, for moral and social values, we should not just look at current values but be aware that values may change over time. When the current energy systems were designed, sustainability was not yet a major value; due to climate change, we are now confronted with the need for an energy transition. It would be desirable to design proactively for the possibility of value change rather than adapting entrenched sociotechnical systems later. Doing so requires a better theoretical understanding of why and how values may change, and that is what this article aims to offer. It does so by building on philosophical pragmatism, proposing to understand values as tools for addressing moral problems. Consequently, it suggests that when humanity is confronted with new moral problems, new values may be needed to properly address them. Examples are the environmental problems that since the 1980s have led to the articulation of sustainability as a value, and more recently explainability in AI, which arose in response to the moral problem of opaqueness and lack of transparency in machine learning.

Key Insights

A pragmatist understanding of value

The article builds on John Dewey’s philosophical pragmatism. Dewey was a prominent American thinker at the turn of the 20th century. In his writings, three meanings of values can be distinguished, namely 1) immediate value, which is a direct affective reaction to a situation, 2) value as a result of judgment, and 3) generalized values that are used as hypotheses for judging new situations. The authors propose to foreground the third meaning and understand values as ‘evaluative devices that carry over from earlier experiences and are (to some extent) shared in society’. 

So understood, values fulfill several functions in (moral) evaluation and inquiry, namely (1) discovering what is morally salient in a situation; (2) normatively evaluating situations; (3) suggesting courses of action to resolve morally problematic situations; (4) judging whether a problematic situation has been resolved; and (5) providing moral justification.

Value change

On a pragmatist account, values can change if we are confronted with new types of moral problems that existing values cannot sufficiently address. To flesh this out, the authors zoom in on what Dewey calls inquiry. Inquiry starts with an indeterminate situation, that is, a situation that is somehow unsettling, incomplete, or felt as unpleasant. For Dewey, an indeterminate situation is not yet a morally problematic situation, as that already requires a certain interpretation of the situation, e.g. based on existing values. The aim of inquiry is to transform an indeterminate situation into a determinate one. Values are the intellectual tools that we use in inquiry, and they may be more or less successful in helping to transform an indeterminate situation into a determinate one.

When we are confronted with a new situation, we will on the one hand bring existing values to that situation, while on the other hand, we will often have a direct immediate unreflective valuing of that situation. If there is a tension between direct valuing and the generalized values, this will trigger a process of inquiry, and this process may result in value change. Depending on how this process exactly evolves, the authors distinguish between three dynamics of value change, namely value dynamism, value adaptation and value emergence.

Value dynamism

The authors speak of value dynamism when a tension between generalized values and the direct valuing of a situation is resolved through judgment, but the resolution does not carry over to new situations: the value is reinterpreted for the situation at hand, but this does not result in a societal value change.

The example that the authors give is the experimental use of Google Glass. An important generalized value in this case is informational privacy. However, people's immediate valuing ranged from 'the end of privacy' to discomfort with people wearing Glass during dinner or in public (whether it was recording or not). These direct valuings were not (fully) covered by the existing value of informational privacy, and through judgment more specific notions of privacy developed, also addressing, e.g., spatial privacy concerns. However, these reinterpretations did not affect the general understanding of privacy as a generalized value.

Value adaptation

Value adaptation starts off similarly to value dynamism, but here the revaluation of existing values does not remain local; it carries over to new situations. The authors cite the example of the 'right to be forgotten' in relation to the Internet. They suggest that the Internet was initially based on values like remembering and storing information forever. In practice, however, people were confronted with the negative effects of all information being kept on the Internet, like being fired from a teaching position because of an old Facebook photo with alcohol bottles. This led to new valuings in tension with the dominant value of remembering and storing all information. Initially, such revaluations were local, but around 2014 court cases against Google made the value change more lasting and general.

Value emergence

The third dynamic discussed by the authors is that of value emergence. In this case, there is not so much a reinterpretation or revaluation of existing values, but rather the emergence of a new value that does not have a predecessor. The authors mention the value of sustainability, which emerged around the 1980s and has gradually become more prominent as a value to deal with the tension between economic development and environmental problems.

Between the lines

The article is quite abstract and philosophical in nature. The authors offer a general account of value change, even more general than technology-induced value change. They also point out how their account may add to existing theoretical accounts of value change.

The authors suggest that value change is relevant for the design of new technologies, but they do not develop these implications in detail in this article. The publication is part of a larger project that also aims to explore how value change can be accounted for when designing new technologies (see www.valuechange.eu).

When it comes to AI, value change would seem a relevant phenomenon; the example of explainability as a (somewhat) new value was already mentioned, and it is quite conceivable that AI will raise new moral problems in the future that require new values. A relevant question is what that would mean for the current design of AI systems; one imaginable approach is to design such systems so that they are adjustable at a later stage to account for new values (see also https://doi.org/10.1007/s10676-018-9461-9).



About Us


Founded in 2018, the Montreal AI Ethics Institute (MAIEI) is an international non-profit organization equipping citizens concerned about artificial intelligence and its impact on society to take action.

© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.