Understanding technology-induced value change: a pragmatist proposal

June 28, 2022

🔬 Research Summary by Ibo van de Poel, Antoni van Leeuwenhoek Professor in Ethics and Technology at TU Delft, who holds an ERC Advanced Grant on technological design and value change.

[Original paper by Ibo van de Poel and Olya Kudina]


Overview: The introduction of new technologies into society may sometimes lead to changes in social and moral values. For example, explainability has been articulated as a new value in response to the opaqueness of machine learning. The article offers a new theoretical account of value change based on philosophical pragmatism.


Introduction

If we want to design technologies, such as artificial intelligence systems, for moral and social values, we should not just look at current values but be aware that values may change over time. When the current energy systems were designed, sustainability was not yet a major value; due to climate change, we are now confronted with the need for an energy transition. It would be desirable to design proactively for the possibility of value change rather than having to adapt entrenched sociotechnical systems later. Doing so requires a better theoretical understanding of why and how values may change, and that is what this article aims to offer. Building on philosophical pragmatism, it proposes to understand values as tools for addressing moral problems. Consequently, it suggests that when humanity is confronted with new moral problems, new values may be needed to address them properly. Examples are the environmental problems that since the 1980s have led to the articulation of sustainability as a value, and, more recently, explainability in AI, which arose in response to the moral problem of opaqueness and lack of transparency.

Key Insights

A pragmatist understanding of value

The article builds on John Dewey’s philosophical pragmatism. Dewey was a prominent American thinker at the turn of the 20th century. In his writings, three meanings of value can be distinguished: 1) immediate value, a direct affective reaction to a situation; 2) value as the result of judgment; and 3) generalized values that are used as hypotheses for judging new situations. The authors propose to foreground the third meaning and to understand values as ‘evaluative devices that carry over from earlier experiences and are (to some extent) shared in society’.

So understood, values fulfill several functions in (moral) evaluation and inquiry: (1) discovering what is morally salient in a situation; (2) normatively evaluating situations; (3) suggesting courses of action to resolve morally problematic situations; (4) judging whether a problematic situation has been resolved; and (5) providing moral justification.

Value change

On a pragmatist account, values can change when we are confronted with new types of moral problems that existing values cannot adequately address. To flesh this out, the authors zoom in on what Dewey calls inquiry. Inquiry starts with an indeterminate situation, that is, a situation that is somehow unsettling, incomplete, or felt to be unpleasant. For Dewey, an indeterminate situation is not yet a morally problematic situation, as that already requires a certain interpretation of the situation, e.g., based on existing values. The aim of inquiry is to transform an indeterminate situation into a determinate one. Values are the intellectual tools we use in inquiry, and they may be more or less successful in helping to make that transformation.

When we are confronted with a new situation, we bring existing values to it, while at the same time we often have a direct, immediate, unreflective valuing of it. A tension between this direct valuing and the generalized values triggers a process of inquiry, and this process may result in value change. Depending on how the process evolves, the authors distinguish three dynamics of value change: value dynamism, value adaptation, and value emergence.

Value dynamism

The authors speak of value dynamism when a tension between generalized values and the direct valuing of a situation is resolved through judgment, but the resolution does not carry over to new situations: the value is reinterpreted for the situation at hand without resulting in a societal value change.

The example the authors give is the experimental use of Google Glass. An important generalized value in this case is informational privacy. However, people’s immediate valuings ranged from fears about ‘the end of privacy’ to discomfort with people wearing Glass during dinner or in public (whether it was recording or not). These direct valuings were not (fully) covered by the existing value of informational privacy, and through judgment more specific notions of privacy developed, addressing, for example, spatial privacy concerns. However, these reinterpretations did not affect the general understanding of privacy as a generalized value.

Value adaptation

Value adaptation starts off similarly to value dynamism, but here the revaluation of existing values does not remain local and instead carries over to new situations. The authors cite the example of the ‘right to be forgotten’ in relation to the Internet. They suggest that the Internet was initially based on values like remembering and storing information forever. In practice, however, people were confronted with the negative effects of all information being kept on the Internet, such as a teacher being fired over an old Facebook photo showing alcohol bottles. This led to new valuings in tension with the dominant value of remembering and storing all information. Initially, such revaluations were local, but around 2014 court cases against Google made the value change more lasting and general.

Value emergence

The third dynamic discussed by the authors is value emergence. In this case, there is not so much a reinterpretation or revaluation of existing values as the emergence of a new value that has no predecessor. The authors mention the value of sustainability, which emerged around the 1980s and has gradually become more prominent as a value for dealing with the tension between economic development and environmental problems.

Between the lines

The article is quite abstract and philosophical in nature. The authors offer a general account of value change, even more general than technology-induced value change. They also point out how their account may add to existing theoretical accounts of value change.

The authors suggest that value change is relevant for the design of new technologies but do not develop these implications in detail in this article. The publication is part of a larger project that also aims to further explore how value change can be accounted for when designing new technologies (see www.valuechange.eu).

When it comes to AI, value change seems a particularly relevant phenomenon; the example of explainability as a (somewhat) new value was already mentioned, and it is quite conceivable that AI will raise new moral problems in the future that will require new values. A relevant question is what that would mean for the current design of AI systems; one conceivable approach, sketched below, is to design such systems so that they can be adjusted at a later stage to account for new values (see also https://doi.org/10.1007/s10676-018-9461-9).
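To make that last idea concrete, here is a minimal illustrative sketch, not taken from the paper or the linked article: a decision component whose value criteria live in an external, editable profile rather than being hard-coded, so that a newly articulated value (say, explainability) can be registered or reweighted after deployment. All names here (ValueProfile, score_option, the example weights) are hypothetical.

```python
# Hypothetical sketch: an AI decision component whose value criteria are
# externalized configuration, so newly emerged values can be added or
# reweighted after deployment. Illustrative only, not the authors' proposal.
from dataclasses import dataclass, field


@dataclass
class ValueProfile:
    """Named values and their weights, kept outside the core system."""
    weights: dict[str, float] = field(default_factory=dict)

    def add_value(self, name: str, weight: float) -> None:
        # Value emergence: register a value that had no predecessor.
        self.weights[name] = weight

    def reweight(self, name: str, weight: float) -> None:
        # Value adaptation: a lasting, general shift in how a value counts.
        self.weights[name] = weight


def score_option(option_scores: dict[str, float], profile: ValueProfile) -> float:
    """Weighted sum of per-value scores; values absent from the profile count zero."""
    return sum(profile.weights.get(v, 0.0) * s for v, s in option_scores.items())


# At design time the profile might contain only the then-current values...
profile = ValueProfile({"privacy": 0.6, "accuracy": 0.4})
print(score_option({"privacy": 0.9, "accuracy": 0.7}, profile))

# ...and later be updated in the field when, e.g., explainability emerges.
profile.add_value("explainability", 0.3)
print(score_option({"privacy": 0.9, "accuracy": 0.7, "explainability": 0.5}, profile))
```

The design choice the sketch illustrates is simply that value criteria are treated as data rather than code, which is one way a system could remain open to the value adaptation and value emergence dynamics described above.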
