Montreal AI Ethics Institute

(Re)Politicizing Digital Well-Being: Beyond User Engagements

October 13, 2022

🔬 Research Summary by Niall Docherty and Asia Biega.

Dr Niall Docherty analyzes, critiques, and builds “healthy” sociotechnical systems, currently at Microsoft Research New England and, from September 2022, at the Information School, University of Sheffield.

Dr Asia J. Biega is a tenure-track faculty member at the Max Planck Institute for Security and Privacy where she leads the Responsible Computing research group.

[Original paper by Niall Docherty and Asia Biega]


Overview: Concerns surrounding digital well-being have grown in recent years. To date, the issue has largely been studied in terms of individual user engagement and a vague notion of “time well spent”. Our paper argues that this is empirically and ideologically insufficient. We instead show how digital well-being ought to be recognized as a culturally relative notion, reflective of uneven societal pressures felt by situated individuals in online spheres. Our paper highlights the limits of user engagement metrics as a singular proxy for user well-being and suggests new ways for practitioners to attend to digital well-being based on its structural dimensions. Overall, we hope to reinvigorate the issue of digital well-being as a nexus point of political concern, through which multiple disciplines can study experiences of digital distress as symptomatic of wider social inequalities and sociotechnical relations of power.


Introduction

Digital well-being is chiefly characterized as a matter of personal self-control over technologies like social media. If you want to live well online, simply limit your screen time, don’t doom-scroll, engage in conscious use, get active – we’ve heard it all before. Interventions in the Human-Computer Interaction space have largely targeted these types of individualized digital engagements in their attempts to improve the well-being of users. This often results in “nudging” users toward “healthier” interactions with technology. However, our paper adopts an interdisciplinary approach to highlight the empirical, ideological, and political limits of this individualized approach to designing for digital well-being. We argue instead that user well-being is far greater than the behavioral sum of its parts, and ought to be treated as such in our computational interventions.

Key Insights

Well-being is a normative, relative concept

We begin by highlighting that there is no universally stable definition of well-being. Well-being is a relative concept in two related ways. First, the “well” in well-being mobilizes judgements about how humans should best live their lives. Second, the “being” in well-being requires an ontological definition of what the human being is. Determining what is good for the human is therefore always an evaluation of what the human individual is and ought to be. Moreover, these determinations are tied to the time and place of their articulation. For example, how do we know that well-being means the same for someone in, say, Manchester, England, as for someone in Bengaluru, India? Even within national bounds, well-being is unique to the individual in question. One person’s idea of living well involves a glass of wine and the TV, while it’s a wheatgrass shot and yoga for another.

Well-being is environmentally, politically, and socially conditioned

It has long been acknowledged that unequal income levels, housing quality, gendered discrimination, and racialized marginalization impact individual well-being. This has been studied productively within the social determinants of health framework. Accordingly, our paper argues that declining experiences of user well-being online should be linked to the conditions we know impact well-being offline.

We suggest that measuring relative depreciations of well-being in digital contexts can provide an opportunity to discuss the psychological impacts that intersecting inequalities and oppression have on users. Doing so would treat the psychological issues associated with specified forms of user engagement as social issues, rather than as personal behaviors separate from the material conditions that support and sustain them.

The limits of user engagement metrics 

In our paper, we examine the limits of user engagement as a proxy for digital well-being. We ground this exploration in a comparison between a diagnostic framework for digital addiction and state-of-the-art engagement metrics. Our analysis reveals that most dimensions of digital addiction cannot be captured by online user behavior measurements, as they relate to aspects of a person’s life typically not accessible to the platform. Moreover, we show how identical patterns in user engagement could be a manifestation of both well-being-increasing and well-being-decreasing behaviors. Overall, our paper shows that user engagement is neither a necessary nor a sufficient proxy for digital well-being.

Suggested paths forward for HCI practitioners 

Instead of relying on purely behaviorist measurements, system designers might consider alternative approaches to modeling digital well-being. From harm reduction frameworks, through value-sensitive and participatory design, to a decision not to design for well-being at all, a variety of approaches exist that could center the individual needs and material conditions of users themselves, as well as different understandings of well-being, rather than simply following the assumptions of culturally situated designers. In any case, practitioners might consider grounding their digital well-being projects in interdisciplinary dialogues – much like the dialogue that forms the backbone of our paper.

Between the lines

Failing to recognize the limits of user engagement as a proxy for well-being, and failing to advocate for supplementary well-being metrics, renders the social, cultural, and political factors that contribute to well-being unimportant. This enables existing power structures to remain in place unchecked, and the opportunity to link digital well-being with wider social justice issues is lost.

If computing experts wish to confront these issues head on, it is crucial to incorporate the structural, social, and systemic determinants of digital well-being in its conceptualization, measurement, and design. This will reinvigorate digital well-being as a key site of socially conscious, political action in the present age, rather than being simply an individual problem to be individually fixed.
