Montreal AI Ethics Institute

Democratizing AI ethics literacy


Data Capitalism and the User: An Exploration of Privacy Cynicism in Germany

March 28, 2021

🔬 Research summary by Sarah P. Grant, a freelance writer dedicated to covering the implications of AI and big data analytics.

[Original paper by Christoph Lutz, Christian Pieter Hoffmann, Giulia Ranzini]


Overview: This paper explores the many dimensions of privacy cynicism through a study of Internet users in Germany. The researchers examine attitudes of uncertainty, powerlessness, and resignation toward data handling by Internet companies, and find that people do not consider privacy protection entirely futile.


Introduction

For many people, social media is a critical avenue for social participation. But users might feel powerless if they perceive that they have no choice but to give up their personal privacy in exchange for digital inclusion.

Lutz et al. examine these attitudes in a study that analyzes data from a 2017 survey of 1,008 respondents in Germany. Their aim is to build on previous work on the “privacy paradox” (the observation that privacy concerns do not always align with protection behaviours) and to explore the phenomenon of privacy cynicism.

The authors define privacy cynicism as “an attitude of uncertainty, powerlessness, mistrust, and resignation toward data handling by online services that renders privacy protection subjectively futile.” They assert that privacy cynicism is more than an attitude or belief; it is also a coping mechanism. The authors maintain that theirs is the first study to contribute quantitative empirical evidence on privacy cynicism to online privacy research.

Data Capitalism

The authors place their quantitative findings within the broader context of an interdisciplinary literature review. They reference the work of scholars who reflect on surveillance capitalism and data capitalism specifically, and emphasize that research in this area focuses on how data extraction is a central component of digital platforms’ business models. A common thread in these critiques of digital platforms is that they “challenge user agency,” in that users feel they have to choose between having meaningful social relationships or maintaining their privacy. 

The Privacy Paradox

Their review of the privacy paradox literature reaches back to 1977, when it was first asserted that people achieve optimal privacy levels through their ability to control personal interactions. They reference several studies and note that, while the privacy paradox has been widely discussed, empirical evidence for it is actually weak.

Surveillance Realism, Privacy Apathy, and Privacy Fatigue

The authors also review coping mechanisms related to digital inclusion other than privacy cynicism: privacy fatigue, surveillance realism, and privacy apathy.

Surveillance realism, for example, covers both an unease about data collection and a “normalization” that distracts people from envisioning alternatives. Privacy apathy refers to a lack of privacy-protective behaviour, a phenomenon described among US Internet users, while privacy fatigue is a “negative coping mechanism, where individuals become disengaged and fail to protect themselves.” Lutz et al. observe that cynicism is a core component of privacy fatigue.

Privacy Cynicism

The authors of this study chose to focus on privacy cynicism because it has roots in social psychology and can be tested with more generalizable data. They describe cynicism in depth, noting that it involves assumptions of self-interest. Cynicism is also about powerlessness: when one party in a relationship has little control over decision-making, they grow cynical. Risks are then perceived as inevitable because they are beyond the person’s control.

They argue that the combination of data capitalist business models along with design goals of maximizing user engagement might make it too complicated for users to consider their desired level of disclosure for specific situations.

Results

The quantitative study tests several hypotheses and yields many key findings. In general, the research reveals that German users feel “quite powerless and distrustful” but do not harbour widespread resignation. Internet skills mitigate privacy cynicism but do not eliminate feelings of mistrust. People tend to be more cynical after experiencing a privacy threat, but mistrust does not appear to stem from such experience.

Powerlessness is the most prevalent dimension of privacy cynicism, while resignation, the dimension that renders privacy-protecting behaviours subjectively futile, is the least prevalent.

One important finding is that privacy concerns have a positive effect on privacy protection behaviour. Therefore, the researchers find no evidence for the privacy paradox in this study. 

Implications for Public Policy and Future Research

The authors state that “lacking control over the sharing of personal data online appears as the most salient dimension of privacy cynicism,” and that policy and other interventions should therefore focus on giving agency back to users. They also suggest that future research could examine powerlessness in relation to the business models described by researchers of data capitalism and surveillance capitalism.

The work of Lutz et al. also lays the foundation for investigations beyond privacy, setting the stage for future explorations of whether awareness of social media’s impacts on personal well-being and democracy contributes to user cynicism.





© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.