Montreal AI Ethics Institute

Democratizing AI ethics literacy

Data Capitalism and the User: An Exploration of Privacy Cynicism in Germany

March 28, 2021

🔬 Research summary by Sarah P. Grant, a freelance writer dedicated to covering the implications of AI and big data analytics.

[Original paper by Christoph Lutz, Christian Pieter Hoffmann, Giulia Ranzini]


Overview: This paper explores the many dimensions of privacy cynicism in a study of Internet users in Germany. The researchers examine attitudes of uncertainty, powerlessness, and resignation toward data handling by Internet companies, and find that people do not consider privacy protection to be entirely futile.


Introduction

For many people, social media is a critical avenue for social participation. But users might feel powerless if they perceive that they have no choice but to give up their personal privacy in exchange for digital inclusion.

Lutz et al. examine these types of attitudes in this study, which analyzes data from a 2017 survey of 1,008 respondents in Germany. Their aim is to build on previous work related to the "privacy paradox" (the observation that privacy concerns do not always align with protection behaviours) and to explore the phenomenon of privacy cynicism.

The authors define privacy cynicism as "an attitude of uncertainty, powerlessness, mistrust, and resignation toward data handling by online services that renders privacy protection subjectively futile." They assert that privacy cynicism is more than an attitude or belief; it is also a coping mechanism. As the authors maintain, this is the first study to contribute quantitative empirical evidence on privacy cynicism to online privacy research.

Data Capitalism

The authors place their quantitative findings within the broader context of an interdisciplinary literature review. They reference the work of scholars who reflect on surveillance capitalism and data capitalism specifically, and emphasize that research in this area focuses on how data extraction is a central component of digital platforms’ business models. A common thread in these critiques of digital platforms is that they “challenge user agency,” in that users feel they have to choose between having meaningful social relationships or maintaining their privacy. 

The Privacy Paradox

Their review of privacy paradox literature goes as far back as 1977, when the assertion was made that people acquire optimal privacy levels through their ability to control personal interactions. They reference several studies and note that, while it has been widely discussed, empirical evidence of the privacy paradox is actually weak.

Surveillance Realism, Privacy Apathy, and Privacy Fatigue

The authors also review approaches to digital inclusion coping mechanisms other than privacy cynicism, which include privacy fatigue, surveillance realism, and privacy apathy.

Surveillance realism, for example, covers both an unease about data collection and a "normalization" that distracts people from envisioning alternatives. Privacy apathy refers to a lack of interest in privacy protection, a phenomenon observed in the US, while privacy fatigue is a "negative coping mechanism, where individuals become disengaged and fail to protect themselves." Lutz et al. observe that cynicism is a core component of privacy fatigue.

Privacy Cynicism

The authors of this study chose to focus on privacy cynicism because it has roots in social psychology and is tested with more generalizable data. They describe cynicism in great depth, noting that it is rooted in assumptions of self-interest. Cynicism is also about powerlessness: when one party in a relationship has little control over decision making, that party grows cynical. Risks are then perceived as inevitable because they lie outside the person's control.

They argue that the combination of data capitalist business models along with design goals of maximizing user engagement might make it too complicated for users to consider their desired level of disclosure for specific situations.

Results

The quantitative study tests several hypotheses and yields many key findings. In general, the research reveals that German users feel “quite powerless and distrustful” but do not harbour widespread resignation. Internet skills mitigate privacy cynicism, but do not eliminate feelings of mistrust. People tend to be more cynical after they have had a privacy threat experience, but mistrust does not appear to stem from experience.

Powerlessness is the most prevalent factor associated with privacy cynicism, while resignation that produces the perceived futility of privacy-protecting behaviours is the least prevalent factor. 

One important finding is that privacy concerns have a positive effect on privacy protection behaviour. Therefore, the researchers find no evidence for the privacy paradox in this study. 

Implications for Public Policy and Future Research

The authors state that "lacking control over the sharing of personal data online appears as the most salient dimension of privacy cynicism," and that policy and other interventions should therefore focus on giving agency back to users. They also suggest that future research could examine powerlessness in relation to the kinds of business models described by researchers who focus on data capitalism and surveillance capitalism.

It is important to note that the work of Lutz et al. also provides the foundations for further investigations beyond privacy, setting the stage for future explorations into whether an awareness of social media’s impacts on personal well-being and democracy contributes to user cynicism.
