Montreal AI Ethics Institute

Democratizing AI ethics literacy

SoK: The Gap Between Data Rights Ideals and Reality

February 6, 2024

🔬 Research Summary by Yujin Potter, a postdoc at UC Berkeley, focusing on AI alignment, AI safety, blockchain, and DeFi.

[Original paper by Yujin Potter, Ella Corren, Gonzalo Munilla Garrido, Chris Hoofnagle, and Dawn Song]


Overview: This metastudy critically examines whether rights-based privacy laws, exemplified by the EU’s GDPR, actually empower individuals over their data. An analysis of 201 interdisciplinary empirical studies, news articles, and blog posts identifies 15 key questions about the efficacy of these laws, revealing often conflicting results and highlighting their limitations. The paper concludes with recommendations for policymakers and the computer science community and discusses alternative approaches to privacy regulation.


Introduction

Are we living in a world where we believe our privacy is protected?

Various measures, such as privacy laws, are meant to safeguard our privacy. However, their actual effectiveness is a subject of debate, which can undermine our confidence in them. Our paper critically examines the effectiveness of rights-based privacy laws, like the European Union’s General Data Protection Regulation (GDPR), in truly empowering individuals over their personal data. Through an extensive analysis of 201 interdisciplinary empirical studies, news articles, and blog posts, we uncover conflicting narratives. These narratives suggest that the current implementation of rights-based regimes may be inadequate and in need of improvement. Our findings delve into these laws’ complexities and potential shortcomings, offering a nuanced perspective on the state of data privacy today.

Key Insights 

In the digital age, where privacy is increasingly threatened, policymakers have established rights-based regimes to empower users. These regimes grant specific rights, such as accessing and erasing personal data. However, there is ongoing controversy regarding the actual effectiveness of these regimes in empowering users. In our study, we evaluated rights-based privacy approaches from the perspectives of the primary actors in the information economy: users, companies or developers, and regulators. Below, we summarize our findings on the GDPR’s impact and effectiveness.

Users’ perspective

For privacy law to be effective, several assumptions about users must hold: users have sufficient knowledge of their data rights, they will exercise those rights, and exercising them yields benefits. However, these assumptions may be overly optimistic. Some surveys show that users are unaware of their data rights. For example, the Eurobarometer reveals that many Europeans have never heard of the right to data portability or the right not to be subject to automated decision-making. Similarly, one study found that only 24% of patients were aware of their right to access their health information. Even users’ willingness to exercise their data rights is mixed: while some people feel that having data rights is important, others see insufficient incentive to exercise them. In practice, many users never exercise their GDPR data rights.

The benefits of exercising data rights are also unclear. Although one study indicates that reminding users of their GDPR rights can curb data-sharing behavior, several others show that providing data control can paradoxically lead to more data disclosure.

Companies’ perspective

Companies also lack knowledge of users’ data rights. For example, a survey of businesses across eight EU countries reveals that only approximately 50-65% of participants correctly answered questions about the rights to access, erasure, and objection. Moreover, developers often show indifference to implementing data rights: in one experiment, out of 448 developers alerted via email about incorrect implementations of data rights in their apps, 334 ignored the message. Of course, not all developers disregard users’ data rights. Many service providers emphasize the importance of enforcing them, although that emphasis does not necessarily translate into compliance with data rights law.

There is evidence that companies’ current implementation of data rights falls short. Privacy policies are one of the most well-known examples: few currently meet all GDPR requirements, and many fail to inform users of rights such as access and erasure, even though European websites tend to state more extensive data rights than websites from other countries.

Implementing data rights poses numerous technical and managerial challenges for businesses, often making compliance daunting. Artificial Intelligence (AI) is frequently cited as one of the most challenging technologies in this regard. The lack of clear guidelines on key issues exacerbates the difficulty. For instance, it remains unclear whether businesses should remove user-deleted data from all AI model training, test, and validation sets or delete the model upon receiving a user’s erasure request. These ambiguities lead to confusion among developers, complicating the path to compliance.
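To make this ambiguity concrete, one strict interpretation of an erasure request is to purge the user’s records from every dataset split before retraining. The sketch below illustrates that reading; the function and data structures are hypothetical illustrations for this summary, not the paper’s proposal or any standard compliance procedure.

```python
# Illustrative sketch of a "full removal" reading of a GDPR erasure
# request: strip the user's records from the training, validation,
# and test sets, then retrain. Structures here are hypothetical.

def purge_user(splits: dict, user_id: str) -> dict:
    """Remove every record belonging to user_id from all splits."""
    return {
        name: [rec for rec in records if rec["user_id"] != user_id]
        for name, records in splits.items()
    }

splits = {
    "train": [{"user_id": "u1", "x": 0.2}, {"user_id": "u2", "x": 0.9}],
    "validation": [{"user_id": "u1", "x": 0.5}],
    "test": [{"user_id": "u3", "x": 0.7}],
}

cleaned = purge_user(splits, "u1")
# Under this interpretation, the model would then be retrained on
# `cleaned`. The stricter alternative reading mentioned above would
# delete the trained model itself rather than just the data.
```

Even this simple sketch shows why guidance is needed: it says nothing about models already trained on the deleted records, which is exactly the question left open for developers.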

Regulators’ perspective

European regulators adopt many strategies to gauge the effectiveness of GDPR data rights. Primarily, they interact with citizens through the complaints they receive, which serve as a valuable resource for assessing user awareness of data rights. However, because complaints come only from users who already know their rights, and considerable evidence shows that many people do not, these efforts alone are insufficient.

Robust enforcement is paramount to protecting data rights: regulators must detect corporations that infringe upon these rights and penalize them through fines and other sanctions. However, many studies reveal that the fines imposed so far have typically been marginal relative to the broader economic and societal repercussions of GDPR noncompliance.

Despite the challenges and unintended consequences, regulators and legal professionals agree that the GDPR has strengthened individual empowerment.

Between the lines

Privacy in the digital age is paramount, yet its protection remains elusive. The emergence of data rights as a facet of human rights is a promising development. Still, as our comprehensive literature analysis reveals, their implementation and acceptance are inconsistent. This inconsistency is further complicated by the varying willingness of individuals to exercise these rights. While some empirical studies highlight the efficacy of data rights in specific scenarios, there is growing skepticism about the overall effectiveness of current rights-based systems.

This inconsistency in the application and effectiveness of data rights is a critical gap in the current digital privacy landscape. It raises important questions about the adaptability of these rights in diverse contexts and their real-world impact on individual privacy. The varying willingness of individuals to exercise data rights also points to a potential disconnect between the legal provisions and public awareness or trust in these systems. Further research is needed to understand the barriers to effective implementation and explore how these rights can be more accessible and impactful for the average user. Additionally, investigating the role of technology, especially AI, in complicating data rights compliance could provide insights into developing more robust privacy protection mechanisms in our increasingly digital world.



  • © MONTREAL AI ETHICS INSTITUTE. All rights reserved 2024.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.