Montreal AI Ethics Institute

Democratizing AI ethics literacy

The Impact of the GDPR on Artificial Intelligence

February 20, 2022

🔬 Research Summary by Avantika Bhandari, SJD. Her research areas cover indigenous knowledge and its protection, human rights, and intellectual property rights.

[Original paper by European Parliament]


Overview: The report addresses the relationship between the General Data Protection Regulation (GDPR) and Artificial Intelligence (AI). It analyzes how AI is regulated under the GDPR and the extent to which AI fits into the GDPR framework, discussing the tensions and proximities between AI and data protection principles, particularly purpose limitation and data minimization. The report also conducts an in-depth analysis of automated decision-making, the safeguards to be adopted, and whether data subjects have a right to individual explanations.


Introduction

In the last few decades, AI has gone through rapid development. AI can contribute to social, economic, and cultural development, better health care, and the spread of knowledge. However, these opportunities are accompanied by serious risks, including discrimination, exclusion, unemployment, surveillance, and manipulation. AI has evolved significantly since it began to focus on applying machine learning to vast volumes of data: in machine learning applications, AI systems ‘learn to make predictions after being trained on vast sets of examples.’ AI has thus become hungry for data, driving ever more data collection in a self-reinforcing spiral. This study aims to provide a comprehensive assessment of the interactions between artificial intelligence and the principles of the GDPR.

Key Insights

AI in the GDPR: Unlike the Data Protection Directive, the GDPR contains terms referring to the internet (websites, links, and social networks). However, it does not contain the term ‘artificial intelligence,’ nor any terms connected with related concepts such as autonomous systems, intelligent systems, automated reasoning and inference, machine learning, or even big data. Nevertheless, many provisions in the GDPR are relevant to AI.

  1. Article 4(1): Personal Data (identification, identifiability, re-identification): In connection with the GDPR definition of personal data, AI raises two key issues: (i) the ‘re-personalisation’ of anonymous data, namely the re-identification of the individuals to whom such data relate; and (ii) the inference of further personal information from personal data that are already available. Thanks to AI and big data, the identifiability of data subjects has vastly increased.
  2. Article 4(2): Profiling: Although the GDPR does not explicitly refer to AI, profiling addresses processing that is typically accomplished using AI technology: using the data concerning a person to infer information on other aspects of that person.
  3. Article 4(11): Consent: According to the GDPR, consent should be freely given, specific, informed, and unambiguous. Consent plays a crucial role in the traditional understanding of data protection, based on the ‘notice and consent’ model, according to which data protection is aimed at protecting the right to ‘informational self-determination.’
  4. Article 5(1)(b): Purpose limitation: The concept of a purpose establishes a relationship between processing operations and their legal basis. There is a tension between the use of AI and the purpose limitation requirement: these technologies ‘enable the useful reuse of personal data for new purposes’ that are different from those for which the data were originally collected. For example, data collected for contract management can be processed to learn customers’ preferences and then used to send targeted messages. To establish the legitimacy of repurposing data, one needs to determine whether the new purpose is ‘compatible’ or ‘not incompatible’ with the purpose for which the data were originally collected.
  5. Article 5(1)(d): Accuracy: The GDPR requires that data be ‘accurate and, where necessary, kept up to date,’ and that steps be taken to address inaccuracies. This principle also applies when personal data are produced as the output of an AI system, especially when personal data are used to make inferences about the data subject.
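The re-identification risk noted in point 1 is essentially a linkage attack: records stripped of names can be matched to named individuals through quasi-identifiers shared with a public dataset. A minimal sketch in Python, using invented data and hypothetical field names:

```python
# Toy illustration of 're-personalisation': linking "anonymized"
# records back to individuals via quasi-identifiers.
# All records below are invented for illustration.

def reidentify(records, directory):
    """Match anonymized records to named entries that share the same
    quasi-identifiers (zip code, birth date, sex)."""
    index = {(p["zip"], p["birth"], p["sex"]): p["name"] for p in directory}
    linked = {}
    for r in records:
        key = (r["zip"], r["birth"], r["sex"])
        if key in index:
            linked[index[key]] = r["diagnosis"]
    return linked

# "Anonymized" health data: names removed, quasi-identifiers retained.
health_records = [
    {"zip": "02138", "birth": "1945-07-22", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "birth": "1962-03-10", "sex": "M", "diagnosis": "asthma"},
]

# A public directory (e.g. a voter roll) with names and the same fields.
public_directory = [
    {"name": "A. Example", "zip": "02138", "birth": "1945-07-22", "sex": "F"},
    {"name": "B. Sample", "zip": "02139", "birth": "1962-03-10", "sex": "M"},
]

print(reidentify(health_records, public_directory))
# {'A. Example': 'hypertension', 'B. Sample': 'asthma'}
```

The join succeeds even though no names appear in the health data, which is why the report treats identifiability, rather than the mere removal of direct identifiers, as the relevant test for personal data.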

It has been argued that the GDPR is incompatible with AI and big data, given that it is based on principles such as data minimization, purpose limitation, the special treatment of ‘sensitive data,’ and the limitation on automated decisions. However, this report shows that it is likely that the GDPR ‘will be interpreted in such a way as to reconcile both desiderata: protecting data subjects and enabling’ useful applications of AI.

Between the lines

The report suggests that oversight by competent authorities needs to be complemented with the support of civil society. As power relations, collective interests, and societal arrangements are at stake, public debate and the involvement of representative institutions are also needed. The GDPR does not address collective enforcement, and instead relies on individual action by the data subject concerned. Enabling collective actions for injunctions and compensation could prove an effective mechanism of protection.

Some policy proposals on AI and the GDPR:

  • A number of AI-related data protection issues are not addressed in the GDPR, which may lead to uncertainties and costs, and may unnecessarily hamper the development of AI applications.
  • Data subjects and controllers should be provided with guidance on how AI can be applied to personal data consistently with the GDPR, and on the technologies for doing so.
  • The political debate must address what applications are to be barred unconditionally, and which may be applied under specific circumstances.
  • National Data Protection Authorities should also provide recommendations and guidance, in particular when contacted by the controllers or in response to data subjects’ queries.
  • Guidance is also needed on profiling and automated decision-making. 
  • Collective enforcement in the data protection domain should be facilitated. 


About Us


Founded in 2018, the Montreal AI Ethics Institute (MAIEI) is an international non-profit organization equipping citizens concerned about artificial intelligence and its impact on society to take action.

© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.