A Look at the American Data Privacy and Protection Act

May 31, 2023

✍️ Column by Max Krueger, a consultant at Accenture with an interest in both the long- and short-term implications of AI on society.

[Original document by Frank Pallone (D-NJ-06), Cathy McMorris Rodgers (R-WA-05), Jan Schakowsky (D-IL-09), Gus Bilirakis (R-FL-12)]


Overview: Data privacy is finally getting attention at the federal level in the US. This past summer, bipartisan lawmakers drafted the American Data Privacy and Protection Act (ADPPA) to provide consumers with data privacy rights and protections. While passage of the bill is unlikely this session, there is clear interest at the federal level in codifying consumer data protections.


Introduction

Federal data protections for consumers in the United States have been lacking for the past two decades. In the summer of 2022, bipartisan lawmakers introduced legislation to provide consumer data protections. The bill broadly focuses on three areas: 1) Consumer Data Rights, 2) Corporate Accountability, and 3) Enforcement. Additionally, the bill seeks to establish a Duty of Loyalty between consumers and entities using or collecting data. In a country grappling with how to regulate “Big Tech,” this represents landmark legislation. To get a pulse on what Capitol Hill is thinking, let’s dive into the ADPPA.

Key Insights

Duty of Loyalty

The Duty of Loyalty establishes the covered entity’s responsibility to the consumer. The ADPPA states, “Covered entities are prohibited from collecting, processing, or transferring covered data beyond what is reasonably necessary.” Covered data includes information that can be linked to an individual directly or when combined with additional data. This extends to data generated or connected to a person’s device(s). Further restrictions are placed on specific types of covered data, including biometric information, genetic information, aggregated internet browsing and search history, physical activity information, precise geolocation information, Social Security numbers, password information, and nonconsensual intimate images. Additionally, covered entities must incorporate privacy-by-design practices into their data pipelines with an extra focus on persons under the age of 17. 
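How a covered entity might operationalize this duty is left to implementers. Below is a minimal Python sketch of purpose-based data minimization, assuming a hypothetical per-purpose allow-list of “reasonably necessary” fields; the names, purposes, and field lists are invented for illustration and do not come from the bill itself.

    # Hypothetical illustration of data minimization under a duty of loyalty.
    # Field names and purposes are invented, not drawn from the ADPPA.

    SENSITIVE_CATEGORIES = {
        "biometric_data", "genetic_data", "browsing_history",
        "precise_geolocation", "ssn", "password", "health_activity",
    }

    # Per-purpose allow-lists of fields deemed "reasonably necessary."
    NECESSARY_FIELDS = {
        "order_fulfillment": {"name", "shipping_address", "email"},
        "fraud_detection": {"email", "payment_token"},
    }

    def minimize(record: dict, purpose: str) -> dict:
        """Drop every field that is not reasonably necessary for the purpose."""
        allowed = NECESSARY_FIELDS.get(purpose, set())
        kept = {field: value for field, value in record.items() if field in allowed}
        # Sensitive categories face further restrictions and should never
        # slip through a generic purpose allow-list.
        if kept.keys() & SENSITIVE_CATEGORIES:
            raise ValueError("Sensitive data requires a separate, documented basis.")
        return kept

    # Example: only the allow-listed fields survive.
    minimize({"name": "A", "ssn": "000-00-0000", "email": "a@example.com"},
             purpose="order_fulfillment")  # -> {"name": "A", "email": "a@example.com"}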

Consumer Data Rights

A key provision of the ADPPA is transparency. Covered entities must provide consumers with information on how they collect, process, and transfer data. This includes information about the third-party entities to which data might be transferred and whether any data handled is made available in China, North Korea, Russia, or Iran. Covered entities must also inform individuals how they may exercise their rights and how long their data will be retained. Finally, large data holders must provide short-form notices of their covered data practices according to minimum requirements established in FTC regulations under the APA.

An essential piece of this legislation concerns individual ownership of data. The ADPPA states, “Individuals have the right to access, correct, delete, and portability of covered data that pertains to them.” Covered entities are obligated to delete or update data upon individual request and to inform other covered entities of such changes.
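As a rough illustration of how these four rights might map onto a covered entity’s systems, here is a hedged Python sketch built around a hypothetical in-memory store. The class and method names are invented for the example; the ADPPA does not prescribe any particular interface.

    import json

    class ConsumerDataStore:
        """Hypothetical store mapping individuals to their covered data."""

        def __init__(self):
            self._records = {}  # user_id -> dict of covered data

        def access(self, user_id):
            """Right to access: return the covered data held on the individual."""
            return dict(self._records.get(user_id, {}))

        def correct(self, user_id, updates):
            """Right to correct: apply the individual's requested changes."""
            self._records.setdefault(user_id, {}).update(updates)
            # The Act also requires informing other covered entities that
            # received the data; that notification step is not modeled here.

        def delete(self, user_id):
            """Right to delete: remove the individual's covered data."""
            self._records.pop(user_id, None)

        def export(self, user_id):
            """Right to portability: return the data in a portable format."""
            return json.dumps(self.access(user_id), indent=2)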

Civil Rights and Algorithms

Algorithmic bias is well documented in society. The ADPPA seeks to address this issue by requiring large data holders to assess their algorithms annually and submit annual algorithmic impact assessments to the FTC. Within the assessment, entities must explain how they will mitigate algorithmic harms, explicitly detailing how they will minimize harms against those under 17. Further precautions must be taken:

These assessments must also seek to mitigate algorithmic harms related to advertising for housing, education, employment, healthcare, insurance, or credit, access to or restrictions on places of public accommodation, and any disparate impact on the basis of an individual’s race, color, religion, national origin, gender, sexual orientation, or disability status.

Additionally, assessments must occur in the design phase and use an independent auditor if possible.
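To show what such an assessment might capture in practice, here is a minimal Python sketch of an assessment record whose fields mirror the harm categories listed above. The structure and field names are hypothetical; the bill does not specify a filing format.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class AlgorithmicImpactAssessment:
        """Hypothetical record mirroring the harm categories named in the bill."""
        system_name: str
        year: int
        began_in_design_phase: bool          # assessments must start at design time
        independent_auditor: Optional[str]   # auditor engaged, where possible
        # Mitigations for harms to individuals under 17.
        minor_harm_mitigations: List[str] = field(default_factory=list)
        # Mitigations for advertising harms in housing, education, employment,
        # healthcare, insurance, credit, and public accommodation.
        advertising_harm_mitigations: List[str] = field(default_factory=list)
        # Findings on disparate impact by race, color, religion, national
        # origin, gender, sexual orientation, or disability status.
        disparate_impact_findings: List[str] = field(default_factory=list)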

Enforcement

The ADPPA charges the FTC with creating a new bureau responsible for enforcing and monitoring various provisions within the Act. Included in this new bureau is an office of business mentorship to help entities meet the requirements of the Act. Further, a relief fund will be established to compensate individuals for harm. A key provision within the Act gives individuals the right to bring litigation against covered entities if they believe they have been harmed. This private right of action kicks in after four years, and individuals must notify the FTC and their state attorney general before filing suit.

Conclusion

The ADPPA represents a step closer to protecting consumers against exploitative data collection practices. While the legislation is broad, key areas include

  • creating a duty of loyalty for covered entities,
  • allowing individuals to sue covered entities, and
  • creating a new bureau within the FTC to enforce the law.

Additionally, the bill would provide more transparency into the potential impact of algorithms and how companies think about bias and the harms their algorithms could generate. While this legislation will likely change significantly before passing into law, it is encouraging to see draft federal legislation focused on an area in desperate need of regulation.

Between the lines

The ADPPA represents the first step on the long road to federal consumer data protections in the United States. Regulation is needed as personal data becomes more prevalent and more accessible. Consumers must be able to exercise their privacy rights easily; for example, a centralized platform would need to be created to allow consumers to request their data and have it removed if desired. This is easier said than done.

Additionally, the lengthy privacy policies commonplace today must be replaced with succinct, simple privacy disclosures. Two of the more exciting aspects of the ADPPA are 1) the ability of individuals to bring legal action against covered entities and 2) required impact assessments. In theory, allowing individuals to sue may hold corporations more accountable and help regulate how algorithms are used “in the wild.” The required impact assessments, if made public, will provide valuable information on how corporations are accounting for algorithmic harms. This can increase corporate accountability and add to the institutional knowledge base in a fast-changing industry. The ADPPA has a lot of intriguing and impactful language. However, its ultimate effectiveness will rely on enforcement and the ease by which consumers can exercise their rights within the bill.

