Montreal AI Ethics Institute


Research summary: Mass Incarceration and the Future of AI

August 9, 2020

Summary contributed by our researcher Alexandrine Royer, who works at The Foundation for Genocide Education.

*Authors of full paper & link at the bottom


Mini-summary: The US, with a staggering 25% of the world’s prison population, has been called the incarceration nation. For millions of Americans, background checks obstruct social mobility and perpetuate the stigma around criminal records. The digitization of individual records and the growing use of online background checks will create more automated barriers and biases that prevent equal access to employment, healthcare, education and housing. The ease with which the sensitive data in an individual’s criminal record can be accessed and distributed raises a debate over whether public safety overrides principles of privacy and human dignity. In this preliminary discussion paper, the authors address the urgency of regulating background screening and invite further research on questions of data access, individual rights and standards for data integrity.

Full summary:

The integration of AI decision-making into the US criminal justice system, and the biases within these systems, has sparked numerous controversies. Automated bias in the courts, policing and law enforcement agencies will impact the lives of hundreds of millions of people. In the US, one in three individuals lives with a criminal record. The over-incarceration of the country’s population has fueled government agencies’ drive to amass and track data on these citizens from aggregated sources. Arrest and/or conviction records, made readily available to background screening tools under the guise of public safety, are lifelong shackles that restrict social mobility and access to employment, housing and educational opportunities. Hodge & Leonard hope to start a discussion on where the line falls between an individual’s human rights and the information necessary for society’s well-being.

Hodge and Leonard, a mother-daughter duo, speak from personal experience. Hodge served a 78-month sentence in federal prison before becoming a criminal justice advocate. Drawing on Weber’s social theory, they point to the creation of an underclass in American society, in which people living with criminal records cannot fully compete in the open labour market and face open stigmatization both offline and online. The rise in criminal background checks, driven by factors such as fear of terrorism, the growth of the gig economy and the push to digitize government records, serves to accentuate the divide between this underclass and the rest of American society. Individuals with criminal records are unable to dispute privacy-infringing information spread and purchased online. Nor are they given any indication of what types of information will appear in an online search. With nearly half of the FBI’s background checks failing to indicate the outcome of a case after an arrest, individuals with dismissed charges and no convictions face unjust prejudice.

The paper works to stimulate a public debate on questions of access to data, individual rights of privacy and dignity, and the setting of quality standards for the data being shared. It touches on building mechanisms to ensure that these standards are met. The modernization of government databases into online records happened before parameters for access to low-level data and the integrity of source data were put in place. People living with criminal records already face around 50,000 known sanctions and restrictions; unchecked background screening tools built on incomplete and sometimes inaccurate data will only increase that number.

Equality, opportunity, human dignity and respect for privacy are often cited as the core values of American society, yet automated online background checks impede these rights and freedoms. Individuals with criminal records need to have their data rights applied in practice and to hold shared ownership over their personal information. Without these protections, background checks will continue to obstruct upward mobility and perpetuate the “life sentence” suffered by individuals with records. Policymakers tackling how data is collected, stored and shared must include the voices of vulnerable populations in their decisions.


Original paper by Teresa Y. Hodge and Laurin Leonard: https://carrcenter.hks.harvard.edu/files/cchr/files/CCDP_2020-009.pdf?m=1595005129

