Montreal AI Ethics Institute


Democratizing AI ethics literacy


Research summary: Mass Incarceration and the Future of AI

August 9, 2020

Summary contributed by our researcher Alexandrine Royer, who works at The Foundation for Genocide Education.

*Authors of the full paper and a link to it are at the bottom


Mini-summary: The US, home to a staggering 25% of the world’s prison population, has been called the incarceration nation. For millions of Americans, background checks obstruct social mobility and perpetuate the stigma around criminal records. The digitization of individual records and the growing use of online background checks will create more automated barriers and biases that prevent equal access to employment, healthcare, education, and housing. The ease with which the sensitive data in a criminal record can be accessed and distributed raises the question of whether public safety overrides principles of privacy and human dignity. In this preliminary discussion paper, the authors address the urgency of regulating background screening and invite further research on questions of data access, individual rights, and standards for data integrity.

Full summary:

The integration of AI decision-making into the US criminal justice system, and the biases within these systems, has sparked numerous controversies. Automated bias in the courts, policing, and law enforcement agencies will affect the lives of hundreds of millions of people: in the US, one in three individuals lives with a criminal record. The over-incarceration of the country’s population has fueled government agencies’ drive to amass and track data on these citizens from aggregated sources. Arrest and/or conviction records, made readily available to background screening tools under the guise of public safety, are lifelong shackles that restrict social mobility and access to employment, housing, and educational opportunities. Hodge and Leonard hope to start a discussion on where the line lies between an individual’s human rights and the information necessary for society’s well-being.

Hodge and Leonard, a mother-daughter duo, speak from personal experience: Hodge served a 78-month sentence in federal prison before becoming a criminal justice advocate. Drawing on Weber’s social theory, they point to the creation of an underclass in American society, in which people living with criminal records cannot fully compete in the open labour market and face open stigmatization both offline and online. The rise in criminal background checks, driven by factors such as the fear of terrorism, the growth of the gig economy, and the push to digitize government records, accentuates the divide between this underclass and the rest of American society. Individuals with criminal records are unable to dispute privacy-infringing information spread and purchased online, and they are given no indication of what information will surface in an online search. With nearly half of the FBI’s background checks failing to indicate the outcome of a case following an arrest, individuals with dismissed charges and no convictions face unjust prejudice.

The paper works to stimulate a public debate on questions of access to data, individual rights to privacy and dignity, and quality standards for the data being shared, and it touches on building mechanisms to ensure those standards are met. The modernization of government databases into online records happened before parameters for access to low-level data and the integrity of source data were put in place. People living with criminal records already face around 50,000 known sanctions and restrictions; unchecked background screening tools built on incomplete and sometimes inaccurate data will only increase that number.

Equality, opportunity, human dignity, and respect for privacy are often cited as the core values of American society, yet automated online background checks impede these rights and freedoms. Individuals with criminal records need their data rights applied in practice and shared ownership over their personal information. Without these protections, background checks will continue to obstruct upward mobility and perpetuate the “life sentence” suffered by individuals with records. Policymakers tackling how data is collected, stored, and shared must include the voices of vulnerable populations in their decisions.


Original paper by Teresa Y. Hodge and Laurin Leonard: https://carrcenter.hks.harvard.edu/files/cchr/files/CCDP_2020-009.pdf?m=1595005129

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.