Montreal AI Ethics Institute

Democratizing AI ethics literacy

Algorithmic Harms in Child Welfare: Uncertainties in Practice, Organization, and Street-level Decision-Making

September 25, 2023

🔬 Research Summary by Devansh Saxena, a Presidential Postdoctoral Fellow at Carnegie Mellon University in the Human-Computer Interaction Institute. He studies sociotechnical practices of decision-making in the public sector and the social impacts of introducing AI in these contexts.

[Original paper by Devansh Saxena and Shion Guha]


Overview: Algorithmic decision-support (ADS) systems are increasingly being adopted by public sector agencies to make high-stakes decisions about citizens, as they promise to allocate resources efficiently, remove human bias from the process, and produce consistent decisions across cases. We spent two years at a child welfare agency, observing meetings where critical decisions are made and speaking with caseworkers to gather their perspectives on ADS systems. We learned that the interplay between systemic mechanics (i.e., policies, resource constraints, nuances of labor) and ADS systems can lead to process-oriented harms that adversely affect the fairness of the decision-making process itself.


Introduction

ADS systems are increasingly utilized across government sectors, including child welfare, criminal justice, public education, unemployment services, and homeless services. The aim is to enhance public service delivery and allocate resources more effectively to the cases in greatest need. These algorithms influence decisions such as which child maltreatment cases to investigate, who should receive preventive services, where additional policing is required, and who is eligible for public housing and unemployment benefits. In child welfare, public scrutiny and media attention have intensified over cases where children were removed from their parents’ care and cases where the system failed to protect children from abuse. This has increased the pressure on child welfare agencies to use algorithmic systems, even though evidence suggests that these systems disproportionately affect low-income and minority communities. Critics have gathered substantial evidence of the harmful consequences of haphazardly deploying ADS systems in the public sector. These algorithmic harms, often disparate in their impact, have led scholars to create a taxonomy of sociotechnical harms and to challenge misconceptions about ADS system performance. In this study, we draw attention to harms that are harder to pin down in specific disparate outcomes, policy or design decisions, or workers’ training, but that adversely impact the fairness of the decision-making process itself. We unpack these process-oriented harms and their impact on professional practices, agency administration, and street-level decision-making in child welfare.

Key Insights

Some Context

In a two-year study at a child welfare agency, we found that caseworkers relied heavily on algorithmic tools for high-stakes decisions, such as evaluating a child’s mental health needs, selecting foster caregivers, determining foster parent compensation, and assessing sex-trafficking risks. Despite lacking statistical training, caseworkers were legally obligated to use these algorithms. However, the algorithms often disregarded systemic constraints, rendering them impractical and frustrating for social work practice. Notably, the CANS (Child and Adolescent Needs and Strengths) algorithm assessed a child’s mental health needs and risks, influencing foster care placements and compensation. Because of a shortage of quality foster homes, caseworkers often manipulated CANS scores to boost foster parent pay, eroding trust in algorithmic decision-making over time.
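To make this scoring-and-compensation dynamic concrete, below is a minimal, hypothetical sketch in Python. CANS items are rated on a 0–3 needs scale; everything else here (the item names, tier thresholds, and dollar amounts) is invented for illustration and does not reflect the agency’s actual configuration. The sketch only shows how inflating a couple of item ratings can push a child into a higher-paying compensation tier.

```python
# Hypothetical illustration of a CANS-style scoring pipeline.
# Item names, tier thresholds, and dollar amounts are invented for
# illustration; they are NOT the agency's actual configuration.

CANS_ITEMS = ["trauma", "anxiety", "school_functioning", "sleep", "attachment"]

# Hypothetical monthly compensation tiers keyed by total need score.
COMPENSATION_TIERS = [
    (0, 450),    # total score 0-3: basic rate
    (4, 700),    # total score 4-7: moderate needs
    (8, 1000),   # total score 8+: high needs
]

def total_score(ratings: dict[str, int]) -> int:
    """Sum item ratings; each CANS item is rated 0 (no need) to 3 (immediate action)."""
    return sum(ratings.get(item, 0) for item in CANS_ITEMS)

def monthly_compensation(ratings: dict[str, int]) -> int:
    """Map the total need score onto the highest tier whose threshold it meets."""
    score = total_score(ratings)
    rate = COMPENSATION_TIERS[0][1]
    for threshold, amount in COMPENSATION_TIERS:
        if score >= threshold:
            rate = amount
    return rate

# An honest assessment vs. the same assessment with two items inflated by one point each.
honest = {"trauma": 1, "anxiety": 1, "school_functioning": 1, "sleep": 0, "attachment": 0}
inflated = {**honest, "anxiety": 2, "sleep": 1}

print(total_score(honest), monthly_compensation(honest))      # 3, 450
print(total_score(inflated), monthly_compensation(inflated))  # 5, 700
```

Because the same score drives both service referrals and payment, even small inflation of this kind propagates into the unnecessary services and rising costs described later in this summary.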

Algorithmic Harms to Social Work Practice and Child Welfare Agencies

The child welfare system faces persistent problems of inconsistent decision-making and high caseworker turnover, with many caseworkers leaving within their first two years. Research indicates that it takes about two years for caseworkers to become proficient in their roles, which involve interacting with various stakeholders and understanding child welfare practices. Inexperienced caseworkers rely on algorithms like CANS, assuming their objectivity. However, studies show that biases can seep into several of these systems through inherently subjective variables, such as a parent’s cooperation with the agency or stress level, recorded without the family’s input and without evaluating the effectiveness of the agency’s own interventions. Ironically, algorithms contribute to caseworker turnover because of frustrations with such adversarial practices. The discourse on algorithms in the public sector frames them as a means to improve decision-making but often overlooks the extensive data collection they require from workers. Caseworkers also expressed frustration with the CANS algorithm, which demands data collection while stripping them of decision-making power. Predictive systems in high-stakes domains are designed to extract discretionary power from workers, replacing it with probabilistic outcomes. Human-in-the-loop algorithmic solutions are often proposed here, but inexperienced workers are just as likely to make mistakes.

At this agency, serious data provenance concerns exist regarding the data collected about children through the CANS algorithm, which is heavily manipulated by both caseworkers and foster parents. CANS was repurposed to calculate foster parent compensation, with the aim of allocating resources fairly and reducing costs. However, data specialists uncovered cases of algorithm gaming that led to continually increasing compensation. In addition, inflated CANS scores send foster children to unnecessary services, straining an underfunded system. This exacerbates barriers to evidence-based decision-making and requires caseworkers to provide added labor to address the disruption to decision-making processes. Caseworkers are obligated to support AI systems but lack control over critical processes. Ethical concerns arise when an algorithm like CANS, originally designed for mental health assessment, is repurposed in this way, and public sector agencies may feel compelled to adopt such tools for innovation and political reasons, further complicating their decision-making ecosystem.

Between the Lines

In this study, we show how functionality issues in ADS systems can lead to process-oriented harms, adversely impacting the nature of professional practice and administration at the agency and leading to inconsistent and unreliable street-level decision-making. We show how caseworkers are forced to assume the added labor resulting from these algorithmic harms and must conduct repair work to address the disruption caused to administrative processes and street-level decision-making. This repair work is conducted within the bounds of organizational pressures, time constraints, and limited resources, making it difficult to properly locate and assess process-oriented harms.
