Algorithmic Harms in Child Welfare: Uncertainties in Practice, Organization, and Street-level Decision-Making

September 25, 2023

🔬 Research Summary by Devansh Saxena, a Presidential Postdoctoral Fellow at Carnegie Mellon University in the Human-Computer Interaction Institute. He studies sociotechnical practices of decision-making in the public sector and the social impacts of introducing AI in these contexts.

[Original paper by Devansh Saxena and Shion Guha]


Overview: Algorithmic decision-support (ADS) systems are increasingly being adopted by public sector agencies to make high-stakes decisions about citizens, promising to allocate resources efficiently, remove human bias from the process, and produce consistent decisions across cases. We spent two years at a child welfare agency, observing meetings where critical decisions are made and speaking with caseworkers to gather their perspectives on ADS systems. We learned that the interplay between systemic mechanics (i.e., policies, resource constraints, nuances of labor) and ADS systems can lead to process-oriented harms that adversely affect the fairness of the decision-making process itself.


Introduction

ADS systems are increasingly utilized in various government sectors, including child welfare, criminal justice, public education, unemployment services, and homeless services. The aim is to enhance public service delivery and allocate resources more effectively to the cases in greatest need. These algorithms influence decisions such as which child maltreatment cases to investigate, who should receive preventive services, where additional policing is required, and who is eligible for public housing and unemployment benefits. In child welfare, public scrutiny and media attention have intensified, both over the removal of children from their parents’ care and over cases where the system failed to protect children from abuse. This has increased the pressure on child welfare agencies to use algorithmic systems, even though evidence suggests that these systems disproportionately affect low-income and minority communities. Critics have gathered substantial evidence of the harmful consequences of haphazardly deploying ADS systems in the public sector. These algorithmic harms, often disparate in their impact, have led scholars to create a taxonomy of sociotechnical harms and to challenge misconceptions about ADS system performance. In this study, we draw attention to harms that are harder to pin down to specific disparate outcomes, policy or design decisions, or workers’ training, but that adversely impact the fairness of the decision-making process itself. We unpack these process-oriented harms and their impact on professional practices, agency administration, and street-level decision-making in child welfare.

Key Insights

Some Context

In a two-year study at a child welfare agency, we found that caseworkers relied heavily on algorithmic tools for high-stakes decisions, such as evaluating a child’s mental health needs, selecting foster caregivers, determining foster parent compensation, and assessing sex-trafficking risks. Despite lacking statistical training, caseworkers were legally obligated to use these algorithms. However, the algorithms often disregarded systemic constraints, rendering them impractical and frustrating for social work practice. Notably, the CANS (Child and Adolescent Needs and Strengths) algorithm assessed a child’s mental health needs and risks, influencing foster care placements and compensation. Caseworkers often manipulated CANS scores to boost foster parent pay because of a shortage of quality foster homes, eroding trust in algorithmic decision-making over time.

Algorithmic Harms to Social Work Practice and Child Welfare Agencies

The child welfare system faces persistent issues of inconsistent decision-making and high caseworker turnover, with many caseworkers leaving within their first two years. Research indicates that it takes about two years for caseworkers to become proficient in their roles, which involve interacting with various stakeholders and understanding child welfare practices. Inexperienced caseworkers rely on algorithms like CANS, assuming their objectivity. However, studies show that biases can seep into several of these systems through inherently subjective variables, such as parents’ cooperation with the agency and stress level, recorded without their input and without evaluating the effectiveness of the agency’s intervention. Ironically, algorithms themselves contribute to caseworker turnover through frustrations with such adversarial practices. The discourse on algorithms in the public sector often overlooks the extensive data collection that workers are asked to carry out as a means to improve decision-making. Caseworkers also expressed frustration with the CANS algorithm, which demands this data collection while stripping them of decision-making power. Predictive systems in high-stakes domains are designed to extract discretionary power from workers, replacing it with probabilistic outcomes. Human-in-the-loop algorithmic solutions are often proposed here, but inexperienced workers are just as likely to make mistakes.

At this agency, serious data provenance concerns exist regarding the data collected about children through the CANS algorithm, which is heavily manipulated by both caseworkers and foster parents. CANS was repurposed to calculate foster parent compensation, with the aim of allocating resources fairly and reducing costs. However, data specialists uncovered cases of algorithm gaming that led to continually increasing compensation. Additionally, inflated CANS scores send foster children to unnecessary services, straining an underfunded system. This exacerbates barriers to evidence-based decision-making and requires caseworkers to take on added labor to address the disruption to decision-making processes. Caseworkers are obligated to support AI systems yet lack control over critical processes. Ethical concerns arise when algorithms like CANS, originally designed for mental health assessments, are repurposed in this way; public sector agencies may feel compelled to adopt such tools for the sake of innovation or for political reasons, further complicating their decision-making ecosystem.

Between the Lines

In this study, we show how functionality issues in ADS systems can lead to process-oriented harms, adversely impacting the nature of professional practice and administration at the agency and leading to inconsistent and unreliable street-level decision-making. Caseworkers are forced to assume the added labor resulting from these algorithmic harms and must conduct repair work to address the disruption caused to administrative processes and street-level decision-making. This repair work takes place within the bounds of organizational pressures, time constraints, and limited resources, making it difficult to properly locate and assess process-oriented harms.

