Outsourced & Automated: How AI Companies Have Taken Over Government Decision-Making

October 3, 2023

🔬 Research Summary by Grant Fergusson, an Equal Justice Works Fellow at the Electronic Privacy Information Center (EPIC), where he focuses on AI and automated decision-making systems within state and local government.

[Original paper by Grant Fergusson]


Overview: Across the United States, state and local governments are outsourcing critical government services to private AI companies without any public input, and many of these companies’ AI systems are producing biased and error-prone outputs. This legal research report draws on extensive AI contracting research and interviews with advocates and government employees to evaluate the variety of AI systems embedded within the U.S. government and highlight how government procurement reform can spur responsible AI development and use.


Introduction

The next time you send your children to school, get stuck in traffic, or seek out government services, pay close attention to the technologies you encounter. Are there cameras in your school or along the road? Do you know when you’re talking to a government employee or a chatbot? Or perhaps you’ve encountered government AI systems without realizing it: systems that screen applicants, predict crime, allocate welfare, and more.

Government AI systems are everywhere, and in 2021, the Electronic Privacy Information Center (EPIC) set out to investigate which government decisions AI systems were making, and how well they were performing. EPIC identified 621 AI contracts through a mix of open records requests and research into state contracting databases, then paired that contract research with qualitative interviews of legal advocates and government employees to construct a picture of where AI systems were operating and how they were performing.

The results were clear: across all fifty U.S. states, government agencies are outsourcing critical government decisions to biased and error-prone AI systems, all while funneling millions of U.S. dollars to a handful of private companies.

Key Insights

Across the United States, state and local governments are experimenting with AI tools that outsource and automate important government decisions. These tools assign children to schools, inform medical decisions about patients, impact policing decisions about where to patrol and whom to target, and determine who receives public benefits. And they make these decisions in sometimes discriminatory ways: a wave of new litigation across the country reveals how AI errors disproportionately punish the low-income and marginalized communities most in need of government support.

How did we get here? Facing a mix of austerity measures, hiring challenges, and government modernization efforts, many government agencies have turned to private AI companies promising greater efficiency and cost savings. But AI systems are different from other products and services procured by the government: they displace government decision-making and discretion, often in ways that are difficult to decipher and manage. In this report, EPIC combined extensive research into state AI contracts, interviews with legal advocates and government officials, and independent research into AI oversight mechanisms to offer a first-of-its-kind look at the world of government AI. EPIC’s report highlights (1) the demonstrated risks of government AI systems, (2) the vendors, procurement processes, and contract provisions that have allowed faulty AI systems to propagate throughout government, and (3) reforms that government agencies could pursue to mitigate AI risks and regain control over government decisions.

The Risks of Government AI

Government AI systems, defined as “any system that automates a process, aids human decision-making or replaces human decision-making,” inject three main risks into government programs. 

First, AI systems can produce privacy risks. These risks come from how government agencies and private contractors use (and abuse) your personal data. For example, when a government agency uses AI to allocate public benefits, it needs to give private AI developers access to a large swath of personal data provided to the government, which an AI developer can then use to train its system and produce inferences about those on welfare rolls.

Second, AI systems can produce accuracy risks. An AI system’s accuracy, reliability, and effectiveness depend entirely on the data used to train and operate the system, the analytic technique used to produce system outputs, and the system’s programmed risk tolerance. Without proper safeguards and oversight, AI systems can produce flawed, biased, or overly simplistic outputs.
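
The effect of a system’s programmed risk tolerance is easiest to see with numbers. Below is a minimal, hypothetical sketch (synthetic data and a made-up benefits-screening scenario, not anything from EPIC’s report) of a score-and-threshold model: moving the decision threshold trades wrongly flagged legitimate applicants against missed fraud, so the same underlying model can look strict or lenient depending on a single configuration choice.

```python
# Hypothetical illustration (not from EPIC's report): how a benefits-screening
# model's "risk tolerance" -- its decision threshold -- trades false denials
# against false approvals on the same underlying scores.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic fraud-risk scores: most applicants are legitimate (low scores),
# a small minority are not (higher scores, with heavy overlap between groups).
legit = rng.normal(0.3, 0.15, 9_500)   # 9,500 legitimate applicants
fraud = rng.normal(0.6, 0.15, 500)     # 500 fraudulent applications

for threshold in (0.4, 0.5, 0.6):
    false_denials = np.mean(legit >= threshold)  # legitimate people flagged
    missed_fraud = np.mean(fraud < threshold)    # fraud that slips through
    print(f"threshold={threshold:.1f}  "
          f"false-denial rate={false_denials:.1%}  "
          f"missed-fraud rate={missed_fraud:.1%}")
```

A lower threshold catches more fraud but denies more legitimate applicants; a higher one does the reverse. Neither setting is neutral, which is why the report treats risk tolerance as a policy decision requiring oversight rather than a purely technical parameter.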

Third, AI systems can produce accountability risks. These risks come from how AI systems displace the processes for holding our governments accountable. Traditionally, when an agency official made a decision about you, there were opportunities for public comment, hearings, or a record supporting the decision. When private vendors make these decisions, agencies and the public are left to rely on procurement procedures and vendor disclosures to understand what’s happening.

The AI Vendor Landscape

While several state agencies build and maintain their own AI systems, private companies develop and operate most government AI systems. There are three main reasons why private AI systems have become so commonplace in government:

  1. Modern procurement processes focus on efficiency and cost-savings instead of oversight and transparency, making it easy for vendors to market their AI systems without reporting system limitations or risks.
  2. Many of today’s largest AI companies aggressively market their systems to state agencies and legislatures, creating political pressure to adopt AI systems.
  3. State agencies struggle to attract employees with AI expertise, so when an agency wants to automate or modernize its processes, it must rely blindly on AI vendors.

Moreover, most of the AI contracts EPIC found were signed without a competitive procurement process. Instead, many AI companies coordinate with intermediary companies to pursue “cooperative purchasing agreements”: contracts between one state and one company covering a portfolio of products and services that dozens of other states can then access without going through their own procurement processes. Perhaps more shockingly, a handful of companies receive the lion’s share of public funds from AI contracts: Deloitte, for example, received an estimated $193 million across 16 state contracts.

Recommendations & Reforms

Government agencies and state legislatures still have options to rein in harmful AI systems, and EPIC’s report proposed four recommendations:

  1. Government agencies must establish AI audit procedures to monitor government AI, including training data audits and red-teaming requirements (a minimal sketch of one such audit follows this list).
  2. Procurement officials can include protective contract language in AI contracts to restrict data use, support human review of AI decisions, and increase transparency.
  3. Agencies can improve public disclosures of AI decision-making and empower those harmed by government AI through statutory legal remedies.
  4. Governments can reprioritize investment in non-AI options unless and until they can effectively mitigate AI risks.
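
To make the first recommendation concrete, here is a minimal sketch of one narrow slice of a training data audit: checking whether each demographic group’s share of the training data roughly matches its share of the population the system will serve. The column names, threshold, and pandas-based approach are illustrative assumptions, not anything EPIC’s report prescribes.

```python
# Hypothetical sketch of one piece of a training-data audit: compare each
# demographic group's share of the training data against its share of the
# served population. Column names and the gap threshold are assumptions.
import pandas as pd

def representation_audit(train: pd.DataFrame, population: pd.DataFrame,
                         group_col: str, max_gap: float = 0.05) -> pd.DataFrame:
    """Flag groups whose training share deviates from their population share."""
    report = pd.DataFrame({
        "train_share": train[group_col].value_counts(normalize=True),
        "population_share": population[group_col].value_counts(normalize=True),
    }).fillna(0.0)  # groups absent from one dataset count as a 0% share
    report["gap"] = (report["train_share"] - report["population_share"]).abs()
    report["flagged"] = report["gap"] > max_gap
    return report.sort_values("gap", ascending=False)

# Usage (hypothetical data sources):
#   representation_audit(training_df, census_df, group_col="race")
```

A real audit would go far beyond representation counts (label quality, proxy variables, outcome disparities), but even this simple check is the kind of procedure an agency could require contractually before a vendor’s system goes live.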

Between the lines

Government agencies can be leaders in responsible AI use but lack the funding and training to oversee AI systems effectively. Instead, many U.S. agencies are left to depend on powerful AI companies to manage core government functions without any ethical or regulatory guardrails, and marginalized communities are being harmed as a result.

EPIC’s research is meant to spotlight an overlooked area of AI ethics: government AI use and procurement. It captures a disturbing trend of private AI companies quietly embedding their AI systems into government services, all while public attention focuses on commercial AI use and new generative AI models. However, this report only scratches the surface. There are no doubt other AI contracts out there, both in the United States and worldwide, and there is more work to be done to flesh out what responsible government AI procurement and use look like.

EPIC plans to continue researching government AI systems so legal advocates and AI ethicists have the information they need to combat AI harms and articulate stronger AI oversight mechanisms.

