🔬 Research Summary by Grant Fergusson, an Equal Justice Works Fellow at the Electronic Privacy Information Center (EPIC), where he focuses on AI and automated decision-making systems within state and local government.
[Original paper by Grant Fergusson]
Overview: Across the United States, state and local governments are outsourcing critical government services to private AI companies without any public input–and many of these companies’ AI systems are producing biased and error-prone outputs. This legal research report draws on extensive AI contracting research and interviews with advocates and government employees to survey the variety of AI systems embedded in U.S. state and local government and to highlight how procurement reform can spur responsible AI development and use.
Introduction
The next time you send your children to school, get stuck in traffic, or seek out government services, pay close attention to the technologies you encounter. Are there cameras in your school or along the road? Do you know when you’re talking to a government employee rather than a chatbot? Perhaps you’ve already encountered government AI systems without realizing it: systems that screen applicants, predict crime, allocate welfare benefits, and more.
Government AI systems are everywhere, and in 2021, the Electronic Privacy Information Center (EPIC) set out to investigate which government decisions AI systems were making–and how well they were making them. EPIC identified 621 AI contracts through a mix of open records requests and research into state contracting databases, then paired that contract research with qualitative interviews with legal advocates and government employees to construct a picture of where AI systems were operating and how well they were performing.
The results were clear: across all fifty U.S. states, government agencies are outsourcing critical government decisions to biased and error-prone AI systems, all while funneling millions of U.S. dollars to a handful of private companies.
Key Insights
Across the United States, state and local governments are experimenting with AI tools that outsource and automate important government decisions. These tools assign children to schools, inform medical decisions about patients, shape policing decisions about where to patrol and whom to target, and determine who receives public benefits. And they sometimes make these decisions in discriminatory ways: a wave of new litigation across the country reveals how AI errors disproportionately punish the low-income and marginalized communities most in need of government support.
How did we get here? Facing a mix of austerity measures, hiring challenges, and government modernization efforts, many government agencies have turned to private AI companies promising greater efficiency and cost savings. But AI systems are different from other products and services procured by the government: they displace government decision-making and discretion, often in ways that are difficult to decipher and manage. In this report, EPIC combined extensive research into state AI contracts, interviews with legal advocates and government officials, and independent research into AI oversight mechanisms to offer a first-of-its-kind look at the world of government AI. EPIC’s report highlights (1) the demonstrated risks of government AI systems, (2) the vendors, procurement processes, and contract provisions that have allowed faulty AI systems to propagate throughout government, and (3) reforms that government agencies could pursue to mitigate AI risks and regain control over government decisions.
The Risks of Government AI
Government AI systems, defined as “any system that automates a process, aids human decision-making or replaces human decision-making,” inject three main risks into government programs.
First, AI systems can produce privacy risks. These risks come from how government agencies and private contractors use (and abuse) your personal data. For example, when a government agency uses AI to allocate public benefits, it needs to give the private AI developer access to a large swath of personal data provided to the government, which the developer can then use to train its system and produce inferences about people on the welfare rolls.
Second, AI systems can produce accuracy risks. An AI system’s accuracy, reliability, and effectiveness depend entirely on the data used to train and operate the system, the analytic technique used to produce system outputs, and the system’s programmed risk tolerance. Without proper safeguards and oversight, AI systems can produce flawed, biased, or overly simplistic outputs.
Third, AI systems can produce accountability risks. These risks come from how AI systems displace the processes for holding our governments accountable. Traditionally, when an agency official made a decision about you, there were opportunities for public comment, hearings, or a record supporting the decision. When private vendors make these decisions, agencies and the public are left to rely on procurement procedures and vendor disclosures to understand what’s happening.
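To make the accuracy risk concrete, the short sketch below is a hypothetical illustration (not drawn from EPIC’s report) of how a single vendor-chosen “risk tolerance” setting, modeled here as a simple decision threshold, changes which errors a benefits-screening system makes; the function name and scores are invented for illustration.

```python
# Hypothetical example: a vendor's "risk tolerance" expressed as a
# decision threshold on a fraud score for benefits applications.

def flag_for_review(fraud_score: float, threshold: float) -> bool:
    """Flag an application as potentially fraudulent if its score exceeds the threshold."""
    return fraud_score > threshold

# Toy fraud scores produced by some upstream model for five applicants.
applicants = {"A": 0.15, "B": 0.40, "C": 0.55, "D": 0.70, "E": 0.90}

for threshold in (0.8, 0.5, 0.3):
    flagged = [name for name, score in applicants.items()
               if flag_for_review(score, threshold)]
    print(f"threshold={threshold}: flagged {flagged}")

# Lowering the threshold flags more applicants: fewer missed cases of
# fraud, but more eligible people wrongly delayed or denied. That
# tradeoff is a policy choice, yet it is often set by the vendor rather
# than the agency.
```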
The AI Vendor Landscape
While several state agencies build and maintain their own AI systems, private companies develop and operate most government AI systems. There are three main reasons why private AI systems have become so commonplace in government:
- Modern procurement processes focus on efficiency and cost-savings instead of oversight and transparency, making it easy for vendors to market their AI systems without reporting system limitations or risks.
- Many of today’s largest AI companies aggressively market their systems to state agencies and legislatures, creating political pressure to adopt AI systems.
- State agencies struggle to attract employees with AI expertise, so when an agency wants to automate or modernize its processes, it must rely blindly on AI vendors.
Moreover, most of the AI contracts EPIC found were signed without a competitive procurement process. Instead, many AI companies work with intermediary companies to pursue “cooperative purchasing agreements”: contracts between one state and one company covering a portfolio of products and services that dozens of other states can then access without running their own procurement processes. Perhaps more strikingly, a handful of companies receive the lion’s share of public funds from AI contracts: Deloitte, for example, received an estimated $193 million across 16 state contracts.
Recommendations & Reforms
Government agencies and state legislatures still have options to rein in harmful AI systems, and EPIC’s report proposed four recommendations:
- Government agencies must establish AI audit procedures to monitor government AI, including training data audits and red-teaming requirements (a minimal sketch of one such audit check appears after this list).
- Procurement officials can include protective contract language in AI contracts to restrict data use, support human review of AI decisions, and increase transparency.
- Agencies can improve public disclosures of AI decision-making and empower those harmed by government AI through statutory legal remedies.
- Governments can reprioritize investment in non-AI options unless and until they can effectively mitigate AI risks.
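As referenced in the first recommendation, the sketch below is a minimal, hypothetical example of one check a training data audit might run; the records, labels, and group names are invented and do not come from EPIC’s report. It compares how often each demographic group is labeled “high risk” in the data used to train a screening model, since large gaps between groups would warrant review before an agency accepts a vendor’s system.

```python
# Hypothetical training-data audit check: compare "high risk" label
# rates across demographic groups in a vendor's training data.
from collections import defaultdict

# Toy training records: (demographic_group, label)
training_data = [
    ("group_a", "high_risk"), ("group_a", "low_risk"), ("group_a", "low_risk"),
    ("group_b", "high_risk"), ("group_b", "high_risk"), ("group_b", "low_risk"),
]

counts = defaultdict(lambda: {"high_risk": 0, "total": 0})
for group, label in training_data:
    counts[group]["total"] += 1
    if label == "high_risk":
        counts[group]["high_risk"] += 1

for group, c in counts.items():
    rate = c["high_risk"] / c["total"]
    print(f"{group}: {rate:.0%} of records labeled high risk ({c['high_risk']}/{c['total']})")
```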
Between the lines
Government agencies can be leaders in responsible AI use but lack the funding and training to oversee AI systems effectively. Instead, many U.S. agencies are left to depend on powerful AI companies to manage core government functions without any ethical or regulatory guardrails, and marginalized communities are being harmed as a result.
EPIC’s research is meant to spotlight an overlooked area of AI ethics: government AI use and procurement. It captures a disturbing trend of private AI companies quietly embedding their AI systems into government services, all while public attention focuses on commercial AI use and new generative AI models. However, this report only scratches the surface. There are no doubt other AI contracts out there, both in the United States and worldwide, and there is more work to be done to flesh out what responsible government AI procurement and use look like.
EPIC plans to continue researching government AI systems so legal advocates and AI ethicists have the information they need to combat AI harms and articulate stronger AI oversight mechanisms.