Montreal AI Ethics Institute


Policy Brief: AI’s Promise and Peril for the U.S. Government (Research summary)

September 28, 2020

Summary contributed by our researcher Connor Wright (Philosophy, University of Exeter)

*Link to original paper + authors at the bottom.


Mini-summary: Having studied multiple federal agencies across the U.S., the authors distil five main findings from their research, ranging from the uptake of Artificial Intelligence (AI) systems in government to the ability of AI to exacerbate social inequalities if mismanaged. Together, these findings offer a concise and clear picture of the effects of AI on the U.S. government. How the government acts to instil the norms of legal explainability, non-discrimination, and transparency will shape how the AI future of the U.S. is defined.


Full summary:

The influence of Artificial Intelligence (AI) in government procedures has the potential not only to reduce costs and increase efficiency, but also to help the government make fairer decisions. The authors of this paper have thus surveyed federal agencies across the U.S. to see whether AI is being used at all and, if so, how it is being implemented. Their study produced five main findings, which I shall introduce and expand on now.

1. Nearly half (45%) of federal agencies had implemented some form of AI toolkit:

Federal agencies have been taking advantage of the benefits AI brings, using the technology for tasks such as customer communication and extracting insights from the government’s vast data streams. In this sense, AI is slowly becoming the norm in the federal sphere. However, how do the public and private sectors compare?

2. The public sector lacks the technological sophistication possessed by the private sector:

The authors found that only 12% of the technologies used in the public sector could be deemed equivalent to those of the private sector. Without significant public investment, the sector will lag behind, finding it harder to achieve the gains in accuracy enjoyed by the private sector. To guide such investment, how should federal agencies go about designing AI systems?

3. In-house AI systems are the way to go:

In-house AI systems were found to be better adjusted to complex legal requirements, as well as more likely to be implemented in a compliant fashion. In fact, some 53% of the agencies studied had built their AI systems in-house. Building internally proved a safer bet than calling on external contractors, who do not know an agency’s requirements as well as those within it do. This brings me to point number 4.

4. AI must take into account the unique legal norms and practices of the US legal system:

AI systems must be shown to adhere to core norms of the U.S. legal system, such as transparency, explainability, and non-discrimination. This will prove essential to the safe proliferation of AI as it creeps into more and more areas of public life, which allows me to introduce point number 5.

5. AI has the potential to augment social anxieties and create an equity gap within society:

Here, the fear is that bigger companies, with the human resources and purchasing power they possess, will find it easier to ‘game’ any government AI model in order to appear compliant. Smaller businesses, lacking the same resources and expertise, will not be able to keep out of the cross-hairs as easily. Such inequity could then spread through society, building a culture of discontent and distrust towards a techno-government. For an AI-adoptive government to survive, such problems need to be properly considered.

The authors then ponder questions raised by these findings. For example, how much transparency will be required for AI systems to be compliant with U.S. legal norms? What does an ‘explainable AI’ actually look like? Such questions will need to be answered in due course, and the answers will shape whether federal agencies manage AI policy poorly or well.

Overall, AI has the potential to help the government make society fairer, but also to make it more unjust. While the uptake of AI systems in government is encouraging, it creates a greater need for caution and scrutiny over how those systems are implemented, so as not to exacerbate society’s inequality. AI is the pen to our notebook world, and how we use the pen will either turn the notebook into a glorious adventure novel or a terrifying horror.


Original paper by David Freeman Engstrom, Daniel E. Ho, Catherine M. Sharkey, and Mariano-Florentino Cuéllar: https://hai.stanford.edu/sites/default/files/2020-09/HAI_PromisePeril.pdf
