Policy Brief: AI’s Promise and Peril for the U.S. Government (Research summary)

September 28, 2020

Summary contributed by our researcher Connor Wright (Philosophy, University of Exeter)

*Link to original paper + authors at the bottom.


Mini-summary: Having studied multiple federal agencies across the U.S., the authors have produced a list of 5 main findings from their research. Ranging from the uptake of Artificial Intelligence (AI) systems in government to AI’s ability to exacerbate social inequalities if mismanaged, these findings offer a concise and clear summary of the effects of AI on the U.S. government. How the government acts to instil the norms of legal explainability, non-discrimination and transparency will shape how the AI future of the U.S. is defined.


Full summary:

The influence of Artificial Intelligence (AI) in government procedures has the potential not only to reduce costs and increase efficiency, but also to help the government make fairer decisions. The authors of this paper have thus scoured federal agencies across the U.S. to see how AI (if used at all) is being implemented. Their study produced 5 main findings, which I shall introduce and expand on now.

1. Nearly half (45%) of federal agencies had implemented some form of AI toolkit:

Federal agencies have been taking advantage of the benefits AI brings. The agencies use the technology for tasks such as customer communication, as well as for extracting huge amounts of data from the government’s data streams. In this sense, AI is slowly becoming the norm in the federal sphere. However, how do the public and private sectors compare?

2. The public sector lacks the technological sophistication possessed by the private sector:

The authors found that only 12% of the technologies used in the public sector could be deemed equivalent to those of the private sector. Without significant public investment, the sector will lag behind, finding it harder to see the gains in accuracy enjoyed by the private sector. To guide such investment, how are federal agencies to go about designing AI systems?

3. In-house AI systems are the way to go:

In-house AI systems were found to be more adequately tailored to complex legal requirements, as well as more likely to be implemented in a compliant fashion. In fact, some 53% of the agencies studied had utilised in-house AI systems. They prove a safer bet than calling on external contractors, who do not know an agency’s requirements as well as those working within it, which brings me on to point number 4.

4. AI must take into account the unique norms and practices of the U.S. legal system:

AI systems must be shown to adhere to aspects of the U.S. legal system such as transparency, explainability, and non-discrimination. This will prove essential to the safe proliferation of AI as it creeps into more and more areas of society, which allows me to introduce point number 5.

5. AI has the potential to augment social anxieties and create an equity gap within society:

Here, the fear is that bigger companies, with the human resources and purchasing power they possess, will find it easier to ‘game’ any government AI model in order to remain compliant, unlike smaller businesses. Without the same resources and expertise, smaller businesses will not be able to keep out of the cross-hairs as easily as their bigger counterparts. Such inequity could then translate to society at large, building a culture of discontent and distrust towards a techno-government. For an AI-adoptive government to survive, such problems need to be properly considered.

Questions surrounding these findings are then pondered by the authors. For example, how much transparency will be required for AI systems to be compliant with U.S. legal norms? What does an ‘explainable AI’ actually look like? Questions like these will need to be answered in due course, and the answers will shape whether the federal agencies end up managing AI policy poorly or well.
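To make the explainability question slightly more concrete, here is a minimal, purely illustrative sketch (not drawn from the paper) of one form an ‘explanation’ can take in practice: an interpretable model whose per-feature contributions can be reported alongside an individual automated decision. The scenario, feature names, and data below are entirely hypothetical.

# Illustrative only: a hypothetical eligibility screen where an agency
# reports per-feature contributions as a plain, auditable explanation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical application features (all synthetic).
feature_names = ["income_ratio", "years_employed", "prior_claims", "documentation_score"]
X = rng.normal(size=(500, 4))
# Synthetic ground truth: eligibility loosely tied to two of the features.
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# A linear model is one of the simplest "explainable" choices: each decision
# decomposes exactly into per-feature contributions plus an intercept.
model = LogisticRegression().fit(X, y)

def explain(applicant):
    """Return the decision and each feature's contribution to the log-odds."""
    contributions = model.coef_[0] * applicant
    score = contributions.sum() + model.intercept_[0]
    decision = "approve" if score > 0 else "deny"
    return decision, dict(zip(feature_names, contributions.round(3)))

decision, reasons = explain(X[0])
print(decision)
for name, value in reasons.items():
    print(f"  {name}: {value:+.3f}")

Whether a decomposition like this would satisfy legal standards of explanation and non-discrimination is precisely the kind of open question the authors raise; more complex models typically require post-hoc explanation tools, and even fully transparent ones leave the normative questions unresolved.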

Overall, AI has the potential to help the government build a fairer society, but also to make it more unjust. While the uptake of AI systems in government is encouraging, it creates a greater need for caution and scrutiny over how those systems are implemented so as not to exacerbate society’s level of inequality. AI is the pen to our notebook world, and how we use the pen to write will either turn the notebook into a glorious adventure novel or a terrifying horror.


Original paper by David Freeman Engstrom, Daniel E. Ho, Catherine M. Sharkey, and Mariano-Florentino Cuéllar: https://hai.stanford.edu/sites/default/files/2020-09/HAI_PromisePeril.pdf

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
