Montreal AI Ethics Institute

Democratizing AI ethics literacy

Policy Brief: AI’s Promise and Peril for the U.S. Government (Research summary)

September 28, 2020

Summary contributed by our researcher Connor Wright (Philosophy, University of Exeter)

*Link to original paper + authors at the bottom.


Mini-summary: Having studied multiple federal agencies across the U.S., the authors distil their research into five main findings, ranging from the uptake of Artificial Intelligence (AI) systems in government to AI’s ability to exacerbate social inequalities if mismanaged. How the government acts to instil the norms of legal explainability, non-discrimination and transparency will shape how the AI future of the U.S. is defined.


Full summary:

The influence of Artificial Intelligence (AI) on government procedures has the potential not only to reduce costs and increase efficiency, but also to help the government make fairer decisions. The authors of this paper thus scoured federal agencies across the U.S. to see how AI (if used at all) is being implemented. Their study produced five main findings, which I shall introduce and expand on now.

1. Nearly half (45%) of federal agencies had implemented some form of AI toolkit:

Federal agencies have been taking advantage of the benefits AI brings, using the technology for tasks such as customer communication and for extracting insights from the government’s vast data streams. In this sense, AI is slowly becoming the norm in the federal sphere. However, how do the public and private sectors compare?

2. The public sector lacks the technological sophistication possessed by the private sector:

The authors found that only 12% of the technologies used in the public sector could be deemed equivalent to those of the private sector. Without significant public investment, the sector will lag behind, finding it harder to achieve the gains in accuracy enjoyed by the private sector. To guide such investment, how should federal agencies go about designing AI systems?

3. In-house AI systems are the way to go:

In-house AI systems were found to be better adjusted to complex legal requirements, as well as more likely to be implemented in a compliant fashion. In fact, some 53% of the agencies studied had utilised in-house AI systems. They prove a safer bet than external contractors, who do not know an agency’s requirements as well as those within it, which brings me on to point number 4.

4. AI must take into account the unique norms and practices of the U.S. legal system:

AI systems must be shown to adhere to norms of the U.S. legal system such as transparency, explainability, and non-discrimination. This will prove essential to the safe proliferation of AI as it creeps into more and more areas of society, which allows me to introduce point number 5.

5. AI has the potential to augment social anxieties and create an equity gap within society:

Here, the fear is that bigger companies, with their greater human resources and purchasing power, will find it easier to ‘game’ any government AI model into compliance. Smaller businesses, lacking the same resources and expertise, will not be able to keep out of the cross-hairs as easily. Such inequity could then spread through society, building a culture of discontent and distrust towards a techno-government. For an AI-adoptive government to survive, such problems need to be properly considered.

The authors then ponder questions raised by these findings. For example, how much transparency will be required for AI systems to comply with U.S. legal norms? What does an ‘explainable AI’ actually look like? Questions like these will need to be answered in due course, and the answers will shape whether federal agencies manage AI policy poorly or well.

Overall, AI has the potential to help the government make society fairer, but also to make it more unjust. While the uptake of AI systems in government is encouraging, it creates a greater need for caution and scrutiny over how those systems are implemented, so as not to exacerbate society’s level of inequality. AI is the pen to our notebook world, and how we use that pen will either turn the notebook into a glorious adventure novel or a terrifying horror story.


Original paper by David Freeman Engstrom, Daniel E. Ho, Catherine M. Sharkey, and Mariano-Florentino Cuéllar: https://hai.stanford.edu/sites/default/files/2020-09/HAI_PromisePeril.pdf


