Montreal AI Ethics Institute

Policy Brief: AI’s Promise and Peril for the U.S. Government (Research summary)

September 28, 2020 by MAIEI

Summary contributed by our researcher Connor Wright (Philosophy, University of Exeter)

*Link to original paper + authors at the bottom.


Mini-summary: Having studied multiple federal agencies across the U.S., the authors have produced a list of 5 main findings from their research. Ranging from the uptake of Artificial Intelligence (AI) systems in government to AI’s ability to exacerbate social inequalities if mismanaged, these findings form a concise and clear summary of the effects of AI on the U.S. government. How the government acts to instil the norms of legal explainability, non-discrimination and transparency will shape how the AI future of the U.S. is defined.


Full summary:

The influence of Artificial Intelligence (AI) in government procedures has the potential not only to reduce costs and increase efficiency, but also to help the government make fairer decisions. The authors of this paper have thus scoured federal agencies across the U.S. to see whether, and how, AI is being implemented. Their study produced 5 main findings, which I shall introduce and expand on now.

1. Nearly half (45%) of federal agencies had implemented some form of AI toolkit:

Federal agencies have been taking advantage of the benefits AI brings. The agencies use the technology for tasks such as communicating with the public, as well as extracting insights from the huge amounts of data in the government’s data stream. In this sense, AI is slowly becoming the norm in the federal sphere. However, how do the public and private sectors compare?

2. The public sector lacks the technological sophistication possessed by the private sector:

The authors found that only 12% of the technologies used in the public sector could be deemed equivalent to those of the private sector. Without significant public investment, the sector will lag behind, finding it harder to achieve the accuracy gains enjoyed by the private sector. In order to guide such investment, how are federal agencies to go about designing AI systems?

3. In-house AI systems are the way to go:

In-house AI systems were found to be better adapted to complex legal requirements, as well as more likely to be implemented in a compliant fashion. In fact, some 53% of the agencies studied had built their AI systems in-house. This proves a safer bet than calling on external contractors, who do not know an agency’s requirements as well as those within it, and brings me on to point number 4.

4. AI must take into account the unique legal norms and practices of the US legal system:

AI systems must be assured of adhering to the norms of the U.S. legal system, such as transparency, explainability, and non-discrimination. This will prove essential to the safe proliferation of AI as it creeps into more and more areas of society, which allows me to introduce point number 5.

5. AI has the potential to augment social anxieties and create an equity gap within society:

Here, the fear is that bigger companies, with the human resources and purchasing power they possess, will find it easier to ‘game’ any government AI model into compliance. Without the same resources and potential expertise, smaller businesses will not be able to keep out of the cross-hairs as easily. Such inequity could then translate to society, building a culture of discontent and distrust towards a techno-government. For an AI-adoptive government to survive, such problems need to be properly considered.

The authors then ponder questions surrounding these findings. For example, how much transparency will be required for AI systems to be compliant with U.S. legal norms? What does an ‘explainable AI’ actually look like? Questions like these will need to be answered in due course, and the answers will shape whether the federal agencies ultimately manage AI policy poorly, or well.

Overall, AI has the potential to help the government build a fairer society, but also to make it more unjust. While the uptake of AI systems in government is encouraging, it creates a greater need for caution and scrutiny over how those systems are implemented, so as not to exacerbate society’s level of inequality. AI is the pen to our notebook world, and how we use the pen to write will either convert the notebook into a glorious adventure novel, or a terrifying horror.


Original paper by David Freeman Engstrom, Daniel E. Ho, Catherine M. Sharkey, and Mariano-Florentino Cuéllar: https://hai.stanford.edu/sites/default/files/2020-09/HAI_PromisePeril.pdf

Category: Research Summaries
