Montreal AI Ethics Institute

Democratizing AI ethics literacy
Why We Need to Audit Government AI

September 14, 2020

Guest post contributed by Alayna Kennedy, Public Sector Consultant and AI Ethics Researcher at IBM.


Artificial Intelligence (AI) technology has exploded in popularity over the last decade, with each wave of technical breakthroughs ushering in more speculation about the potential impacts of AI on our society, businesses, and governments. First, the Big Data revolution promised to forever change the way we understood analytics; then Deep Learning promised human-level AI performance; today, AI offers huge business returns to investors. AI has long been a buzzword in businesses across the world, but for many government agencies and larger organizations, earlier applications of commercial AI proved overhyped and underwhelming. Only now, as the technology moves from the research lab to the office, are large-scale organizations, including governments, beginning to implement AI at scale.

Each wave of AI development has been accompanied by a suite of ethical concerns and mitigation strategies. Between 2016 and 2019, 74 sets of ethical principles or guidelines for AI were published, focusing on high-level guidance like “creating transparent AI.” These high-level principles rarely provided concrete guidance, and often weren’t needed in practice, since most large organizations and government agencies were not yet using AI at scale. In recent years, the AI Ethics community has moved past high-level frameworks and begun to focus on statistical bias mitigation. A plethora of toolkits, including IBM’s AIF360, Microsoft’s Fairlearn, and FairML, have emerged to combat bias in datasets and in AI models.
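To make the idea of statistical bias mitigation concrete, here is a minimal sketch of the kind of group-fairness metric these toolkits automate: the demographic parity difference, i.e. the gap in favourable-outcome rates between two groups. This is plain illustrative Python, not the API of AIF360 or Fairlearn, and the data is invented.

```python
def positive_rate(outcomes):
    """Fraction of outcomes that are favourable (coded as 1)."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(outcomes_a, outcomes_b):
    """Absolute gap in favourable-outcome rates between two groups."""
    return abs(positive_rate(outcomes_a) - positive_rate(outcomes_b))

# Hypothetical model decisions (1 = favourable) for two demographic groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # 5/8 = 0.625 favourable
group_b = [1, 0, 0, 1, 0, 0, 0, 1]  # 3/8 = 0.375 favourable

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")  # 0.250
```

A gap of zero means both groups receive favourable outcomes at the same rate; the toolkits mentioned above compute this and many other metrics, and also offer algorithms to reduce such gaps.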

We now find ourselves in a new, less exciting wave of AI adoption: implementing AI at scale. Despite the early waves’ promises of immediate returns, AI is only now being widely applied in organizations that lack strong technical capabilities of their own, including government agencies.

Governments are now using AI to make decisions within large-scale government projects: how humanitarian resources are deployed, who is granted bail, which citizens are subjected to increased police presence, whether reports of abuse are investigated, and who receives government-funded welfare. This latest wave of commercial AI application brings its own concerns, not about the novelty of the technology itself but about the scale of its application.

Despite the scale of this impact, governments do not have specific frameworks for auditing ML projects within their agencies. Furthermore, most countries have no central oversight agency or policy that regulates AI and ML technology at scale.

The large-scale implementation of AI in government requires a corresponding effort from the AI Ethics community: a method to audit and oversee AI at scale, across complex enterprises. In the past, the community has focused on high-level principles and, more recently, on bias mitigation. It is easy to assume that an ML model trained on a representative dataset, tested for statistical bias, and embedded with fairness metrics will continue to perform ethically without oversight. In reality, the environment in which the model operates is constantly changing. Ethics is not a ‘bolt-on’ but a continuous process: auditors need to periodically reassess model performance and outcomes with thorough, multidisciplinary auditing teams to keep unethical outcomes from seeping into the models over time. Ensuring that ML algorithms behave ethically requires regulation, measurement, and consistent auditing.
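The continuous-auditing idea above can be sketched in a few lines: rather than checking fairness once at deployment, re-measure a chosen metric on each new batch of decisions and flag the model for human review when it drifts past a tolerance. All names and the 0.1 threshold here are illustrative assumptions, not a prescribed standard.

```python
def positive_rate(outcomes):
    """Fraction of outcomes that are favourable (coded as 1)."""
    return sum(outcomes) / len(outcomes)

def audit_batch(group_a_outcomes, group_b_outcomes, tolerance=0.1):
    """Return (gap, needs_review): flag when the parity gap exceeds tolerance."""
    gap = abs(positive_rate(group_a_outcomes) - positive_rate(group_b_outcomes))
    return gap, gap > tolerance

# Simulated monthly batches of model decisions for two demographic groups.
batches = [
    ([1, 1, 0, 1], [1, 0, 1, 1]),  # equal rates -> no flag
    ([1, 1, 1, 1], [1, 0, 0, 0]),  # large gap -> flag for human review
]
for month, (a, b) in enumerate(batches, start=1):
    gap, flagged = audit_batch(a, b)
    status = "REVIEW" if flagged else "ok"
    print(f"month {month}: gap={gap:.2f} [{status}]")
```

In a real deployment the metric, threshold, and review process would be set by the multidisciplinary auditing team the article calls for; the point of the sketch is that the check runs repeatedly, not once.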

As governments around the world scale up their investments in AI technology, they will also need to scale up their capability to assess, audit, and review those technologies for ethical concerns, lest they amplify inequality. Large-scale government enterprises need a systematized method to look across their portfolio of projects and quickly assess which are most vulnerable to becoming unethical. This allows auditing resources to be allocated appropriately, ML projects’ outputs to be monitored continually, and risky projects to be identified before they are fully developed and deployed. The auditing process needs to be agile, continuous, and quick enough to meet government agencies’ need for self-regulation. In the next wave of AI Ethics development, we need to pry our focus away from high-level principles and bias-only concerns and develop the mundane, practical tools that allow organizations to audit AI. As MIT Technology Review’s Karen Hao wrote, “Let’s stop AI ethics-washing and actually do something.”
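One hypothetical shape such a portfolio-triage method could take: score each ML project on a handful of risk factors and sort so scarce auditing resources go to the riskiest projects first. The factor names and weights below are invented purely for illustration; a real agency would define its own.

```python
# Illustrative risk factors and weights (all invented for this sketch).
RISK_WEIGHTS = {
    "affects_individual_rights": 3,  # e.g. bail, welfare, policing decisions
    "fully_automated": 2,            # no human in the loop
    "uses_sensitive_data": 2,
    "externally_developed": 1,       # vendor model, less internal visibility
}

def risk_score(project):
    """Weighted sum of the risk factors a project exhibits."""
    return sum(w for factor, w in RISK_WEIGHTS.items() if project.get(factor))

# Two hypothetical projects in an agency's portfolio.
projects = [
    {"name": "bail-recommendation", "affects_individual_rights": True,
     "uses_sensitive_data": True},
    {"name": "pothole-detection", "externally_developed": True},
]
for p in sorted(projects, key=risk_score, reverse=True):
    print(p["name"], risk_score(p))  # riskiest project first
```

Even a crude ranking like this supports the article’s point: it lets an agency decide where to spend its limited auditing effort before projects are fully developed and deployed.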


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.