
Why We Need to Audit Government AI

September 14, 2020

Guest post contributed by Alayna Kennedy, Public Sector Consultant and AI Ethics Researcher at IBM.


Artificial Intelligence (AI) technology has exploded in popularity over the last 10 years, with each wave of technical breakthroughs ushering in more and more speculation about the potential impacts of AI on our society, businesses, and governments. First, the Big Data revolution promised to forever change the way we understood analytics; then Deep Learning promised human-level AI performance; today, AI offers huge business returns to investors. AI has long been a buzzword in businesses across the world, but for many government agencies and larger organizations, earlier applications of commercial AI proved overhyped and underwhelming. Only now, as the technology has moved from the research lab to the office, are large-scale organizations, including governments, beginning to implement AI at scale.

Each wave of AI development has been accompanied by a suite of ethical concerns and mitigation strategies. Between 2016 and 2019, 74 sets of ethical principles or guidelines for AI were published, focusing on high-level guidance like “creating transparent AI.” These principles rarely provided concrete direction, and often weren’t needed in practice, since most large organizations and government agencies were not yet using AI at scale. In recent years, the AI Ethics community has moved past high-level frameworks and begun to focus on statistical bias mitigation: a plethora of toolkits, including IBM’s AIF360, Microsoft’s Fairlearn, and FairML, have emerged to combat bias in datasets and in AI models.
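
To make the shift from principles to measurement concrete, here is a minimal sketch of the kind of check these toolkits support, using Fairlearn’s `demographic_parity_difference` and `MetricFrame`. The decisions, groups, and sample sizes below are synthetic and invented for illustration, not drawn from any real deployment.

```python
# A sketch of a bias check on synthetic data; the decisions, groups, and
# sample sizes below are invented for illustration.
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

rng = np.random.default_rng(seed=0)

# Hypothetical binary decisions (e.g., benefit approved) for 1,000 people,
# each belonging to one of two demographic groups.
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)
group = rng.choice(["A", "B"], size=1000)

# Break overall accuracy down by group: uneven performance across groups
# is one common signal of bias.
frame = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
                    sensitive_features=group)
print(frame.by_group)

# Demographic parity difference: the gap in positive-decision rates between
# groups (0.0 means both groups are selected at the same rate).
dpd = demographic_parity_difference(y_true, y_pred, sensitive_features=group)
print(f"Demographic parity difference: {dpd:.3f}")
```

A one-time check like this is exactly the kind of point-in-time measurement the rest of this piece argues is necessary but not sufficient.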

We now find ourselves in a new, less exciting wave of AI adoption: implementing AI at scale. Despite the early waves’ promises of immediate returns, AI is only now starting to be widely applied in organizations that don’t have strong technical capabilities of their own, including government agencies.

Governments are now using AI to make decisions within large-scale government projects: how humanitarian resources are deployed, who is granted bail, which citizens are subjected to increased police presence, whether reports of abuse are investigated, and who receives government-funded welfare. This latest wave of commercial application brings its own concerns, not about the novelty of the technology itself but about the scale of its application.

Despite the enormous impact of these systems, governments do not have specific frameworks to audit ML projects within their agencies. Furthermore, most countries have no central oversight agency or policy that regulates AI and ML technology at scale.

The large-scale implementation of AI in government requires a corresponding effort from the AI Ethics community: a method to audit and oversee AI at scale, across complex enterprises. The community has historically focused on high-level principles, and more recently on bias mitigation. It is easy to assume that an ML model trained on a representative dataset, tested for statistical bias, and fitted with fairness metrics will continue to perform ethically without oversight. In reality, the environment in which the model operates is constantly changing. Ethics is not a ‘bolt-on’ but a continuous process: thorough, multidisciplinary auditing teams need to periodically reassess model performance and outcomes to keep unethical behavior from seeping into the models over time. Ensuring that ML algorithms behave ethically requires regulation, measurement, and consistent auditing.
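
What periodic reassessment might look like in practice: a minimal monitoring sketch, in plain Python, that recomputes a selection-rate gap over successive windows of a deployed model’s decisions and flags drift. The metric, threshold, and alerting here are assumptions for illustration; a real audit would involve a multidisciplinary team and richer evidence, not a script.

```python
# A minimal sketch (not from the article) of continuous fairness monitoring:
# periodically recompute a fairness metric on recent decisions and flag drift.
from collections import defaultdict

ALERT_THRESHOLD = 0.10  # hypothetical tolerance for the selection-rate gap

def selection_rate_gap(decisions):
    """Max difference in positive-decision rates between groups.

    `decisions` is an iterable of (group, decision) pairs, decision in {0, 1}.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in decisions:
        totals[group] += 1
        positives[group] += decision
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

def audit_window(window_id, decisions):
    """Re-audit one time window of a deployed model's outputs."""
    gap = selection_rate_gap(decisions)
    if gap > ALERT_THRESHOLD:
        # In practice this would trigger review by a multidisciplinary
        # auditing team, not just print a warning.
        print(f"[window {window_id}] ALERT: selection-rate gap {gap:.2f}")
    else:
        print(f"[window {window_id}] ok: selection-rate gap {gap:.2f}")

# Example: a model that starts fair but drifts against group "B" over time.
audit_window(1, [("A", 1), ("A", 0), ("B", 1), ("B", 0)])           # gap 0.00
audit_window(2, [("A", 1), ("A", 1), ("B", 1), ("B", 0), ("B", 0)])  # gap ~0.67
```

The point of the sketch is the cadence, not the metric: the same model that passes an audit at deployment can fail one a few windows later as its operating environment shifts.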

As governments around the world scale up their investments in AI technology, they will also need to scale up their capability to assess, audit, and review those technologies for ethical concerns, or risk amplifying inequality. Large-scale government enterprises require a systemized method to look across their portfolio of projects and quickly assess which are most vulnerable to becoming unethical (a sketch of what such triage might look like follows at the end of this post). This allows agencies to allocate auditing resources appropriately, continually monitor ML projects’ outputs, and identify risky projects before they are fully developed and deployed. The auditing process needs to be agile, continuous, and quick enough to meet government agencies’ need for self-regulation. In the next wave of AI Ethics development, we need to pry our focus away from high-level principles and bias-only concerns and develop the mundane, practical tools that allow organizations to audit AI. As MIT Technology Review’s Karen Hao wrote, “Let’s stop AI ethics-washing and actually do something.”
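
As a concrete illustration of the portfolio triage described above, here is a hypothetical sketch that scores each ML project on a few risk factors so that auditing resources go to the most vulnerable projects first. The factors, weights, and project names are invented for illustration; they are not a published standard or an existing government framework.

```python
# A hypothetical sketch of portfolio-level triage: score each ML project on a
# few risk factors so auditing resources go to the most vulnerable ones first.
# The factors and weights below are illustrative assumptions, not a standard.
RISK_FACTORS = {
    "affects_individual_rights": 3,  # e.g., bail, welfare, policing decisions
    "fully_automated_decision": 2,   # no human review of individual outcomes
    "sensitive_attributes_in_data": 2,
    "no_recent_audit": 1,
}

def risk_score(project):
    """Sum the weights of the risk factors a project exhibits."""
    return sum(w for factor, w in RISK_FACTORS.items() if project.get(factor))

portfolio = [
    {"name": "bail recommendation", "affects_individual_rights": True,
     "sensitive_attributes_in_data": True, "no_recent_audit": True},
    {"name": "pothole report routing", "fully_automated_decision": True},
]

# Audit the highest-risk projects first.
for project in sorted(portfolio, key=risk_score, reverse=True):
    print(f"{project['name']}: risk score {risk_score(project)}")
```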

