
AI Governance on the Ground: How Canada’s Algorithmic Impact Assessment Process and Algorithm Have Evolved

February 3, 2025

✍️ Report Summary by Kate Kaye, a researcher, award-winning journalist, and deputy director of the World Privacy Forum, a non-partisan, 501(c)(3) public interest research organization. Kate is a member of the OECD.AI Network of Experts, where she contributes to the Expert Group on AI Risk and Accountability, and was selected in 2019 by MAIEI as part of a multidisciplinary cohort of interns researching the social impacts of AI.

[Original Report by Kate Kaye, World Privacy Forum]


Overview: Canadian government agencies, including the country’s employment and transportation agencies, the Department of Veterans Affairs, and the Royal Canadian Mounted Police (RCMP), have evaluated the automated systems they use according to the country’s Algorithmic Impact Assessment (AIA) process. However, Canada’s AIA process itself has evolved. The report excerpted here, part of the World Privacy Forum’s AI Governance on the Ground Series, reviews key elements of Canada’s AIA evolution and its impacts on stakeholders.


Introduction

AI governance tools, like other approaches to implementing government AI and tech policy, sometimes need to change and adapt to remain relevant to real-world contexts. The World Privacy Forum’s ongoing analysis of AI governance tools established by governments and non-governmental organizations around the world explores the ways these tools have been used or modified over time.

The designers of Canada’s AIA framework, for example, have evaluated and re-evaluated the AIA, updating its criteria, requirements, and risk-level scoring algorithm since the framework was first established.

Understanding how AI governance tools measure risk, whether they satisfy responsible AI goals, and how they affect stakeholders is a key focus of the World Privacy Forum’s ongoing research. For this report, we spoke with two key members of the AIA oversight team at Canada’s Treasury Board who have guided the design of the assessment process. We also spoke with a Canadian immigration and refugee lawyer who uses the publicly available AIAs to understand how Canada’s immigration agency uses algorithmic systems and how those systems affect decisions about his clients: assessments of immigration case risk, whether immigrants or refugees can legally work, and even whether people must separate from their spouses or children.

How Canada’s Algorithmic Impact Assessment Process and Algorithm Have Evolved

Evaluation of algorithmic systems used by government agencies under Canada’s AIA framework has been required since the country’s Directive on Automated Decision-Making went into effect in April 2019. The team at the Treasury Board of Canada Secretariat overseeing the framework’s evolution has called Canada’s AIA a work in progress.

Canada’s AIA comprises several questions intended to determine risk and reduce the potential negative impacts of automated systems. Answers related to a system’s design, algorithm, decision type, impact, and data all factor into a numerical score measuring the risk level of the system evaluated.

The World Privacy Forum spoke in March 2024 with two key members of the AIA oversight team at the Treasury Board: Benoit Deshaies, Director of Responsible Data and Artificial Intelligence for the Office of the Chief Data Officer of Canada, and Dawn Hall, Advisor, Responsible Data and AI, Office of the Chief Information Officer. Both were responsible for various aspects of Canada’s AIA implementation and design process updates. 

At the time of publication of the World Privacy Forum’s AI Governance on the Ground report about Canada’s evolving AIA process in August 2024, Canadian agencies had published 22 AIAs evaluating automated systems they planned to use. Those AIA documents are publicly available in the country’s open government data and information repository.

Transport Canada, for instance, evaluated its Pre-load Air Cargo Targeting (PACT) Program, an automated approach to measuring the risk of inbound air shipments that could contain explosive devices or other threat items prior to loading and departure to Canada. The Department of Veterans Affairs assessed its system for Automation Development to Support Disability Benefit Decision Making.

Canada’s Evolving AIA Scoring Algorithm

Canada’s AIA scoring algorithm is used by policymakers to correlate the level of risk of a system to the stringency of requirements for its use. The scoring algorithm works by assigning points to questionnaire answers – the more points, the higher the risk level.
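
To make the mechanism concrete, here is a minimal sketch in Python of a point-based risk-scoring scheme of this kind. The questions, answer options, point values, and level thresholds below are invented for illustration; they are not the Treasury Board’s actual questionnaire, weights, or cut-offs.

```python
# A minimal sketch of a point-based risk-scoring scheme of the kind
# Canada's AIA uses. All questions, answer options, point values, and
# thresholds below are invented for illustration only.

# Each answer option carries a point value; more points means more risk.
ANSWER_POINTS = {
    "decision_type": {"advisory_only": 1, "partially_automated": 2, "fully_automated": 4},
    "uses_personal_data": {"no": 0, "yes": 3},
    "impact_duration": {"reversible": 1, "difficult_to_reverse": 3, "irreversible": 5},
}

def raw_score(answers: dict) -> int:
    """Sum the points attached to each questionnaire answer."""
    return sum(ANSWER_POINTS[question][answer] for question, answer in answers.items())

def impact_level(score: int, max_score: int) -> str:
    """Map the score, as a share of the maximum attainable, onto four levels."""
    share = score / max_score
    if share <= 0.25:
        return "Level I"
    if share <= 0.50:
        return "Level II"
    if share <= 0.75:
        return "Level III"
    return "Level IV"

answers = {
    "decision_type": "partially_automated",    # 2 points
    "uses_personal_data": "yes",               # 3 points
    "impact_duration": "difficult_to_reverse", # 3 points
}
max_score = sum(max(opts.values()) for opts in ANSWER_POINTS.values())  # 12
score = raw_score(answers)                                              # 8
print(score, impact_level(score, max_score))  # 8 / 12 -> Level III
```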

Canada’s AIA is one of many AI assessment processes that attempt to quantify AI risk. The impulse to quantify the measurement and improvement of AI systems in the hope of reducing AI risk carries its own potential problems, some of which are discussed in the World Privacy Forum’s 2023 Risky Analysis report.

The scoring algorithm has been adjusted as new questions have been added. To set the number of points assigned to a particular question, Canada’s Treasury Board weighed its significance relative to the other questions. For instance, the use of personal data in a system increases the points attributed, potentially resulting in a higher overall risk level.
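
One consequence of this design is that adding a question raises the maximum attainable score, so interpreting the score relative to that maximum keeps the level boundaries comparable across questionnaire versions. Continuing the invented sketch above (the new question is likewise hypothetical):

```python
# Continuing the invented scheme above: a new, heavily weighted question
# (hypothetical) raises the maximum attainable score from 12 to 16.
ANSWER_POINTS["serves_vulnerable_population"] = {"no": 0, "yes": 4}

answers["serves_vulnerable_population"] = "yes"
max_score = sum(max(opts.values()) for opts in ANSWER_POINTS.values())  # now 16

score = raw_score(answers)  # 8 + 4 = 12
# A fixed absolute cut-off (e.g. "10+ points = highest level") would have
# silently changed meaning once the question was added; the normalized
# share of the maximum stays comparable across versions.
print(score, impact_level(score, max_score))  # 12 / 16 = 0.75 -> Level III
```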

An AIA update in 2023 added another layer of assessment requiring agencies to evaluate the impacts of algorithmic systems on particular populations and according to gender and age considerations. The evaluation is based on Canada’s Gender-Based Analysis Plus, an intersectional analysis that, in addition to considering biological sex and gender, considers factors such as age, disability, education, ethnicity, economic status, geography, language, race, religion, and sexual orientation to understand impact.

The Real-World Impact of Impact Assessments

Ultimately, Algorithmic Impact Assessments should be meaningful governance tools, not only evaluating risks and helping spotlight ways to improve algorithmic systems but also creating genuine transparency and accountability around their use.

The assessments from Immigration, Refugees and Citizenship Canada (IRCC) are the primary public sources of information available to William Tao, Founder of Heron Law Offices in Burnaby, British Columbia, about the automated and algorithmic systems determining crucial decisions affecting the lives of his immigrant and refugee clients.

Though he has been critical of the AIA process, Tao told the World Privacy Forum in a March 2024 discussion that he was surprised to see that an assessment of an automated triage tool, created by IRCC to assist in processing applications for Canada’s international youth work program, included additional revealing documentation. That Gender-Based Analysis (GBA) Plus document, showing how the tool was measured according to gender- and age-related criteria, was something Tao did not expect to be made public.

The rare publication of the GBA Plus report helped Tao and other immigration lawyers in Canada discover that information such as travel history, medical requests and country of origin affects the ways applicants are categorized.

The scenario shows us just how important the design and approach of AI governance tools like Algorithmic Impact Assessments are and will be for years to come.

Between the lines

AI governance policy doesn’t stop at policy. The tools used to implement those policies – such as Algorithmic Impact Assessments – matter just as much, if not more, and it’s imperative that AI governance tools align with policy goals and function as promised. Put simply, that requires measuring the measures used in those tools. 

Do Canada’s AIA questionnaire and risk-scoring algorithm work as intended? There may be a variety of perspectives on that question, and a more robust assessment will be necessary to answer it.


For more details on how Canada’s AIA process has evolved, read the full report. For more on AI Governance Tools used around the world and how they might improve, see the World Privacy Forum’s December 2023 report: Risky Analysis: Assessing and Improving AI Governance Tools, An international review of AI Governance Tools and suggestions for pathways forward.


