✍️ Report Summary by Kate Kaye, a researcher, award-winning journalist, and deputy director of the World Privacy Forum, a non-partisan, 501(c)(3) public interest research nonprofit organization. Kate is a member of the OECD.AI Network of Experts, where she contributes to the Expert Group on AI Risk and Accountability, and was selected in 2019 by MAIEI as part of a multidisciplinary cohort of interns researching the social impacts of AI.
[Original Report by Kate Kaye, World Privacy Forum]
Overview: Canadian government agencies, including the country’s employment and transportation agencies, the Department of Veterans Affairs, and the Royal Canadian Mounted Police (RCMP), have evaluated the automated systems they use according to the country’s Algorithmic Impact Assessment process, or AIA. However, Canada’s AIA process itself has evolved. The report excerpted here, part of the World Privacy Forum’s AI Governance on the Ground Series, reviews key elements of Canada’s AIA evolution and its impacts on stakeholders.
Introduction
AI governance tools, like other approaches to implementing government AI and tech policy, sometimes need to change and adapt to remain relevant to real-world contexts. The World Privacy Forum’s ongoing analysis of AI governance tools established by governments and non-governmental organizations around the world explores the ways these tools have been used or modified over time.
The designers of Canada’s AIA framework, for example, have evaluated and re-evaluated the assessment, updating its criteria, requirements, and risk-level scoring algorithm since the framework was first established.
Understanding what AI governance tools measure, whether they satisfy responsible AI goals, and how they affect stakeholders are key focus areas of the World Privacy Forum’s ongoing research. For this report, we spoke with two key members of the AIA oversight team at Canada’s Treasury Board who have guided the design of the assessment process. We also spoke with a Canadian immigration and refugee lawyer who uses the publicly available AIAs to understand how Canada’s immigration agency uses algorithmic systems and how those systems affect his clients, sometimes influencing how immigration cases are risk-ranked, whether immigrants or refugees can legally work, and even whether people must separate from their spouses or children.
How Canada’s Algorithmic Impact Assessment Process and Algorithm Has Evolved
Evaluation of algorithmic systems used by government agencies under Canada’s AIA framework has been required since the country’s Directive on Automated Decision-Making went into effect in April 2019. The team at the Treasury Board of Canada Secretariat overseeing the framework’s evolution has called Canada’s AIA a work in progress.
Canada’s AIA consists of several questions intended to determine risk and reduce the potential negative impacts of automated systems. Answers related to a system’s design, algorithm, decision type, impact, and data all factor into a numerical score measuring the risk level of the system under evaluation.
The World Privacy Forum spoke in March 2024 with two key members of the AIA oversight team at the Treasury Board: Benoit Deshaies, Director of Responsible Data and Artificial Intelligence for the Office of the Chief Data Officer of Canada, and Dawn Hall, Advisor, Responsible Data and AI, Office of the Chief Information Officer. Both were responsible for various aspects of Canada’s AIA implementation and design process updates.
At the time of publication of the World Privacy Forum’s AI Governance on the Ground report about Canada’s evolving AIA process in August 2024, Canadian agencies had published 22 AIAs evaluating automated systems they planned to use. Those AIA documents are publicly available in the country’s open government data and information repository.
Transport Canada, for instance, evaluated its Pre-load Air Cargo Targeting (PACT) Program, an automated approach to measuring the risk that inbound air shipments contain explosive devices or other threat items prior to loading and departure for Canada. The Department of Veterans Affairs assessed its system for Automation Development to Support Disability Benefit Decision Making.
Canada’s evolving AIA scoring algorithm
Canada’s AIA scoring algorithm is used by policymakers to tie a system’s level of risk to the stringency of the requirements for its use. The scoring algorithm works by assigning points to questionnaire answers – the more points, the higher the risk level.
Canada’s AIA is one of many AI assessment processes that attempt to quantify AI risk. The impulse to quantify the measurement and improvement of AI systems in the hopes of reducing AI risks carries its own potential problems, some of which are discussed in the World Privacy Forum’s 2023 Risky Analysis report.
The scoring algorithm has been adjusted as new questions have been added. To set the number of points assigned to a particular question, Canada’s Treasury Board weighed its significance in relation to other questions. For instance, the use of personal data in a system increases the points attributed, potentially resulting in a higher overall risk level.
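To make the mechanics concrete, here is a minimal illustrative sketch of how a points-based questionnaire of this kind can work. The questions, answer options, point weights, and level thresholds below are hypothetical examples of the general approach, not the Treasury Board’s actual values, and the real AIA also accounts for mitigation measures:

```python
# Illustrative sketch of a points-based, AIA-style risk score.
# All questions, weights, and thresholds here are hypothetical,
# not the Treasury Board's actual scoring values.

# Each question maps answer options to points; riskier answers score higher.
QUESTIONS = {
    "uses_personal_data":       {"no": 0, "yes": 4},
    "decision_reversible":      {"yes": 0, "no": 3},
    "affects_vulnerable_group": {"no": 0, "yes": 4},
    "human_in_the_loop":        {"yes": 0, "no": 2},
}

# Hypothetical impact levels keyed to the share of the maximum possible score.
LEVEL_THRESHOLDS = [(0.25, "Level I"), (0.50, "Level II"),
                    (0.75, "Level III"), (1.00, "Level IV")]

def score_system(answers: dict[str, str]) -> tuple[int, str]:
    """Sum points for each answer and map the total to an impact level."""
    total = sum(QUESTIONS[q][a] for q, a in answers.items())
    max_total = sum(max(opts.values()) for opts in QUESTIONS.values())
    ratio = total / max_total
    for cutoff, level in LEVEL_THRESHOLDS:
        if ratio <= cutoff:
            return total, level
    return total, "Level IV"

# Example: a system that uses personal data and has no human review.
points, level = score_system({
    "uses_personal_data": "yes",
    "decision_reversible": "yes",
    "affects_vulnerable_group": "no",
    "human_in_the_loop": "no",
})
print(points, level)  # 6 of a possible 13 -> "Level II" under these thresholds
```

Seen this way, the Treasury Board’s periodic updates amount to adding questions and re-weighting points, which can shift a system across risk-level boundaries and change the requirements that apply to its use.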
An AIA update in 2023 added another layer of assessment, requiring agencies to evaluate the impacts of algorithmic systems on particular populations and according to gender and age considerations. The evaluation is based on Canada’s Gender-Based Analysis Plus, an intersectional analysis that considers not only biological sex and gender but also factors such as age, disability, education, ethnicity, economic status, geography, language, race, religion, and sexual orientation to understand impact.
The Real-World Impact of Impact Assessments
Ultimately, Algorithmic Impact Assessments should be meaningful governance tools, not only evaluating risks and spotlighting ways to improve algorithmic systems but also creating genuine transparency and accountability around their use.
The assessments from Immigration, Refugees and Citizenship Canada (IRCC) are the primary public sources of information available to William Tao, Founder of Heron Law Offices in Burnaby, British Columbia, about the automated and algorithmic systems that shape crucial decisions affecting the lives of his immigrant and refugee clients.
Though he’s been critical of the AIA process, Tao said in a March 2024 discussion with the World Privacy Forum that he was surprised to see that an assessment of an automated triage tool, created by IRCC to assist in processing applications for Canada’s international youth work program, included additional revealing documentation. That Gender-Based Analysis (GBA) Plus document, showing how the tool was measured against gender- and age-related criteria, was something Tao did not expect to be made public.
The rare publication of the GBA Plus report helped Tao and other immigration lawyers in Canada discover that information such as travel history, medical requests and country of origin affects the ways applicants are categorized.
The scenario shows just how much the design and approach of AI governance tools like Algorithmic Impact Assessments matter, and will continue to matter, for years to come.
Between the lines
AI governance policy doesn’t stop at policy. The tools used to implement those policies – such as Algorithmic Impact Assessments – matter just as much, if not more, and it’s imperative that AI governance tools align with policy goals and function as promised. Put simply, that requires measuring the measures used in those tools.
Do Canada’s AIA questionnaire and risk-scoring algorithm work as intended? There may be a variety of perspectives on that question, and a more robust assessment will be necessary to answer it.
For more details on how Canada’s AIA process has evolved, read the full report. For more on AI governance tools used around the world and how they might improve, see the World Privacy Forum’s December 2023 report, Risky Analysis: Assessing and Improving AI Governance Tools, an international review of AI governance tools with suggestions for pathways forward.