✍️ Op-Ed by Blair Attard-Frost, a PhD Candidate at the University of Toronto. She researches and teaches about the governance of AI systems in Canada and globally.
Summary
Canada is currently experiencing a historic bout of political turbulence, and the proposed Artificial Intelligence and Data Act (AIDA) has died amidst a prorogation of Parliament.
The AIDA was tabled in Canada’s House of Commons in June 2022 with the ambitious goal of establishing a comprehensive regulatory framework for AI systems across Canada. However, the AIDA was embroiled in controversy throughout its life in Parliament. A chorus of individuals and organizations voiced concerns about the AIDA, citing its exclusionary public consultation process, its vague scope and requirements, and its lack of independent regulatory oversight as reasons why the legislation should not become law. Though the government ultimately proposed some amendments to the AIDA in response to these criticisms, the amendments did not sufficiently address the fundamental flaws in the AIDA’s drafting and development. As a result, the AIDA languished and died in a parliamentary committee, unable to secure the confidence and political will needed to proceed through the legislative process.
The AIDA will be remembered by many as a national AI legislation failure, and in its absence, the future of Canadian AI regulation is now uncertain. A victory for the Conservative Party of Canada in an upcoming federal election seems likely. A Conservative approach to AI regulation may favour promoting AI innovation and targeted intervention in specific high-risk AI use cases over the more comprehensive, cross-sectoral framework of the AIDA. In the absence of clear and effective national AI regulation, Canadians can still regulate AI systems at smaller scales. Professional associations, unions, and community organizations in Canada and elsewhere have already created policies, guidelines, and best practices for regulating AI systems in workplaces and communities. As Canada’s political upheaval continues and new regulatory norms for AI emerge, these bottom-up approaches to AI regulation will play an important role.
Introduction
With Parliament prorogued and a non-confidence vote and federal election looming over the country, Canada’s proposed Artificial Intelligence and Data Act has died on the table of a House of Commons committee.
The Artificial Intelligence and Data Act (or “AIDA” for short) will be remembered by many as an ineffective and undemocratic piece of legislation. Though the AIDA aimed to set comprehensive rules on AI systems across Canada to protect against harmful uses of AI, the legislation was widely criticized for its exclusionary public consultation process, narrow scope, lack of specificity, and lack of independent regulatory enforcement and oversight.
Early Life of the AIDA
The AIDA was tabled in Parliament in June 2022 as part of Bill C-27, a package of three new legislative acts collectively known as the Digital Charter Implementation Act. The first two acts bundled together in Bill C-27 – the Consumer Privacy Protection Act and the Personal Information and Data Protection Tribunal Act – aimed to modernize Canada’s consumer data protection and privacy laws. As the third act in Bill C-27, the AIDA aimed to establish a regulatory framework for the development, deployment, and operation of AI systems, to be enforced by a new government official known as the “AI and Data Commissioner.”
The regulatory framework set out by the AIDA required developers, providers, and operators of “high-impact” AI systems in Canada to comply with requirements for risk assessment and mitigation, recordkeeping, and disclosure of key system information, or face monetary penalties and criminal liability. Unfortunately, many critics of the legislation deemed the text of these requirements, as written in the AIDA, unfit for the purpose of protecting Canadians against harmful AI.
Criticisms & Controversies
During its life in Parliament, the AIDA was hotly debated over the course of two readings by Members of Parliament, as well as in an in-depth study by the House of Commons Standing Committee on Industry and Technology (INDU). The INDU committee’s study of the AIDA began in April 2023 and now remains incomplete.
During the INDU committee’s study of the AIDA, a total of 137 witnesses appeared before the committee to comment on the legislation; 113 briefs were also submitted to the committee by a range of individuals and organizations. Many of those submissions expressed concern that the requirements the AIDA set out for developers and operators of high-impact systems were vaguely described and insufficient for protecting Canadians against harmful AI impacts. Critics attributed this insufficiency in large part to the AIDA’s lack of robust and inclusive stakeholder engagement. Rather than emerging from an open and public process of consultation and deliberation, the AIDA was developed primarily behind closed doors with a select group of industry representatives, according to records provided by the government.
Sectors and workers vulnerable to the impacts of AI systems, marginalized communities, and civil society organizations were largely excluded from participating in the drafting and development of the AIDA. As a result, the AIDA did not adequately serve the interests of many stakeholders. For example, in their submission to the INDU committee, the Canadian Labour Congress deemed the AIDA insufficient for protecting Canadian workers against harmful AI systems, recommending that the legislation be “reconceived from a human, labour, and privacy rights-based perspective, placing transparency, accountability and consultation at the core of the approach to regulating AI.”
Labour organizations representing creative workers, including the Directors Guild of Canada, the Writers Guild of Canada, the Screen Composers Guild of Canada, Music Canada, and a group of advocacy organizations representing Canadian authors and publishers, voiced similar concerns in their submissions. These organizations deemed the AIDA ineffective at protecting artists and creative workers against the social and economic impacts of generative AI.
In addition, briefs submitted by Amnesty International and the Women’s Legal Education and Action Fund observed that the AIDA did not provide sufficient protections for human rights, particularly the rights of racialized communities, women, and gender minorities. The Assembly of First Nations stated that a lawsuit against the government was likely because the government had failed to uphold Indigenous rights by not consulting First Nations during the AIDA’s drafting. In their submission, the Assembly of First Nations noted that “AI has the potential to destroy First Nations’ cultures, threaten First Nations’ security, and increase demand for our resources.” Over the course of the INDU committee’s study, submissions such as these made it strikingly clear that the AIDA was not designed to protect those in greatest need of protection against AI.
Later Life & Death
In response to criticisms of the AIDA, the government proposed a series of amendments to the legislation in November 2023. The proposed amendments added specificity to the scope, requirements, and regulatory powers set out by the legislation, but were not substantive enough to address the concerns of the legislation’s critics and move the AIDA into law. As the Canadian Union of Public Employees (CUPE) wrote in their brief to the INDU committee following the proposal of the amendments:
“The Committee should allow sufficient time for stakeholders to analyze and provide additional commentary on these new amendments. Still, what is before the committee is a deeply flawed legislative framework on a pivotal matter for all Canadians.”
The proposed amendments to the AIDA were too little, too late. After languishing on the table of the INDU committee throughout 2024, the AIDA, along with the rest of Bill C-27, ultimately failed to become law. The AIDA’s failure can be attributed to several factors, including its unclear and incomplete scope and requirements, limited public participation in the drafting of the legislation, and a now-imploding government that neglected to take accountability for these errors. In a strange twist of fate, the legislation intended to bolster trust and accountability in AI systems was unable to overcome a lack of trust and accountability in its own legislative process.
AI Regulation in a Post-AIDA Canada
AI regulation now faces an uncertain future in Canada. With the Conservative Party of Canada likely to form a new government following an upcoming non-confidence vote and federal election, Canada’s AI policy landscape may see significant changes in the coming months and years.
Although the Conservative Party has not released a definitive official statement of its intended approach to regulating AI, remarks on innovation policy and AI regulation by Conservative MPs such as Rick Perkins and Michelle Rempel Garner indicate that the Conservatives may take a lighter-handed approach to AI regulation than the current government. In contrast to the sweeping, cross-sectoral approach of the AIDA, Conservative AI policy may focus primarily on promoting AI innovation in pursuit of economic growth, leveraging existing laws or creating new legislation only to address specific high-risk uses of AI that are of particular concern to the government. In the United States, a similar approach that prioritizes AI innovation over regulation is likely under the new Trump administration, potentially adding further deregulatory pressure on a Conservative Canadian government.
Regardless of the regulatory approach that Canada’s next government may take, it is important to recognize that AI regulation can and does exist outside of government. Following their 2023 strikes, the Writers Guild of America (WGA) and the Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) both established new rules governing the training of AI models on union-protected creative works and the use of generative AI applications in the workplace. In Canada, professional associations and unions such as the Canadian Bar Association, the College of Physicians & Surgeons of Manitoba, and the Elementary Teachers’ Federation of Ontario have already created guidelines and rules for how AI tools can be used within their professions and workplaces.
Canadians do not need to wait and hope for our next government to fill the regulatory vacuum left by the death of the AIDA. In the absence of clear and effective national AI regulation, we can organize with our co-workers and our communities to create smaller-scale policies, guidelines, and best practices for how AI should be built and used in the places where we live and work. As Canada’s political upheaval continues and new regulatory norms for AI emerge, these bottom-up approaches to AI regulation will play an important role.
If you are interested in taking AI regulation into your own hands, my essay on AI countergovernance, along with Partnership on AI’s Guidelines for Participatory and Inclusive AI, NIST’s AI Risk Management Framework Playbook, and TechTarget’s guide to creating an acceptable use of AI policy for an organization, will provide useful resources for creating your own policies, guidelines, and shared rules for AI.