Montreal AI Ethics Institute

Democratizing AI ethics literacy


The Death of Canada’s Artificial Intelligence and Data Act: What Happened, and What’s Next for AI Regulation in Canada?

January 17, 2025

✍️ Op-Ed by Blair Attard-Frost, a PhD Candidate at the University of Toronto. She researches and teaches about the governance of AI systems in Canada and globally.


Summary

Canada is currently experiencing a historic bout of political turbulence, and the proposed Artificial Intelligence and Data Act (AIDA) has died amidst a prorogation of Parliament. 

The AIDA was tabled in Canada’s House of Commons in June 2022 with the ambitious goal of establishing a comprehensive regulatory framework for AI systems across Canada. However, the AIDA was embroiled in controversy throughout its life in Parliament. A chorus of individuals and organizations voiced concerns about the AIDA, citing its exclusionary public consultation process, its vague scope and requirements, and its lack of independent regulatory oversight as reasons why the legislation should not become law. Though the government ultimately proposed some amendments to the AIDA in response to criticisms, the amendments did not sufficiently address the fundamental flaws in the AIDA’s drafting and development. As a result, the AIDA languished and died in a parliamentary committee, unable to secure the confidence and political will needed to proceed through the legislative process.

The AIDA will be remembered by many as a national AI legislation failure, and in its absence, the future of Canadian AI regulation is now uncertain. A victory for the Conservative Party of Canada in an upcoming federal election seems likely. A Conservative approach to AI regulation may favor promoting AI innovation and targeted intervention in specific high-risk AI use cases over the more comprehensive, cross-sectoral framework of the AIDA. In the absence of clear and effective national AI regulation, Canadians can still regulate AI systems at smaller scales. Professional associations, unions, and community organizations in Canada and elsewhere have already created policies, guidelines, and best practices for regulating AI systems in workplaces and communities. As Canada’s political upheaval continues and new regulatory norms for AI emerge, these bottom-up approaches to AI regulation will play an important role.


Introduction

With Canadian Parliament prorogued and a non-confidence vote and federal election looming over the country, Canada’s proposed Artificial Intelligence and Data Act has died on the table of a House of Commons committee. 

The Artificial Intelligence and Data Act (or “AIDA” for short) will be remembered by many as an ineffective and undemocratic piece of legislation. Though the AIDA aimed to set comprehensive rules on AI systems across Canada to protect against harmful uses of AI, the legislation was widely criticized for its exclusionary public consultation process, narrow scope, lack of specificity, and lack of independent regulatory enforcement and oversight.

Early Life of the AIDA

The AIDA was tabled in Parliament in June 2022 as part of Bill C-27, a package of three new legislative acts collectively known as the Digital Charter Implementation Act. The first two acts bundled together in Bill C-27 – the Consumer Privacy Protection Act and the Personal Information and Data Protection Tribunal Act – aimed to modernize Canada’s consumer data protection and privacy laws. As the third act in Bill C-27, the AIDA aimed to establish a regulatory framework for the development, deployment, and operation of AI systems, to be enforced by a new government official known as the “AI and Data Commissioner.” 

The regulatory framework set out by the AIDA required developers, providers, and operators of “high-impact” AI systems in Canada to comply with requirements for risk assessment and mitigation, recordkeeping, and disclosure of key system information, or face monetary penalties and criminal liability. Unfortunately, the text of these requirements, as written in the AIDA, was deemed unfit for this purpose by many critics of the legislation.

Criticisms & Controversies

During its life in Parliament, the AIDA was hotly debated over the course of two readings by Members of Parliament, as well as in an in-depth study by the House of Commons Standing Committee on Industry and Technology (INDU). The INDU Committee’s study of the AIDA began in April 2023 and has now been left incomplete. 

During the INDU committee’s study of the AIDA, a total of 137 witnesses appeared before the committee to comment on the AIDA; 113 briefs were also submitted to the committee by a range of individuals and organizations. Many of those submissions expressed concern that the requirements for developers and operators of high-impact systems set out by the AIDA were vaguely described and insufficient for protecting Canada against harmful AI impacts. This insufficiency was due in large part to the AIDA’s lack of robust and inclusive stakeholder engagement. Instead of an open and public process of consultation and deliberation, records provided by the government show that the development of the AIDA primarily occurred behind closed doors with a selective group of industry representatives.

Sectors and workers vulnerable to the impacts of AI systems, marginalized communities, and civil society organizations were largely excluded from participating in the drafting and development of the AIDA. As a result, the AIDA did not adequately serve the interests of many stakeholders. For example, in their submission to the INDU committee, the Canadian Labour Congress deemed the AIDA insufficient for protecting Canadian workers against harmful AI systems, recommending that the legislation be “reconceived from a human, labour, and privacy rights-based perspective, placing transparency, accountability and consultation at the core of the approach to regulating AI.” 

Labour organizations representing creative workers, such as the Directors Guild of Canada, Writers Guild of Canada, Screen Composers Guild of Canada, Music Canada, and a group of advocacy organizations representing Canadian authors and publishers, voiced similar concerns in their submissions. These organizations deemed the AIDA ineffective at protecting artists and creative workers against the social and economic impacts of generative AI.

In addition, briefs submitted by Amnesty International and the Women’s Legal Education and Action Fund observed that the AIDA did not provide sufficient protections for human rights, particularly for the rights of racialized communities, women, and gender minorities. The Assembly of First Nations stated that a lawsuit against the government was likely due to the government’s failure to uphold Indigenous rights by consulting First Nations during the AIDA’s drafting. In their submission, the Assembly of First Nations noted that “AI has the potential to destroy First Nations’ cultures, threaten First Nations’ security, and increase demand for our resources.” Over the course of the INDU committee’s study, submissions such as these made it strikingly clear that the AIDA was not designed to protect those in greatest need of protection against AI. 

Later Life & Death

In response to criticisms of the AIDA, the government proposed a series of amendments to the legislation in November 2023. The proposed amendments added specificity to the scope, requirements, and regulatory powers set out by the legislation, but were not substantive enough to address the concerns of the legislation’s critics and move the AIDA into law. As the Canadian Union of Public Employees (CUPE) wrote in their brief to the INDU committee following the proposal of the amendments: 

“The Committee should allow sufficient time for stakeholders to analyze and provide additional commentary on these new amendments. Still, what is before the committee is a deeply flawed legislative framework on a pivotal matter for all Canadians.” 

The proposed amendments to the AIDA were too little too late. After languishing on the table of the INDU committee throughout 2024, the AIDA, along with the rest of Bill C-27, ultimately failed to become law. The AIDA’s failure can be attributed to several factors, including its unclear and incomplete scope and requirements, limited public participation in the drafting of the legislation, and a now-imploding government that neglected to take greater accountability for these errors. In a strange twist of fate, the legislation intended to bolster trust and accountability in AI systems was unable to overcome a lack of trust and accountability in its own legislative process.

AI Regulation in a Post-AIDA Canada

AI regulation now faces an uncertain future in Canada. With the Conservative Party of Canada likely to form a new government following an upcoming non-confidence vote and federal election, Canada’s AI policy landscape may see significant changes in the coming months and years. 

Although the Conservative Party has not released a definitive official statement of their intended approach to regulating AI, remarks on innovation policy and AI regulation by Conservative MPs such as Rick Perkins and Michelle Rempel Garner indicate that the Conservatives may take a lighter-touch approach to AI regulation than the current government. In contrast to the sweeping, cross-sectoral approach of the AIDA, Conservative AI policy may focus primarily on promoting AI innovation in pursuit of economic growth, leveraging existing laws or creating new legislation only to address specific high-risk uses of AI that are of particular concern to the government. In the United States, a similar approach of prioritizing AI innovation over regulation is also likely under the new Trump administration, potentially adding further deregulatory pressure on a Conservative Canadian government.

Regardless of the regulatory approach that Canada’s next government may take, it is important to recognize that AI regulation can and does exist outside of government. Following the 2023 strikes of the Writers Guild of America (WGA) and Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA), both unions established new regulations for the training of AI models on union-protected creative works and the use of generative AI applications in the workplace. In Canada, professional associations and unions such as the Canadian Bar Association, College of Physicians & Surgeons of Manitoba, and Elementary Teachers’ Federation of Ontario have already created guidelines and rules for regulating how AI tools can be used within their professions and workplaces.

Canadians do not need to wait and hope for our next government to fill the regulatory vacuum left by the death of the AIDA. In the absence of clear and effective national AI regulation, we can organize with our co-workers and our communities to create smaller-scale policies, guidelines, and best practices for how AI should be built and used in the places where we live and work. As Canada’s political upheaval continues and new regulatory norms for AI emerge, these bottom-up approaches to AI regulation will play an important role.

If you are interested in taking AI regulation into your own hands, my essay on AI countergovernance, along with Partnership on AI’s Guidelines for Participatory and Inclusive AI, NIST’s AI Risk Management Framework Playbook, and TechTarget’s guide to creating an acceptable use of AI policy for an organization, will provide useful resources for creating your own policies, guidelines, and shared rules for AI.



About Us


Founded in 2018, the Montreal AI Ethics Institute (MAIEI) is an international non-profit organization equipping citizens concerned about artificial intelligence and its impact on society to take action.

© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.