
Beyond Consultation: Building Inclusive AI Governance for Canada’s Democratic Future

September 2, 2025

✍️ By Renjie Butalid. 

Renjie is Co-founder and Director of the Montreal AI Ethics Institute (MAIEI).


At the Victoria Forum 2025, “Towards a Better Future: Shifting the Trajectory,” lawmakers, researchers, and civil society leaders gathered on the traditional territory of the lək̓ʷəŋən peoples in Victoria, BC, to examine how Canada can navigate AI governance while maintaining both global competitiveness and democratic values.

Co-hosted by the University of Victoria and the Senate of Canada, the Victoria Forum brought together diverse perspectives on the most pressing policy challenges facing Canada today.

The Montreal AI Ethics Institute (MAIEI) participated alongside Dr. Vanessa Andreotti (University of Victoria) and Renee Black (Goodbot) on the panel AI and Society: The Role of Governments, Civic Institutions and Business in AI Innovation and Governance, moderated by Senator Rosemary Moodie.

The panel brought together three distinct perspectives on AI governance, with MAIEI contributing insights on civic engagement and public participation in AI policy:

1. MAIEI’s Core Message: AI Governance Must Include the Public

MAIEI’s perspective centred on a gap identified at the organization’s founding in 2018: early AI governance conversations were dominated by policymakers and technologists, with the public notably absent despite AI’s far-reaching societal impacts. MAIEI’s work, from its early in-person meetups in Montreal to research summaries, The AI Ethics Brief, and more, represents an ongoing effort to build civic competence around these impacts.

The fundamental question driving this work remains: “What is humanity’s role in a world increasingly driven by algorithms?” This framing positions governance not just as a technical problem but as a democratic challenge requiring broad public engagement.

2. Reframing AI as Socio-Technical Systems

MAIEI’s contribution focused on addressing AI as socio-technical systems encompassing both technical elements (software like ChatGPT, hardware like chips and data centres) and social elements (people, culture, values, organizations). This perspective shifts focus from purely technical solutions to questions of trust, power, and lived impacts.

Without attention to the social dimensions, AI governance risks becoming mere compliance checklists rather than meaningful safeguards that address how AI actually shows up in people’s lives.

3. Canada’s Strategic Position in Global AI Governance

The discussion included analysis of how different jurisdictions approach AI governance: the U.S. market-driven model, the EU’s rights-based framework, China’s state-led approach, and the UK’s sector-specific model. This context positioned Canada’s opportunity to balance global competitiveness with democratic values like equity, accountability, and resilience.

The prosperity narrative is important, but the real test for Canadian leadership is whether our policies help us compete globally while serving Canadians fairly.

4. Moving Beyond Consultation to Co-Creation

Building on MAIEI’s community engagement approach, the discussion emphasized that meaningful inclusion requires co-creation rather than traditional consultation. This means embedding diverse perspectives (Indigenous, racialized, rural, low-income) into how we fundamentally define fairness and accountability in AI systems throughout the development lifecycle, not just at the end.

5. Dr. Andreotti’s Meta-Relational AI Alternative

Dr. Andreotti presented a compelling alternative to extractive AI models through “meta-relational AI,” an approach that recognizes AI as part of nature’s interconnected web rather than a separate technological object. Her work demonstrates how AI can be trained to cultivate compassion, responsibility, and engagement with complexity rather than simply optimizing for engagement or profit.

This paradigm shift suggests possibilities for Canadian leadership in developing AI that serves human flourishing rather than corporate extraction, though it requires fundamental changes in how we approach AI training and deployment.

6. Critical Gaps in Current Approaches

The panel discussion also highlighted significant challenges with Canada’s current trajectory, particularly around Bill C-2, which has drawn criticism for facilitating data transfers to the U.S. on national security grounds rather than advancing digital sovereignty. Civil society has mobilized against the current approach, with hundreds of organizations signing letters calling for policy changes.

The discussion also emphasized that without comprehensive regulation, voluntary industry standards emerge unevenly, with some organizations leading while others lag behind, creating accountability gaps that risk making governance efforts performative rather than substantive.


Key Takeaways for Canadian Policymakers

The Victoria Forum discussion on AI and Society suggests three priorities for Canadian AI governance:

1. Build socio-technical governance frameworks that address both technological and social dimensions of AI systems, with clear accountability mechanisms and redress procedures when harms occur.

2. Move from consultation to co-creation by ensuring AI development reflects Canadian diversity through meaningful participation of Indigenous, racialized, rural, and low-income communities in defining fairness and accountability.

3. Strengthen civic institutions that can bridge technical AI debates with public understanding while maintaining pressure for accountability beyond voluntary industry standards.

Canada has the opportunity to answer the question driving this work, “What is humanity’s role in a world increasingly driven by algorithms?”, in ways that honour both innovation and democratic values. The Victoria Forum discussion suggests this requires moving beyond technical solutions toward inclusive, values-driven governance that puts people at the centre of AI policy.


Photo header description: Closing comments on Day 3 of the Victoria Forum featuring Rowan Gentleman-Sylvester (CityHive), Mason Ducharme (Centre for First Nations Governance), and the Rt. Hon. Joe Clark (Former Prime Minister of Canada), moderated by Prof. Adel Guitouni (University of Victoria)

