

✍️ By Renjie Butalid.
Renjie is Co-founder and Director of the Montreal AI Ethics Institute (MAIEI).
At the Victoria Forum 2025, “Towards a Better Future: Shifting the Trajectory,” lawmakers, researchers, and civil society leaders gathered on the traditional territory of the lək̓ʷəŋən peoples in Victoria, BC, to examine how Canada can navigate AI governance while maintaining both global competitiveness and democratic values.
Co-hosted by the University of Victoria and the Senate of Canada, the Victoria Forum brought together diverse perspectives on Canada’s most pressing policy challenges of our time.
The Montreal AI Ethics Institute (MAIEI) participated alongside Dr. Vanessa Andreotti (University of Victoria) and Renee Black (Goodbot) on a panel, AI and Society: The Role of Governments, Civic Institutions and Business in AI Innovation and Governance, moderated by Senator Rosemary Moodie.

The panel brought together three distinct perspectives on AI governance, with MAIEI contributing insights on civic engagement and public participation in AI policy. Key themes from the discussion included:
1. MAIEI’s Core Message: AI Governance Must Include the Public
MAIEI’s perspective centred on a gap identified at the organization’s founding in 2018: early AI governance conversations were dominated by policymakers and technologists, with the public notably absent despite AI’s deep societal impacts. MAIEI’s work, from the early in-person meetups in Montreal to research summaries, The AI Ethics Brief, and more, represents an effort to build civic competence around AI’s societal impacts.
The fundamental question driving this work remains: “What is humanity’s role in a world increasingly driven by algorithms?” This framing positions governance not just as a technical problem but as a democratic challenge requiring broad public engagement.
2. Reframing AI as Socio-Technical Systems
MAIEI’s contribution focused on addressing AI as socio-technical systems encompassing both technical elements (software like ChatGPT, hardware like chips and data centres) and social elements (people, culture, values, organizations). This perspective shifts focus from purely technical solutions to questions of trust, power, and lived impacts.
Without attention to the social dimensions, AI governance risks becoming mere compliance checklists rather than meaningful safeguards that address how AI actually shows up in people’s lives.
3. Canada’s Strategic Position in Global AI Governance
The discussion included analysis of how different jurisdictions approach AI governance: the U.S. market-driven model, the EU’s rights-based framework, China’s state-led approach, and the UK’s sector-specific model. This context positioned Canada’s opportunity to balance global competitiveness with democratic values like equity, accountability, and resilience.
The prosperity narrative is important, but the real test for Canadian leadership is whether our policies help us compete globally while serving Canadians fairly.
4. Moving Beyond Consultation to Co-Creation
Building on MAIEI’s community engagement approach, the discussion emphasized that meaningful inclusion requires co-creation rather than traditional consultation. This means embedding diverse perspectives (Indigenous, racialized, rural, low-income) into how we fundamentally define fairness and accountability in AI systems throughout the development lifecycle, not just at the end.
5. Dr. Andreotti’s Meta-Relational AI Alternative
Dr. Andreotti presented a compelling alternative to extractive AI models through “meta-relational AI,” an approach that recognizes AI as part of nature’s interconnected web rather than a separate technological object. Her work demonstrates how AI can be trained to cultivate compassion, responsibility, and engagement with complexity rather than simply optimizing for engagement or profit.
This paradigm shift suggests possibilities for Canadian leadership in developing AI that serves human flourishing rather than corporate extraction, though it requires fundamental changes in how we approach AI training and deployment.
6. Critical Gaps in Current Approaches
The panel discussion also highlighted significant challenges with Canada’s current trajectory, particularly around Bill C-2, which has faced criticism for facilitating data transfers to the U.S. on national security grounds rather than advancing digital sovereignty. Civil society organizations have mobilized against current approaches, with hundreds of organizations signing letters calling for policy changes.
Panelists also emphasized that without comprehensive regulation, voluntary industry standards emerge unevenly, with some organizations leading while others lag behind. The resulting accountability gaps risk making governance efforts performative rather than substantive.
Key Takeaways for Canadian Policymakers
The Victoria Forum discussion on AI and Society suggests three priorities for Canadian AI governance:
1. Build socio-technical governance frameworks that address both technological and social dimensions of AI systems, with clear accountability mechanisms and redress procedures when harms occur.
2. Move from consultation to co-creation by ensuring AI development reflects Canadian diversity through meaningful participation of Indigenous, racialized, rural, and low-income communities in defining fairness and accountability.
3. Strengthen civic institutions that can bridge technical AI debates with public understanding while maintaining pressure for accountability beyond voluntary industry standards.
Canada has the opportunity to answer the question at the heart of this work, what is humanity’s role in a world increasingly driven by algorithms, in ways that honour both innovation and democratic values. The Victoria Forum discussion suggests this requires moving beyond technical solutions toward inclusive, values-driven governance that puts people at the centre of AI policy.
Photo header description: Closing comments on Day 3 of the Victoria Forum featuring Rowan Gentleman-Sylvester (CityHive), Mason Ducharme (Centre for First Nations Governance), and the Rt. Hon. Joe Clark (Former Prime Minister of Canada), moderated by Prof. Adel Guitouni (University of Victoria).