

✍️ By Renjie Butalid.
Renjie is Co-founder and Director of the Montreal AI Ethics Institute (MAIEI).
Can ethical AI deliver both innovation and oversight, or are we regulating ourselves out of competitiveness?

At the Point Zero Forum 2025 in Zurich, a roundtable of global leaders, including regulators, central bankers, cloud providers, and civil society organizations, convened under the Chatham House Rule to explore how we might balance innovation, regulation, and ethics in the global race to govern AI.
The Montreal AI Ethics Institute (MAIEI) participated in the session, contributing insights on civic engagement, trust-building, and responsible AI deployment and procurement in high-impact sectors.

The conversation offered five key insights into how jurisdictions could navigate this complex landscape:
- The global impact of the EU AI Act was a central focus. Its risk-based classification of AI systems, including outright bans on use cases like social scoring, sets a precedent in AI governance. However, concerns were raised that overly rigid compliance obligations could drive startups, talent, and capital out of the EU. Participants emphasized the need for regulatory clarity paired with agility to foster innovation.
- Principles-based regulatory models were seen as a pragmatic alternative. Jurisdictions favouring sector-specific approaches, such as the UK’s vertical model (in contrast to the EU’s horizontal framework), allow regulators to tailor oversight to their mandates. Similar experimentation-driven models are emerging in Singapore and Japan, which use regulatory sandboxes and iterative guidance shaped by real-world use cases.
- Ethical AI is emerging as a strategic advantage. In sectors like finance and healthcare, responsible AI practices are increasingly influencing procurement decisions. Certification frameworks, such as ISO/IEC 42001, are gaining traction as signals of trust and governance maturity. With regulatory shifts, market pressures, and public expectations converging, ethical AI is moving from a “nice-to-have” to a business necessity.
Note: We explore this further in our recent Tech Policy Press op-ed, “Responsible AI as a Business Necessity: Three Forces Driving Market Adoption.”
- Collaborative ecosystems are essential. Public-private co-creation of policy frameworks was highlighted as key to effective governance, alongside the vital role of civil society and bottom-up public pressure in shaping responsible AI. Growing AI literacy, civic engagement, and worker organizing are raising societal expectations around transparency, fairness, and redress. Governance mechanisms must be rigorous yet accessible, particularly for small and medium-sized enterprises (SMEs) and startups, and responsive to these bottom-up forces. Labeling systems, akin to sustainability or data privacy certifications, could help signal trust, but must avoid becoming exclusionary or performative.
- Balancing innovation with precaution remained a central theme. While regulation is essential for safeguarding rights and public trust, participants debated whether the current risk posture is overly cautious. One provocation captured the tension: “The question isn’t whether AI should be regulated, but how to do so without stifling what makes it valuable.”
AI governance is no longer hypothetical. It is being shaped in real time by cross-border debates over values, incentives, and risk. These discussions at the Point Zero Forum suggest that ethical AI can indeed be a competitive advantage if regulation is designed with flexibility, collaboration, and global interoperability in mind.
Many thanks to the Global Finance & Technology Network (GFTN) and the Swiss State Secretariat for International Finance for organizing the forum. We look forward to continued engagement with the Point Zero Forum community.