

✍️ Op-Ed by Ana Brandusescu and Prof Renée Sieber.
Ana is a PhD Candidate at McGill University and a Balsillie Scholar at the Balsillie School of International Affairs.

Prof Sieber is an Associate Professor at McGill University.
The Paris AI Action Summit was marketed under the banner of public interest AI, but the underlying message was deregulation. Panel discussions gravitated toward soft law, continued references to existential risks, and fresh calls to remove troublesome red tape.
Anthropic, Cohere, Scale AI, and OpenAI were there to promote AI safety and trustworthy AI. But how is AI safety defined, exactly? And why must we trust AI? The thread running through much of the Summit was making AI safe for companies and nation states, not for the public. The message was "Trust us; we'll protect you," yet the protection on offer appeared geared entirely toward profit-seeking. This should worry us, not assure us.

There has always been tension between hard law (e.g. regulations) and soft law (e.g. companies' self-regulation). Countries appeared ready to reject AI regulations, including accountability mechanisms, in favour of feel-good codes of practice and standards. Where legislation does exist, such as the EU AI Act, the European Commission (EC) is looking to loosen compliance mechanisms. Indeed, the EC has already rolled back critical pieces to make European AI companies more competitive, withdrawing the AI Liability Directive and the e-Privacy Regulation and validating the remarks of Macron and von der Leyen. This type of regulation-building favours companies and private interests over the public interest. The public interest means protecting privacy, dignity, and labour, and ensuring meaningful civic engagement that can halt AI systems the public finds problematic.
Shortly before the Summit, Trump signed an Executive Order on AI that marked a return to public-private partnerships (PPPs). Conceivably, AI-specific PPPs could set standards and risk levels, operationalize voluntary codes of conduct, and evaluate new AI models. However, PPPs risk furthering the hollowing out of government, a dynamic that was especially clear in the panel "Fuelling Trustworthy AI Innovation through Collaboration between Industry and Government," moderated by Dr Seth Center, Acting Special Envoy for Critical and Emerging Technology at the U.S. Department of State. The panel also saw participation from Scale AI, OpenAI, the Digital Development and Information Agency of Singapore, and the Japan AI Safety Institute. Alexandr Wang, CEO of Scale AI, said: "PPPs are the future of AI." Sasha Baker, Head of National Security at OpenAI, bolstered the role of industry: "For the US AI Safety Institute, the idea is to partner with the private sector, ourselves—an ecosystem of testers in the private sector with a global network of AI safety institutes to then enable global scaling." In PPPs, private sector influence can eclipse public sector power, driven by knowledge asymmetries and divergent objectives (profit maximization over public accountability).
We weren't sanguine about the inclusion of public concerns in AI safety as envisioned at Bletchley Park in 2023, the site of the first AI Safety Summit. In Paris, we witnessed a shift from more expansive considerations of AI safety towards a far narrower AI for defence. Indeed, the UK AI Safety Institute has already announced its rebranding as the UK AI Security Institute. AI safety was always vulnerable to being weaponized and, therefore, easily reduced to improving algorithmic performance and making a nation state safe. AI safety has now become almost exclusively national AI security, both defensive (e.g. cybersecurity) and offensive (e.g. information warfare). It has also solidified into a panicked race for market dominance. Additionally, AI safety as AI security represents a gold rush for border tech surveillance companies, especially along the Canada-US border, the longest in the world. Soft laws and soft norms (in the case of defence) are insufficient to protect us from unaccountable companies.
It is worth putting the spotlight on Cohere, Canada's most popular AI company, which was recently awarded $240 million by Innovation, Science and Economic Development (ISED) Canada. Cohere partnered with Palantir, a US defence data analytics and AI firm, to use Palantir's Foundry platform, a form of surveillance technology. Palantir has been linked to numerous human rights abuses, yet it remains a preferred vendor on Canada's AI Source List. Canada should not align itself with these sorts of partnerships.
Copyright protection is supposed to be important in a country dedicated to protecting Canadian content (CanCon). Respect for intellectual property (IP) was already strained by GenAI, as Big Tech firms harvested data without regard to ownership. Recently, national and international media companies filed a lawsuit against Cohere for allegedly engaging in "massive, systematic copyright infringement and trademark infringement." The Summit moved completely away from IP protection.
One reason data harvesting meets so little pushback is the massive concentration of power and wealth in the hands of a small number of companies. This should raise alarms for regulators and civil servants, who work to serve people, not corporations. It is not a distant risk; it is here.
By declining to sign the Paris Declaration, the US and the UK distanced themselves from international cooperation, undercutting the global governance these Summits were positioned to advance. Countries seem to be turning inward to pursue "sovereign AI," yet what is sovereign about Canada's AI when we run all of our tech on US AI systems?
Lastly, the AI Action Summit was ostensibly guided by the theme of public interest AI, and we were part of the working group on that theme. The Summit saw the launch of Current AI, a $400 million foundation to support public interest AI. Claims of AI serving the public interest were largely reduced to data infrastructure support for small and medium-sized enterprises. Participatory practices were undervalued even as the collective societal harms of AI have become more obvious. As at Bletchley Park, civil society and non-governmental organizations were once again required to organize their own events (e.g. the Participatory AI Research and Practice Symposium). Amid the international alliances, PPPs, and Big Tech, these activities created some cracks in a dominant discourse that would otherwise exclude the viewpoints of citizens and non-experts.