Montreal AI Ethics Institute


The Paris AI Summit: Deregulation, Fear, and Surveillance

February 17, 2025

✍️ Op-Ed by Ana Brandusescu and Prof Renée Sieber.

Ana is a PhD Candidate at McGill University and a Balsillie Scholar at the Balsillie School of International Affairs.

Prof Sieber is an Associate Professor at McGill University.


The Paris AI Action Summit was marketed as public interest AI, but the underlying message was deregulation. Panel discussions gravitated toward soft law, returned repeatedly to existential risks, and issued new calls to remove troublesome red tape.

Anthropic, Cohere, Scale AI, and OpenAI were there to promote AI safety and trustworthy AI. But how exactly is AI safety defined? Why must we trust AI? The thread through much of the Summit was to make AI safe for companies and nation-states, not for the public. The throughline was "Trust us; we'll protect you," yet protection appeared geared entirely toward profit-seeking. This should worry us, not reassure us.

There has always been tension between hard law (e.g. regulations) and soft law (e.g. self-regulation internal to companies). Countries appeared ready to reject AI regulations, including accountability mechanisms, in favour of feel-good codes of practice and standards. Where legislation like the EU AI Act exists, the European Commission (EC) looks to loosen compliance mechanisms. Indeed, the EC has already rolled back critical pieces, withdrawing the AI Liability Directive and the e-Privacy Regulation to make European AI companies more competitive, validating the remarks of Macron and von der Leyen. This type of regulation-building favours companies and private interests over the public interest. Public interest means protecting privacy, dignity, and labour, and ensuring meaningful civic engagement that can halt AI systems the public finds problematic.

Shortly before the Summit, Trump signed an Executive Order on AI that marked a return to public-private partnerships (PPPs). Conceivably, AI-specific PPPs could set standards and risk levels, operationalize voluntary codes of conduct, and evaluate new AI models. However, PPPs can further a continued hollowing out of government, which was especially clear in the panel "Fuelling Trustworthy AI Innovation through Collaboration between Industry and Government," moderated by Dr Seth Center, Acting Special Envoy for Critical and Emerging Technology at the U.S. Department of State. The panel also included participants from Scale AI, OpenAI, the Digital Development and Information Agency of Singapore, and the Japan AI Safety Institute. Alexandr Wang, CEO of Scale AI, said: "PPPs are the future of AI." Sasha Baker, Head of National Security at OpenAI, bolstered the role of industry: "For the US AI Safety Institute, the idea is to partner with the private sector, ourselves—an ecosystem of testers in the private sector with a global network of AI safety institutes to then enable global scaling." In PPPs, private-sector influence can eclipse public-sector power, driven by knowledge asymmetries as well as diverging objectives (profit maximization over public accountability).

We weren't sanguine about the inclusion of public concerns in AI safety, as envisioned at Bletchley Park in 2023, the first AI summit. In Paris, we witnessed a shift from more expansive considerations of AI safety towards a far narrower AI for defense. Indeed, the UK AI Safety Institute has already announced its rebranding to the UK AI Security Institute. AI safety was always vulnerable to being weaponized and, therefore, easily reduced to improving algorithmic performance and making a nation-state safe. AI safety has now become almost exclusively national AI security, both defensive (e.g. cybersecurity) and offensive (e.g. information warfare). It has also solidified into a panicked race for market dominance. Additionally, AI safety as AI security represents a gold rush for border tech surveillance companies, especially along the Canada-US border, the longest in the world. Soft laws and soft norms (in the case of defense) are insufficient to protect us from unaccountable companies.

It is worth putting the spotlight on Cohere, Canada's most popular AI company, which was recently awarded $240 million by Innovation, Science, and Economic Development (ISED) Canada. Cohere partnered with US defense data analytics and AI firm Palantir to use Palantir's Foundry platform, a form of surveillance technology. Palantir has been linked to numerous human rights abuses, yet remains a preferred vendor on Canada's AI Source List. Canada should not align itself with these sorts of partnerships.

Copyright protection is supposed to be important in a country dedicated to protecting Canadian content (CanCon). Respect for intellectual property (IP) was already problematic with GenAI, as Big Tech firms harvested data without regard to ownership. Recently, national and international media companies filed a lawsuit against Cohere for allegedly engaging in "massive, systematic copyright infringement and trademark infringement." The Summit moved completely away from IP protection.

One reason data harvesting meets so little pushback is the massive concentration of power and wealth in the hands of a small number of companies. This should raise alarms for regulators and civil servants, who work to serve people, not corporations. This is not a distant risk; it is here.

The distancing of the US and the UK from international cooperation by not signing the Paris Declaration does not advance global governance in the way these Summits have been positioned to do. Countries seem to be turning inward to pursue "sovereign AI," yet what is sovereign about Canada's AI when we run all of our tech on US AI systems?

Lastly, the AI Action Summit was ostensibly guided by the theme of public interest AI, and we were part of its working group. The Summit saw the launch of Current AI, a $400 million foundation to support public interest AI. Claims of AI serving the public interest were largely reduced to data infrastructure support for small and medium-sized enterprises. Participatory practices were undervalued even as current collective societal harms have become more obvious. As at Bletchley Park, civil society and non-governmental organizations were once again required to organize their own events (e.g. the Participatory AI Research and Practice Symposium). Amid the international alliances, PPPs, and Big Tech, these activities created some cracks in a dominant discourse that would otherwise exclude the viewpoints of citizens and non-experts.



  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.