The Paris AI Summit: Deregulation, Fear, and Surveillance

February 17, 2025

✍️ Op-Ed by Ana Brandusescu and Prof Renée Sieber.

Ana is a PhD Candidate at McGill University and a Balsillie Scholar at the Balsillie School of International Affairs.

Prof Sieber is an Associate Professor at McGill University.


The Paris AI Action Summit was marketed as public interest AI, but the underlying message was deregulation. Panel discussions gravitated toward soft law, returned repeatedly to existential risks, and issued new calls to remove troublesome red tape.

Anthropic, Cohere, Scale AI, and OpenAI were there to promote AI safety and trustworthy AI. But how exactly is AI safety defined? Why must we trust AI? The thread through much of the Summit was to make AI safe for companies and nation-states, not for the public. The throughline was “Trust us; we’ll protect you”; however, protection appeared geared entirely toward profit-seeking. This should worry us, not assure us.

There has always been tension between hard law (e.g. regulations) and soft law (e.g. self-regulation internal to companies). Countries appeared ready to reject AI regulations, including accountability mechanisms, in favour of feel-good codes of practice and standards. Where legislation such as the EU AI Act exists, the European Commission (EC) looks to loosen compliance mechanisms. Indeed, the EC has already rolled back critical pieces in order to make European AI companies more competitive, withdrawing the AI Liability Directive and the e-Privacy Regulation and validating the remarks of Macron and von der Leyen. This type of regulation-building favours companies and private interests over the public interest. The public interest means protecting privacy, dignity, and labour, and ensuring meaningful civic engagement that can halt AI systems the public finds problematic.

Shortly before the Summit, Trump signed an Executive Order on AI that marked a return to public-private partnerships (PPPs). Conceivably, AI-specific PPPs could set standards and risk levels, operationalize voluntary codes of conduct, and evaluate new AI models. However, PPPs can further a continued hollowing out of government, which was especially clear in the panel “Fuelling Trustworthy AI Innovation through Collaboration between Industry and Government,” moderated by Dr Seth Center, Acting Special Envoy for Critical and Emerging Technology at the U.S. Department of State. The panel also saw participation from Scale AI, OpenAI, the Digital Development and Information Agency of Singapore, and the Japan AI Safety Institute. Alexandr Wang, CEO of Scale AI, said: “PPPs are the future of AI.” Sasha Baker, Head of National Security at OpenAI, bolstered the role of industry: “For the US AI Safety Institute, the idea is to partner with the private sector, ourselves—an ecosystem of testers in the private sector with a global network of AI safety institutes to then enable global scaling.” In PPPs, private sector influence can eclipse public sector power, driven by knowledge asymmetry as well as differentials in objectives (profit maximization over public accountability).

We weren’t sanguine about the inclusion of public concerns in AI safety as envisioned at Bletchley Park in 2023, the first AI Safety Summit. In Paris, we witnessed a shift from more expansive considerations of AI safety towards a far narrower AI for defense. Indeed, the UK AI Safety Institute has already announced its rebranding as the UK AI Security Institute. AI safety was always vulnerable to being weaponized and, therefore, easily reduced to improving algorithmic performance and making a nation-state safe. AI safety has now become almost exclusively national AI security, both defensive (e.g. cybersecurity) and offensive (e.g. information warfare). It has also solidified into a panicked race for market dominance. Additionally, AI safety as AI security represents a gold rush for border tech surveillance companies, especially at the Canada-US border, the longest in the world. Soft laws and soft norms (in the case of defense) are insufficient to protect us from unaccountable companies.

It is worth putting the spotlight on Cohere, Canada’s most popular AI company, which was recently awarded $240 million by Innovation, Science and Economic Development Canada (ISED). Cohere partnered with US defense data analytics and AI firm Palantir to use Palantir’s Foundry platform, a form of surveillance technology. Palantir has been linked to numerous human rights abuses, yet is still a preferred vendor on Canada’s AI Source List. Canada should not align itself with these sorts of partnerships.

Copyright protection is supposed to be important in a country dedicated to protecting Canadian content (CanCon). Respect for intellectual property (IP) was already problematic with GenAI as Big Tech firms harvested data without regard to ownership. Recently, national and international media companies filed a lawsuit against Cohere for allegedly engaging in “massive, systematic copyright infringement and trademark infringement.” The Summit moved completely away from IP protection.

One reason data harvesting proceeds with so little pushback is the massive concentration of power and wealth in the hands of a very small number of companies. This should raise alarms for regulators and civil servants who work to serve people, not corporations. This is not a distant risk; it is here.

By not signing the Paris Declaration, the US and the UK distanced themselves from international cooperation, which does not advance global governance in the way these Summits have been positioned to do. Countries seem to be turning inward to pursue “sovereign AI,” yet what is sovereign about Canada’s AI when we run all of our tech on US AI systems?

Lastly, the AI Action Summit was ostensibly guided by the theme of public interest AI, and we were part of its working group. The Summit saw the launch of Current AI, a $400 million foundation to support public interest AI. Claims of AI serving the public interest were largely reduced to data infrastructure support for small and medium-sized enterprises. Participatory practices were undervalued even as current collective societal harms have become more obvious. As at Bletchley Park, civil society and non-governmental organizations were once again required to organize their own events (e.g. the Participatory AI Research and Practice Symposium). Amid the international alliances, PPPs, and Big Tech, these activities created some cracks in the dominant discourse that would otherwise exclude the viewpoints of citizens and non-experts.
