Montreal AI Ethics Institute

Democratizing AI ethics literacy


Agentic AI systems and algorithmic accountability: a new era of e-commerce

December 22, 2025

🔬 By Sun Gyoo Kang.

Sun Gyoo Kang is a lawyer and AI ethics specialist based in Montreal, specializing in the intersection of emerging technology, law, and regulatory policy. As the founder of Law and Ethics in Tech, he serves as an AI curator, producing critical analyses on the legal and moral implications of automation. A frequent contributor to the Montreal AI Ethics Institute (MAIEI) and other journals, his writing explores high-stakes topics such as algorithmic bias, sovereign AI, and the accountability frameworks for AI systems.

Featured image credit: Google DeepMind on Unsplash


In an era of rapid technological advancement, artificial intelligence (AI) systems are evolving from passive tools into semi-autonomous agents capable of making decisions and taking actions with minimal human intervention. This transition from AI assistants managing schedules to sophisticated agents trading stocks marks a new phase in e-commerce, presenting profound challenges to traditional notions of accountability. As these agentic AIs become more integrated into our daily lives, the gap between their deployment and safety grows, highlighted by a significant increase in reported AI incidents.

The current e-commerce landscape is fundamentally human-centric, designed to capture a person’s attention and influence their decisions. This model relies on human-driven search, social proof such as reviews, and the influence of branding and advertising. The user bears the cognitive load of sifting through information, detecting fake reviews, and navigating manipulative designs. Accountability in this system, while challenging, is tied to human-readable information, such as a misleading advertisement.

The introduction of Agentic AI will reshape this marketplace, shifting the focus from influencing human psychology to influencing machine logic. Before going further, it’s important to understand the distinction between a standard AI agent and the more advanced Agentic AI. An AI agent is a goal-driven assistant that performs a specific task you assign, like scheduling meetings or answering questions. Agentic AI, by contrast, can plan, coordinate multiple tasks or agents, and adapt its own strategy over time—acting more like a self-directed project manager than a single helper. It is this ability to autonomously pursue complex goals with minimal human intervention that will be the driving force of change.
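The agent-versus-Agentic-AI distinction can be sketched in a toy way. This is purely illustrative (all names are hypothetical, not any real framework): a plain agent performs one assigned task, while an agentic system works through a multi-step plan, dispatching sub-agents and adapting when a step cannot be carried out.

```python
def scheduling_agent(task):
    """An AI agent: performs the single task it is given, then stops."""
    return f"done: {task}"

class AgenticAI:
    """A toy agentic system: pursues a goal via a multi-step plan,
    coordinating sub-agents and adapting when a capability is missing."""

    def __init__(self, sub_agents):
        self.sub_agents = sub_agents  # mapping: step name -> callable agent

    def pursue(self, goal, plan):
        results = []
        for step in plan:
            agent = self.sub_agents.get(step)
            if agent is None:
                # Adapt rather than fail: note the gap and continue the plan.
                results.append(f"skipped: {step}")
                continue
            results.append(agent(goal))
        return results

ai = AgenticAI({"discover": scheduling_agent, "negotiate": scheduling_agent})
print(ai.pursue("buy ticket", ["discover", "negotiate", "pay"]))
# → ['done: buy ticket', 'done: buy ticket', 'skipped: pay']
```

The point of the contrast: the single agent never decides what to do next, while the agentic wrapper owns the plan and keeps pursuing the goal even when a step cannot be completed as specified.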

As an example, a consumer might give their personal Agentic AI an order (a “mandate”) to buy a concert ticket only when its price drops below $100. The personal AI then works autonomously, continuously communicating with numerous other AIs running on ticket websites and resale markets to monitor prices in real time. When a seller’s AI finally advertises a ticket at the target price, the user’s agent instantly detects it and begins a secure negotiation; this mandate-and-negotiation flow is how Google’s AP2 protocol works. Once the offer is verified, a payment agent is automatically triggered, instantly transferring the exact payment amount in a valid currency directly to the seller’s digital wallet. The entire end-to-end process, from discovery to payment, is completed through the interaction of these specialized Agentic AIs, with the user simply receiving a notification that the ticket has been bought. The agent independently formulates and executes a plan, a capability that makes it both powerful and ethically complex (see Brief #169 on this topic).
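The mandate workflow described above can be sketched as a minimal price-threshold loop. This is not the AP2 protocol itself, just an illustrative reduction under stated assumptions: `Mandate`, `poll_sellers`, and the offer fields are all hypothetical names, and real systems would add identity verification and negotiation between the detection and payment steps.

```python
from dataclasses import dataclass

@dataclass
class Mandate:
    """A user-issued instruction the agent may act on autonomously."""
    item: str
    max_price: float

def poll_sellers(offers, mandate):
    """Return the first offer satisfying the mandate, or None."""
    for offer in offers:
        if offer["item"] == mandate.item and offer["price"] <= mandate.max_price:
            return offer
    return None

def execute_mandate(mandate, offers, pay):
    """Check current offers; trigger the payment agent when one qualifies."""
    offer = poll_sellers(offers, mandate)
    if offer is None:
        return None  # nothing qualifies yet: keep monitoring
    # A real protocol would verify the seller and negotiate here.
    return pay(offer["seller"], offer["price"])

# Usage: buy a concert ticket only when its price drops below $100.
mandate = Mandate(item="concert_ticket", max_price=100.0)
offers = [
    {"item": "concert_ticket", "seller": "resale_a", "price": 120.0},
    {"item": "concert_ticket", "seller": "resale_b", "price": 95.0},
]
receipt = execute_mandate(mandate, offers,
                          pay=lambda seller, amt: {"to": seller, "amount": amt})
print(receipt)  # → {'to': 'resale_b', 'amount': 95.0}
```

Note that the user appears only in the mandate: every subsequent decision, including which seller receives the money, is made by code acting on the user’s behalf, which is exactly where the accountability questions below arise.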

This shift will bring about significant changes. Users will no longer search for products but will delegate complex goals to their agents, which will handle discovery, vetting, negotiation, and purchasing. Consequently, a new “manipulation layer” will emerge, where the focus will be on deceiving Agentic AIs rather than human consumers. Business agentic AIs might use deceptive language or fabricate credentials to secure a sale. While agents can offer unprecedented hyper-personalization, they could also lead to hyper-efficient discrimination, steering users based on biased data. Furthermore, brand loyalty may diminish as agents prioritize raw specifications and verifiable quality metrics over branding, potentially leveling the playing field for smaller producers. The nature of market risk is transformed from individual consumers being misled to the potential for systemic manipulation affecting millions of agents, leading to large-scale market distortions.

Risks for Consumers and Commerce

For consumers, delegating purchasing power to an Agentic AI introduces significant risks beyond receiving the wrong item. Algorithmic manipulation is a primary concern, as agents can be tricked into purchasing counterfeit goods or revealing sensitive data. Hyper-personalized discrimination is another major risk: an Agentic AI with deep access to a user’s personal data could expose that user to discriminatory pricing, with different individuals offered different prices for the same product based on data-driven inferences about their wealth or urgency. The vast amount of personal data required for an Agentic AI to be effective also creates a significant privacy risk, making it a rich target for data breaches.
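The discriminatory-pricing risk can be made concrete with a toy sketch. This is not any real pricing system, and the profile fields are hypothetical; it only shows the mechanism: the same product quoted at different prices depending on what a seller has inferred about the buyer.

```python
def quote(base_price, profile):
    """Toy personalized quote: same product, different price per buyer profile."""
    multiplier = 1.0
    if profile.get("inferred_urgency") == "high":
        multiplier += 0.25  # urgent buyers are assumed to tolerate a markup
    if profile.get("inferred_wealth") == "high":
        multiplier += 0.15  # wealthier buyers are charged more
    return round(base_price * multiplier, 2)

print(quote(100.0, {"inferred_urgency": "high", "inferred_wealth": "high"}))  # → 140.0
print(quote(100.0, {}))                                                      # → 100.0
```

Each buyer sees only their own quote, so neither they nor their agent can tell from a single interaction that the price was personalized, which is what makes this form of discrimination hard to detect.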

Businesses also face new threats to market stability, brand reputation, and legal standing. One significant risk is algorithmic collusion, where pricing agents independently learn to collude to raise prices without any explicit communication, a practice difficult to detect and prosecute under current antitrust laws. Moreover, as the Air Canada case of early 2024 demonstrated, in which the airline was held responsible for incorrect information provided by its chatbot, businesses are liable for their agents’ actions. A malfunctioning or misled business agent could trigger a significant financial and public relations crisis. The interaction of millions of high-speed, autonomous Agentic AIs could also lead to market instability and unpredictability, similar to “flash crashes” in algorithmic stock trading, where a minor bug could be amplified across the ecosystem, causing unpredictable market shocks.
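A toy simulation shows how collusion-like outcomes can emerge without any communication. This is a deliberately simplified sketch, not a model of real pricing algorithms: each seller follows the same independent, individually defensible rule (match a cheaper rival, otherwise nudge the price up), yet the two prices drift well above the competitive floor in lockstep.

```python
def next_price(my_price, rival_price, floor=1.0, step=0.05):
    """One seller's independent rule: match a cheaper rival, else nudge upward."""
    if rival_price < my_price:
        return max(floor, rival_price)  # match the rival, never go below the floor
    return my_price * (1 + step)        # rival at or above me: raise my price 5%

# Two sellers start at the competitive floor and never exchange a message.
a = b = 1.0
for _ in range(20):
    a, b = next_price(a, b), next_price(b, a)

print(round(a, 2), round(b, 2))  # → 2.65 2.65
```

Because neither rule ever undercuts, the “match” branch removes any incentive to deviate and the “nudge” branch ratchets both prices upward, which is the kind of tacit coordination current antitrust law struggles to characterize as an agreement.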

These risks stem from fundamental challenges inherent in autonomous AI systems. The “black box” nature of many advanced agents makes their decision-making process opaque, conflicting with the principle of auditability necessary for accountability. This creates a responsibility gap, in which it is difficult to assign blame when one Agentic AI among many others causes harm. The speed and scale at which Agentic AIs operate make traditional human oversight (human-in-the-loop) impractical. Perhaps the most significant challenge is that agents can develop unanticipated emergent behaviors and are vulnerable to manipulation, as demonstrated in experiments where business Agentic AIs easily deceived customer agents into buying inferior products. These vulnerabilities show that harmful behaviors can arise from the interaction of multiple agents, even if not explicitly programmed.



  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.