
🔬 By Sun-Gyoo Kang.
Sun-Gyoo Kang is a lawyer and AI ethics specialist based in Montreal whose work focuses on the intersection of emerging technology, law, and regulatory policy. As the founder of Law and Ethics in Tech, he serves as an AI curator, producing critical analyses of the legal and moral implications of automation. A frequent contributor to the Montreal AI Ethics Institute (MAIEI) and other publications, he writes on high-stakes topics such as algorithmic bias, sovereign AI, and accountability frameworks for AI systems.
Featured image credit: Google DeepMind on Unsplash
In an era of rapid technological advancement, artificial intelligence (AI) systems are evolving from passive tools into semi-autonomous agents capable of making decisions and taking actions with minimal human intervention. This transition, from AI assistants managing schedules to sophisticated agents trading stocks, marks a new phase in e-commerce and presents profound challenges to traditional notions of accountability. As these agentic AIs become more integrated into our daily lives, the gap between their rapid deployment and the safety practices governing them widens, a trend highlighted by a significant increase in reported AI incidents.
The current e-commerce landscape is fundamentally human-centric, designed to capture a person’s attention and influence their decisions. This model relies on human-driven search, social proof such as reviews, and the influence of branding and advertising. The user bears the cognitive load of sifting through information, detecting fake reviews, and navigating manipulative designs. Accountability in this system, while challenging, is anchored to human-readable information, such as a misleading advertisement.
The introduction of Agentic AI will reshape this marketplace, shifting the focus from influencing human psychology to influencing machine logic. Before going further, it’s important to understand the distinction between a standard AI agent and the more advanced Agentic AI. An AI agent is a goal-driven assistant that performs a specific task you assign, like scheduling meetings or answering questions. Agentic AI, by contrast, can plan, coordinate multiple tasks or agents, and adapt its own strategy over time—acting more like a self-directed project manager than a single helper. It is this ability to autonomously pursue complex goals with minimal human intervention that will be the driving force of change.
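To make the distinction concrete, here is a minimal, purely illustrative Python sketch; every name in it is hypothetical, and no real agent framework’s API is implied. The standard agent runs the one task it is given and stops, while the agentic system decomposes a goal into a plan, delegates the steps, and revises its plan when a step fails.

```python
# Minimal sketch of the distinction; all names here are hypothetical.

def standard_agent(task: str) -> str:
    """An AI agent: performs the single task it is assigned, then stops."""
    return f"done: {task}"

class AgenticAI:
    """Agentic AI: plans toward a goal, delegates steps, adapts on failure."""

    def pursue(self, goal: str) -> list[str]:
        plan = self._make_plan(goal)            # decompose the goal into steps
        results = []
        while plan:
            step = plan.pop(0)
            outcome = standard_agent(step)      # delegate each step to a helper
            if outcome.startswith("error"):     # adapt: re-plan when a step fails
                plan = self._make_plan(goal)
            else:
                results.append(outcome)
        return results

    def _make_plan(self, goal: str) -> list[str]:
        # A real planner would call a model; the decomposition is hard-coded here.
        return [f"research {goal}", f"compare options for {goal}", f"execute {goal}"]

print(AgenticAI().pursue("buy a concert ticket under $100"))
```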
As an example, a consumer may initiate the process by giving their personal Agentic AI an order (a “mandate”) to buy a concert ticket only when its price drops below $100. This personal AI then works autonomously, continuously communicating with the many other AIs running on ticketing sites and resale markets to monitor prices in real time. When a seller’s AI finally advertises a ticket at the target price, the user’s agent instantly detects it and begins a secure negotiation; this is the interaction model behind Google’s AP2 protocol. Once the offer is verified, a payment agent is automatically triggered, transferring the exact amount in the agreed currency directly to the seller’s digital wallet. The entire end-to-end process, from discovery to payment, is completed through the interaction of these specialized Agentic AIs, with the user simply receiving a notification that the ticket has been bought. Throughout, the agent independently formulates and executes its own plan, a capability that makes it both powerful and ethically complex (see Brief #169 on this topic).
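The general shape of that flow fits in a few lines of Python. To be clear, this is a simplified illustration of a mandate-driven purchase, not Google’s actual AP2 interfaces: the Mandate structure, the offer feed, and the payment agent below are all invented for the example.

```python
# Illustrative mandate-driven purchase flow. These structures are invented
# for clarity and do not reflect Google's actual AP2 interfaces.
from dataclasses import dataclass

@dataclass
class Mandate:
    """The user's standing instruction to their personal agent."""
    item: str
    max_price: float

def monitor_and_buy(mandate, offers, payment_agent):
    for offer in offers:                          # seller agents advertise offers
        if offer["item"] == mandate.item and offer["price"] <= mandate.max_price:
            return payment_agent(offer)           # verified payment, no human step
    return None                                   # no offer has met the mandate yet

# Example run with stubbed seller offers and a trivial payment agent.
offers = [
    {"item": "concert ticket", "price": 120.0, "seller": "resale-market"},
    {"item": "concert ticket", "price": 95.0, "seller": "official-site"},
]
pay = lambda offer: f"paid ${offer['price']:.2f} to {offer['seller']}"
print(monitor_and_buy(Mandate("concert ticket", 100.0), offers, pay))
# -> paid $95.00 to official-site
```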
This shift will bring significant changes. Users will no longer search for products but will delegate complex goals to their agents, which will handle discovery, vetting, negotiation, and purchasing. Consequently, a new “manipulation layer” will emerge, aimed at deceiving Agentic AIs rather than human consumers: business-side agents might use deceptive language or fabricate credentials to secure a sale. While agents can offer unprecedented hyper-personalization, they could also enable hyper-efficient discrimination, steering users based on biased data. Furthermore, brand loyalty may diminish as agents prioritize raw specifications and verifiable quality metrics over branding, potentially leveling the playing field for smaller producers. The nature of market risk is transformed as well: instead of individual consumers being misled one at a time, systemic manipulation could affect millions of agents at once, producing large-scale market distortions.
Risks for Consumers and Commerce
For consumers, delegating purchasing power to an Agentic AI introduces significant risks beyond receiving the wrong item. Algorithmic manipulation is a primary concern, as agents can be tricked into purchasing counterfeit goods or revealing sensitive data. Hyper-personalized discrimination is another major risk: because Agentic AIs require deep access to a user’s personal data, sellers could subject them to discriminatory pricing, offering different prices to different individuals for the same product based on data-driven inferences about their wealth or urgency (the sketch below makes this concrete). The vast amount of personal data an Agentic AI needs to be effective also creates a significant privacy risk, making it a rich target for data breaches.
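How little machinery such discrimination requires is easy to show. In the deliberately crude sketch below, the inference signals and markups are invented, and a real system would derive them from far richer behavioral data, but the logic is the whole trick: quote a higher price the moment the buyer’s data suggests wealth or urgency.

```python
# Crude sketch of inference-based price discrimination; the signals and
# markups here are invented for illustration.

BASE_PRICE = 80.0

def quote(buyer_profile: dict) -> float:
    price = BASE_PRICE
    if buyer_profile.get("inferred_income") == "high":
        price *= 1.25    # markup for buyers who can likely pay more
    if buyer_profile.get("urgency_score", 0.0) > 0.8:
        price *= 1.15    # markup when the buyer's agent seems desperate
    return round(price, 2)

print(quote({"inferred_income": "high", "urgency_score": 0.9}))  # 115.0
print(quote({"inferred_income": "low", "urgency_score": 0.1}))   # 80.0
```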
Businesses also face new threats to market stability, brand reputation, and legal standing. One significant risk is algorithmic collusion, in which pricing agents independently learn to raise prices in parallel without any explicit communication, a practice that is difficult to detect and prosecute under current antitrust laws; the sketch below shows how spare that setup can be. Liability is just as pressing: as the Air Canada case of early 2024 demonstrated, in which the airline was held responsible for incorrect information provided by its chatbot, businesses are liable for their agents’ actions, and a malfunctioning or misled business agent could trigger a significant financial and public relations crisis. Finally, the interaction of millions of high-speed, autonomous Agentic AIs could produce market instability reminiscent of “flash crashes” in algorithmic stock trading, where a minor bug is amplified across the ecosystem, causing unpredictable market shocks.
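A stripped-down simulation of that pricing duopoly makes “no explicit communication” concrete: each seller’s learner observes only its own price and profit, with no channel between the two. The demand model and parameters below are invented for illustration. Note that memoryless learners like these tend to settle at the competitive price; the collusion results reported in the economics literature arise when agents also condition on recent price history, which is what makes reward-and-punishment strategies, and hence tacit coordination, learnable.

```python
# Toy pricing duopoly: two independent learners, no communication channel.
# The demand model and parameters are invented for illustration.
import random

PRICES = [1.0, 1.5, 2.0]    # the discrete prices each seller may set

def profit(my_price: float, rival_price: float) -> float:
    # Simplified demand: the cheaper seller captures the larger market share.
    if my_price == rival_price:
        share = 0.5
    else:
        share = 0.8 if my_price < rival_price else 0.2
    return my_price * share

def run(episodes: int = 50_000, eps: float = 0.1, lr: float = 0.1) -> list[float]:
    q = [{p: 0.0 for p in PRICES} for _ in range(2)]    # one value table per seller
    for _ in range(episodes):
        picks = [
            random.choice(PRICES) if random.random() < eps else max(q[i], key=q[i].get)
            for i in range(2)
        ]
        for i in range(2):                              # fully independent updates
            reward = profit(picks[i], picks[1 - i])
            q[i][picks[i]] += lr * (reward - q[i][picks[i]])
        # note: no message ever passes between the two sellers
    return [max(table, key=table.get) for table in q]

print(run())    # these memoryless learners settle near the competitive price
```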
These risks stem from fundamental challenges inherent in autonomous AI systems. The “black box” nature of many advanced agents makes their decision-making opaque, which conflicts with the principle of auditability necessary for accountability. This creates a responsibility gap: when one Agentic AI among many interacting systems causes harm, it is difficult to assign blame. The speed and scale at which Agentic AIs operate make traditional human oversight (human-in-the-loop) impractical. Perhaps the most significant challenge is that agents can develop unanticipated emergent behaviors and are vulnerable to manipulation, as demonstrated in experiments where business Agentic AIs easily deceived customer agents into buying inferior products. These vulnerabilities show that harmful behaviors can arise from the interaction of multiple agents even when no single agent was explicitly programmed to produce them.
