Montreal AI Ethics Institute

Democratizing AI ethics literacy


Agentic AI systems and algorithmic accountability: a new era of e-commerce

December 22, 2025

🔬 By Sun-Gyoo Kang.

Sun-Gyoo Kang is a lawyer and AI ethics specialist based in Montreal, working at the intersection of emerging technology, law, and regulatory policy. As the founder of Law and Ethics in Tech, he serves as an AI curator, producing critical analyses of the legal and moral implications of automation. A frequent contributor to the Montreal AI Ethics Institute (MAIEI) and other journals, he writes on high-stakes topics such as algorithmic bias, sovereign AI, and accountability frameworks for AI systems.

Featured image credit: Google DeepMind on Unsplash


In an era of rapid technological advancement, artificial intelligence (AI) systems are evolving from passive tools into semi-autonomous agents capable of making decisions and taking actions with minimal human intervention. This transition, from AI assistants managing schedules to sophisticated agents trading stocks, marks a new phase in e-commerce and presents profound challenges to traditional notions of accountability. As these agentic AIs become more integrated into our daily lives, the gap between their deployment and the safeguards governing them grows, as highlighted by a significant increase in reported AI incidents.

The current e-commerce landscape is fundamentally human-centric, designed to capture the attention and influence the decisions of a person. This model relies on human-driven search, social proof such as reviews, and the influence of branding and advertising. The user bears the cognitive load of sifting through information, detecting fake reviews, and navigating manipulative designs. Accountability in this system, while challenging, is tied to human-readable evidence, such as a misleading advertisement.

The introduction of Agentic AI will reshape this marketplace, shifting the focus from influencing human psychology to influencing machine logic. Before going further, it’s important to understand the distinction between a standard AI agent and the more advanced Agentic AI. An AI agent is a goal-driven assistant that performs a specific task you assign, like scheduling meetings or answering questions. Agentic AI, by contrast, can plan, coordinate multiple tasks or agents, and adapt its own strategy over time—acting more like a self-directed project manager than a single helper. It is this ability to autonomously pursue complex goals with minimal human intervention that will be the driving force of change.
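The distinction can be made concrete with a toy sketch (all class and function names here are illustrative, not drawn from any real framework): a task agent executes exactly the one job it is given, while an agentic system decomposes a goal into subtasks, dispatches them, and can revise its remaining plan as results come in.

```python
def task_agent(task: str) -> str:
    # A single-purpose assistant: executes exactly the task it is given.
    return f"done: {task}"

class AgenticAI:
    """Toy planner: decomposes a goal, dispatches subtasks, adapts its plan."""

    def pursue(self, goal: str) -> list[str]:
        queue = self.decompose(goal)
        results = []
        while queue:
            step = queue.pop(0)
            results.append(task_agent(step))
            queue = self.adapt(queue, results[-1])
        return results

    def decompose(self, goal: str) -> list[str]:
        # Break the high-level goal into ordered subtasks.
        return [f"{goal}: discover", f"{goal}: vet", f"{goal}: purchase"]

    def adapt(self, queue: list[str], last_result: str) -> list[str]:
        # A real system would replan on feedback; this sketch keeps the plan fixed.
        return queue
```

The self-directed loop in `pursue`, rather than any single capability, is what separates the "project manager" from the single helper.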

As an example, a consumer may initiate the process by giving their personal agentic AI an order (a "mandate") to buy a concert ticket only when its price drops below $100. This personal AI then works autonomously, continuously communicating with numerous other AIs running on ticket websites and resale markets to monitor prices in real time. When a seller's AI finally advertises a ticket at the target price, the user's agent instantly detects it and begins a secure negotiation, following the pattern of Google's Agent Payments Protocol (AP2). Once the transaction is verified, a payment agent is automatically triggered, instantly transferring the exact payment amount in a valid currency directly to the seller's digital wallet. The entire end-to-end process, from discovery to payment, is completed through the interaction of these specialized agentic AIs, with the user simply receiving a notification that the ticket has been bought. The agent independently formulates and executes a plan, a capability that makes it both powerful and ethically complex (see Brief #169 on this topic).
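The mandate-driven flow above can be sketched in a few lines. This is a simplified illustration only: the function and dataclass names are hypothetical, and it is not the actual AP2 API. The user's agent holds a spending cap, scans seller offers, and hands off to a payment step only when an offer satisfies the mandate.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Optional

@dataclass
class Mandate:
    """The user's standing order: what to buy and the authorized price cap."""
    item: str
    max_price: float

def run_purchase_agent(
    mandate: Mandate,
    offers: Iterable[tuple[str, float]],           # (seller, advertised price) pairs
    pay: Callable[[str, float], tuple[str, float]],
) -> Optional[tuple[str, float]]:
    """Buy from the first seller whose offer satisfies the mandate."""
    for seller, price in offers:
        if price <= mandate.max_price:
            return pay(seller, price)  # hand off to the payment agent
    return None  # no qualifying offer yet; the user is not charged

# Toy price feed: only the second seller meets the $100 cap.
feed = [("resale_a", 140.0), ("official_site", 95.0)]
receipt = run_purchase_agent(Mandate("concert ticket", 100.0), feed,
                             pay=lambda s, p: (s, p))
```

Note that the human appears only twice: once when the mandate is issued, and once when the receipt arrives. Everything between those two points, including the check against the price cap, is decided by code.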

This shift will bring about significant changes. Users will no longer search for products but will delegate complex goals to their agents, which will handle discovery, vetting, negotiation, and purchasing. Consequently, a new “manipulation layer” will emerge, where the focus will be on deceiving Agentic AIs rather than human consumers. Business agentic AIs might use deceptive language or fabricate credentials to secure a sale. While agents can offer unprecedented hyper-personalization, they could also lead to hyper-efficient discrimination, steering users based on biased data. Furthermore, brand loyalty may diminish as agents prioritize raw specifications and verifiable quality metrics over branding, potentially leveling the playing field for smaller producers. The nature of market risk is transformed from individual consumers being misled to the potential for systemic manipulation affecting millions of agents, leading to large-scale market distortions.

Risks for Consumers and Commerce

For consumers, delegating purchasing power to an Agentic AI introduces significant risks beyond receiving the wrong item. Algorithmic manipulation is a primary concern, as agents can be tricked into purchasing counterfeit goods or revealing sensitive data. Hyper-personalized discrimination is another major risk: because an Agentic AI has deep access to a user's personal data, sellers could engage in discriminatory pricing, offering different prices to different individuals for the same product based on data-driven inferences about their wealth or urgency. The vast amount of personal data required for an Agentic AI to be effective also creates a significant privacy risk, making it a rich target for data breaches.
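The discriminatory-pricing risk can be illustrated with a toy seller-side quote function. This is purely hypothetical: the profile fields and weightings stand in for inferences a seller's agent might draw from a buyer agent's data trail. Two buyers end up quoted different prices for the identical product.

```python
BASE_PRICE = 80.0  # list price for the identical product

def personalized_quote(profile: dict) -> float:
    """Toy data-driven price discrimination: same product, different quotes.

    `inferred_wealth` and `urgency` are hypothetical scores in [0, 1],
    inferred from the buyer agent's behavior and disclosed data.
    """
    price = BASE_PRICE
    price *= 1.0 + 0.3 * profile.get("inferred_wealth", 0.0)  # wealthier buyer, higher quote
    price *= 1.0 + 0.2 * profile.get("urgency", 0.0)          # urgent buyer, higher quote
    return round(price, 2)

# Identical product, divergent quotes:
budget_quote = personalized_quote({})
premium_quote = personalized_quote({"inferred_wealth": 1.0, "urgency": 1.0})
```

Because the markup is computed inside the seller's systems from inferred traits, neither buyer ever sees the other's price, which is what makes this form of discrimination both hyper-efficient and hard to detect.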

Businesses also face new threats to market stability, brand reputation, and legal standing. One significant risk is algorithmic collusion, where pricing agents independently learn to sustain higher prices without any explicit communication, a practice that is difficult to detect and prosecute under current antitrust laws. Moreover, as we learned from the Air Canada case in early 2024, where the airline was held responsible for incorrect information provided by its chatbot, businesses are liable for their agents' actions. A malfunctioning or misled business agent could trigger a significant financial and public relations crisis. The interaction of millions of high-speed, autonomous Agentic AIs could also lead to market instability and unpredictability, similar to "flash crashes" in algorithmic stock trading, where a minor bug could be amplified across the ecosystem, causing unpredictable market shocks.
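A minimal simulation of the collusion setting might look like the following. This is a sketch under strong, stated assumptions: two price levels, zero costs, and memoryless epsilon-greedy learners. Whether such learners actually sustain supracompetitive prices depends on richer state (e.g., conditioning on rivals' past prices), as studied in the algorithmic-pricing literature; the point here is only that the agents share no communication channel.

```python
import random

PRICES = [1.0, 2.0]  # competitive vs. supracompetitive price level (zero costs assumed)

def profits(p1: float, p2: float) -> tuple[float, float]:
    """Toy duopoly: the cheaper firm captures all 10 units of demand; a tie splits it."""
    demand = 10
    if p1 < p2:
        return p1 * demand, 0.0
    if p2 < p1:
        return 0.0, p2 * demand
    return p1 * demand / 2, p2 * demand / 2

def simulate(episodes: int = 2000, eps: float = 0.1, seed: int = 0) -> list[float]:
    """Two independent epsilon-greedy learners; no communication channel exists."""
    rng = random.Random(seed)
    q = [{p: 0.0 for p in PRICES} for _ in range(2)]  # per-agent value estimates
    for _ in range(episodes):
        acts = [rng.choice(PRICES) if rng.random() < eps
                else max(q[i], key=q[i].get) for i in range(2)]
        rewards = profits(*acts)
        for i in range(2):
            q[i][acts[i]] += 0.1 * (rewards[i] - q[i][acts[i]])  # running-average update
    return [max(qi, key=qi.get) for qi in q]
```

Nothing in `simulate` lets the agents exchange messages, which is exactly why any jointly elevated price that emerges from learning alone is hard to prosecute as an "agreement" under current antitrust doctrine.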

These risks stem from fundamental challenges inherent in autonomous AI systems. The "black box" nature of many advanced agents makes their decision-making process opaque, conflicting with the principle of auditability necessary for accountability. This creates a responsibility gap: it is difficult to assign blame when one Agentic AI among many others causes harm. The speed and scale at which Agentic AIs operate make traditional human oversight (human-in-the-loop) impractical. Perhaps the most significant challenge is that agents can develop unanticipated emergent behaviors and are vulnerable to manipulation, as demonstrated in experiments where business Agentic AIs easily deceived customer agents into buying inferior products. These vulnerabilities show that harmful behaviors can arise from the interaction of multiple agents, even if not explicitly programmed.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.
