Montreal AI Ethics Institute

Democratizing AI ethics literacy


Should AI-Powered Search Engines and Conversational Agents Prioritize Sponsored Content?

January 16, 2025

✍️ By Sun Gyoo Kang1


Disclaimer: The views expressed in this article are solely my own and do not reflect my employer’s opinions, beliefs, or positions. Any opinions or information in this article are based on my experiences and perspectives. Readers are encouraged to form their own opinions and seek additional information as needed.


This report explores the ethical implications of prioritizing sponsored content in responses provided by AI-powered search engines (e.g., you.com2, Perplexity3) and conversational agents (e.g., Microsoft Copilot). In the digital age, it is necessary to evaluate the consequences of such practices for information integrity, fairness of access to knowledge, and user autonomy. The analysis argues that the prioritization of sponsored content raises significant ethical problems that outweigh its potential benefits.


1. Introduction

1.1 Technological Context

AI has changed the way we interact with information, data, and technology. Now, search engines like Google are often our gateways to a vast ocean of knowledge. Moreover, with the arrival of ChatGPT, AI conversational agents such as Copilot, Gemini, and Claude give internet users the chance to have increasingly fluid and advanced exchanges.

Nevertheless, much like banks, which play a crucial public role in society4, these tools and the platforms behind them are far more than simple question-answering utilities driven by profits and costs: they shape our perception of the world, exerting significant influence on our beliefs, choices, and actions5.

Their presence in our daily lives raises ethical questions, particularly about how they prioritize and present information, challenging their presumed neutrality6.

1.2 The Issue of Sponsored Content

The issue of sponsored content constitutes the Gordian Knot of this ethical reflection. The responses offered by AI-powered search engines or virtual assistants could be skewed by platforms whose business model involves receiving compensation in return for prioritizing certain answers at the expense of others7. This practice raises concerns8 about its impact on information integrity and user autonomy.
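To make the mechanism concrete, here is a minimal, purely hypothetical sketch (not any platform's actual code) of how a sponsorship payment could be folded into a ranking score. The names, weights, and scoring formula are all illustrative assumptions; the point is only that a nonzero sponsorship weight lets a less relevant paid result outrank a more relevant organic one:

```python
from dataclasses import dataclass

@dataclass
class Result:
    title: str
    relevance: float    # 0..1: how well the result answers the query
    sponsor_bid: float  # payment from an advertiser; 0 for organic results

def rank(results, sponsor_weight=0.5):
    """Hypothetical scoring: the paid bid is added to topical relevance.

    With sponsor_weight > 0, a less relevant but sponsored result can
    displace a more relevant organic one at the top of the response.
    """
    return sorted(
        results,
        key=lambda r: r.relevance + sponsor_weight * r.sponsor_bid,
        reverse=True,
    )

organic = Result("Independent review", relevance=0.9, sponsor_bid=0.0)
sponsored = Result("Advertiser's page", relevance=0.6, sponsor_bid=1.0)

# The sponsored result scores 0.6 + 0.5 * 1.0 = 1.1, beating 0.9,
# so it is shown first despite being less relevant to the user.
print([r.title for r in rank([sponsored, organic])])
```

Setting `sponsor_weight` to zero restores a purely relevance-based ordering, which is precisely the neutrality that commercial prioritization erodes.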


2. The Current Context

2.1 The Predominance of Digital

Let’s start by looking at the figures showing the impact and degree of implementation of digital tools in our daily lives:

  • There are more than 5.45 billion internet users worldwide9.
  • In 2023, Google processed more than 8.5 billion daily searches10.
  • ChatGPT now has more than 180.5 million users11.
  • There were over 500 million questions on Perplexity AI in 202312.

These figures demonstrate the significant impact of these technologies on our access to information, our daily decisions, and even our vision of society.

2.2 Attention and Data: In the Digital Era

The ability to capture users’ attention has become a critical issue for platforms providing search engines or AI-based virtual agents. The industry knows this. It’s a war for data and attention. These platforms are fighting for this attention to monetize it, especially with advertisements and sponsored content13.

This has led these giant platforms to design their systems with the goal of increasing user engagement14, often at the expense of the quality and integrity of the content distributed15.

2.3 The Economic Model of Digital Platforms

Traditional search engines use an economic model based on advertising. This model allows these services to be offered free to users while generating revenue through advertising or sponsored content.

Although AI-powered conversational agents initially adopted different models, we are seeing a gradual increase in the adoption of comparable strategies16. Increasingly, these AI platforms are exploring ways to integrate advertising elements and sponsored content into their interactions, seeking to capitalize on their large user base17.


3. Specific Problems

The priority given to sponsored content in search engine responses and AI conversational agents raises various specific ethical issues.

3.1 Information Manipulation

By favoring paid content, these technologies risk subtly manipulating users’ perceptions of different subjects18. Such manipulation can affect users’ decisions, from their consumption choices to significant societal issues.

Furthermore, the prioritization of sponsored content jeopardizes information integrity in several ways:

  • Omission of essential information: Crucial information may be relegated to the background simply because the organizations behind it lack the financial resources to pay for placement.
  • Informational imbalance: Users of these platforms may receive a partial or biased response on a subject when commercial interests take precedence over a balanced presentation of the facts.
  • Artificial needs: Overexposure to certain commercial products or services can create needs that did not previously exist in the user19. The resulting overconsumption even has a negative impact on the environment.

3.2 Inequalities in Access to Information

Prioritizing sponsored content creates a disparity in society’s access to information. Companies that can afford to pay these platforms benefit from an unjustified privilege, potentially at the expense of more relevant or higher-quality information that lacks financial support20.

Here are some of the important equity questions this practice raises21:

  • Unfair advantage: Entities with the financial resources to buy visibility benefit from a disproportionate advantage in disseminating their messages or products.
  • Marginalization of alternative voices: Perspectives, products, or services from less wealthy or marginalized sources risk going unnoticed, even when they are more relevant or of better quality.
  • Aggravation of current inequalities: This practice can accentuate existing socio-economic disparities by amplifying the voices of the privileged.

3.3 Erosion of Trust

The rise of sponsored content risks, in the long term, diminishing users' trust in these digital platforms and thereby reducing their effectiveness. The increased perception of bias driven by commercial interests could lead users to question the integrity, reliability, and objectivity of these platforms.

3.4 The Loss of User Autonomy

Furthermore, the prioritization of sponsored content can be seen as an infringement on user autonomy22 in the following ways:

  • Limitation of choice: By highlighting certain content, these platforms actually restrict the choices available to users.
  • Interference with autonomous decision-making: Users may be subtly influenced toward certain decisions without the opportunity to explore all available alternatives.
  • Violation of implicit consent: Users seeking objective information may find themselves exposed to promotional content without their explicit consent.

4. Arguments in Favor of Sponsored Content and their Counterarguments

4.1 The Economic Model Argument

The companies developing these AI systems need revenue to survive. Advertising and sponsored content allow these services to remain free for users; if they were not free, socio-economic inequalities would be amplified.

Counterarguments:

  • Economic alternatives: Innovation is key. Viable economic models that do not endanger the integrity of information include premium subscriptions, crowdfunding, donations, and public-private partnerships23.
  • Value of trust: By maintaining their integrity, platforms can benefit from increased user loyalty and an improved reputation24. User trust is a crucial competitive advantage for businesses.

4.2 The Transparency Argument

As long as sponsored content is clearly identified as such, there is no ethical issue. Users have the freedom to choose whether or not to interact with it.

Counterarguments:

  • Digital literacy inequalities: Users' ability to notice or understand these labels varies, particularly with AI conversational agents, where interaction is more fluid (recall the Turing Test) and less rigid25.
  • Cognitive overload: In an already information-saturated environment, requiring users to constantly filter out sponsored content adds to their cognitive load, which can lead to decision fatigue26.

4.3 The Personalization Argument

The prioritization of sponsored content can be seen as a form of personalization, offering users information potentially better suited to their interests and needs.

Counterarguments:

  • Personalization vs. manipulation: There is an essential distinction between personalization based on the user's true preferences and personalization dictated by external commercial interests27.
  • Reinforcement of existing biases: Personalization based on sponsored content risks consolidating users' prejudices rather than confronting them with a diversity of opinions and information28.

4.4 The User Choice Argument

If the prioritization of sponsored content doesn’t suit them, users have the freedom to choose other platforms. The market will naturally self-regulate.

Counterarguments:

  • Market concentration and limited competition: The search engine and conversational agent industry is highly concentrated29, creating a monopoly or oligopoly environment that considerably reduces the options available to users. Moreover, this market structure runs against the fundamental principles of capitalism, which rest on healthy competition30.
  • Information asymmetry: Most citizens are unaware of the extent and impact of these prioritizations, which creates a barrier to their ability to make informed choices31.

5. Conclusion

Prioritizing sponsored content in search engine results and AI conversational agents raises major ethical questions. Despite the economic arguments in its favor, the ethical risks remain unjustifiable. By manipulating users, this practice endangers the integrity of information, user autonomy, and the fairness of access to knowledge. Alternatives such as new economic models or increased transparency need to be examined in detail.

The tech companies behind AI-powered search engines and conversational agents are not philanthropic or non-profit organizations. They will have to make a profit to survive and innovate. However, the overall objective should include creating an ecosystem that serves the interests of society and users. By adhering to ethical principles, we can maximize the benefits of these technologies while minimizing their dangers. Let’s not forget that ‘innovation comes with good governance practice.’


Footnotes

  1. Law and Ethics in Tech | Medium ↩︎
  2. AI startup You.com is raising $50M in funding to pivot from AI-powered search engine to AI assistant market | Tech Startups ↩︎
  3. AI-powered search engine Perplexity AI, now valued at $520M, raises $73.6M | TechCrunch ↩︎
  4. Banks: At the Heart of the Matter | imf.org ↩︎
  5. How generative AI is boosting the spread of disinformation and propaganda | MIT Technology Review ↩︎
  6. Chatbots, search engines, and the sealing of knowledges | AI & SOCIETY (springer.com) ↩︎
  7. Annonces sponsorisées Google Ads – Le géant de la recherche toujours plus discret | Actualité – UFC-Que Choisir ↩︎
  8. our-common-agenda-policy-brief-information-integrity-fr.pdf | un.org ↩︎
  9. Internet and social media users in the world 2024 | Statista ↩︎
  10. Le guide ultime des statistiques de recherche Google (rapport 2023) | sortlist Data Hub ↩︎
  11. Number of ChatGPT Users and Key Stats (September 2024) | namepepper.com ↩︎
  12. The Latest Perplexity AI Stats (2024) | Exploding Topics ↩︎
  13. (PDF) Les stratégies de contenus et l’engagement des utilisateurs des médias sociaux envers une marque | researchgate.net ↩︎
  14. The uncomfortable reality behind Facebook’s world-changing ‘Like’ button | yahoo.com ↩︎
  15. Google Responds To Evidence Of Reviews Algorithm Bias | searchenginejournal.com ↩︎
  16. You.com raises $25M to fuel its AI-powered search engine | techcrunch.com ↩︎
  17. AI Firm Perplexity Reportedly Plans New Advertising Model | pymnts.com ↩︎
  18. ÉTUDE DU MARKETING DE CONTENU ET DE SON INFLUENCE SUR LES COMPORTEMENTS D’ENGAGEMENT DES CONSOMMATEURS ↩︎
  19. On Artificial Intelligence and Manipulation | Topoi (springer.com) ↩︎
  20. Paid, Owned et Earned Media : de quoi s’agit-il réellement ? | powertrafic.fr ↩︎
  21. (PDF) Algorithmic Ideology: How Capitalist Society Shapes Search Engines | researchgate.net ↩︎
  22. Technology, autonomy, and manipulation | Internet Policy Review ↩︎
  23. 4 ways AI could transform the economy as we know it | World Economic Forum | weforum.org ↩︎
  24. Digital trust: Why it matters for businesses | McKinsey ↩︎
  25. (PDF) Digital Na(t)ives? Variation in Internet Skills and Uses among Members of the ‘‘Net Generation’’ ↩︎
  26. (PDF) Consumer Decision-Making in the Era of Information Overload | researchgate.net ↩︎
  27. The ethics of algorithms: Mapping the debate – Brent Daniel Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, Luciano Floridi, 2016 | sagepub.com ↩︎
  28. Should we worry about filter bubbles? | Internet Policy Review ↩︎
  29. Google has an illegal monopoly on search, judge rules. Here’s what’s next | CNN Business ↩︎
  30. The Importance of Competition for the American Economy | CEA | The White House ↩︎
  31. How Google Can Flip Elections & Change Opinion | WebFX ↩︎

  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.