✍️ By Sun Gyoo Kang[1]
Disclaimer: The views expressed in this article are solely my own and do not reflect my employer’s opinions, beliefs, or positions. Any opinions or information in this article are based on my experiences and perspectives. Readers are encouraged to form their own opinions and seek additional information as needed.
This report explores the ethical implications of prioritizing sponsored content in responses provided by AI-powered search engines (e.g., you.com[2], Perplexity[3]) and conversational agents (e.g., Microsoft Copilot). In the digital age, it is necessary to evaluate the consequences of such practices for information integrity, fairness of access to knowledge, and user autonomy. The analysis argues that the prioritization of sponsored content raises significant ethical problems that outweigh its potential benefits.
1. Introduction
1.1 Technological Context
AI has changed the way we interact with information, data, and technology. Search engines like Google are now often our gateway to a vast ocean of knowledge, and since the arrival of ChatGPT, AI conversational agents such as Copilot, Gemini, and Claude have offered internet users increasingly fluid and sophisticated exchanges.
Nevertheless, much like banks, which play a crucial public role in society[4], these tools and the platforms behind them are far more than simple instruments for answering our questions or businesses driven solely by profits and costs: they shape our perception of the world and exert significant influence on our beliefs, choices, and actions[5].
Their presence in our daily lives raises ethical questions, particularly about how they prioritize and present information, challenging their presumed neutrality[6].
1.2 The Issue of Sponsored Content
The issue of sponsored content constitutes the Gordian knot of this ethical reflection. The responses offered by AI-powered search engines and virtual assistants can be shaped by platforms whose business model includes accepting compensation in return for prioritizing certain answers at the expense of others[7]. This practice raises concerns[8] about its impact on information integrity and user autonomy.
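To make the mechanism concrete, the sketch below shows, in purely hypothetical terms, how a ranking step could favor compensated answers. This is not the code of any actual platform; the `SPONSORSHIP_BOOST` weight and the `Result` fields are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Result:
    title: str
    relevance: float   # how well the answer matches the query, in [0, 1]
    sponsored: bool    # True if a third party paid for placement

# Hypothetical weight added to paid results; even a modest boost can push
# a less relevant sponsored answer above a more relevant organic one.
SPONSORSHIP_BOOST = 0.3

def rank(results: list[Result]) -> list[Result]:
    """Order results by relevance plus a flat boost for sponsored items."""
    def score(r: Result) -> float:
        return r.relevance + (SPONSORSHIP_BOOST if r.sponsored else 0.0)
    return sorted(results, key=score, reverse=True)

if __name__ == "__main__":
    candidates = [
        Result("Independent comparison of available products", 0.9, sponsored=False),
        Result("Vendor A's own product page", 0.7, sponsored=True),
    ]
    for r in rank(candidates):
        print(("[sponsored] " if r.sponsored else "") + r.title)
    # The sponsored page scores 0.7 + 0.3 = 1.0 and now outranks the
    # more relevant independent source, which scores 0.9.
```

The user only ever sees the final ordering, never the scores that produced it, which is precisely what makes the practice ethically delicate.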
2. The Current Context
2.1 The Predominance of Digital
Let’s start with a few figures that show how deeply digital tools are embedded in our daily lives:
- There are more than 5.45 billion internet users worldwide[9].
- In 2023, Google processed more than 8.5 billion searches per day[10].
- ChatGPT now has more than 180.5 million users[11].
- Perplexity AI received over 500 million queries in 2023[12].
These figures demonstrate the significant impact of these technologies on our access to information, our daily decisions, and even our vision of society.
2.2 Attention and Data in the Digital Era
The ability to capture users’ attention has become a critical issue for platforms offering search engines or AI-based virtual agents. The industry knows it: this is a war for data and attention. Platforms fight for that attention in order to monetize it, mainly through advertising and sponsored content[13].
This has led these giant platforms to design their systems with the goal of maximizing user engagement[14], often at the expense of the quality and integrity of the content they distribute[15].
2.3 The Economic Model of Digital Platforms
Traditional search engines rely on an advertising-based economic model: the service is offered to users free of charge, while revenue is generated through advertising and sponsored content.
Although AI-powered conversational agents initially adopted different models, they are gradually converging on comparable strategies[16]. These platforms are increasingly exploring ways to integrate advertising and sponsored content into their interactions, seeking to capitalize on their large user bases[17].
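A rough back-of-the-envelope calculation suggests why the incentive is so strong. The query volume is the Google figure cited above; the monetizable share and the revenue per sponsored answer are purely illustrative assumptions, not reported figures.

```python
# Illustrative arithmetic only: the monetizable share and per-answer revenue
# below are assumptions, not figures from any cited source.
daily_queries = 8.5e9         # Google searches per day (figure cited above)
monetizable_share = 0.05      # assume 5% of queries could carry a sponsored answer
revenue_per_answer = 0.01     # assume $0.01 earned per sponsored answer shown

daily_revenue = daily_queries * monetizable_share * revenue_per_answer
print(f"~${daily_revenue:,.0f} per day")          # ~$4,250,000 per day
print(f"~${daily_revenue * 365:,.0f} per year")   # ~$1,551,250,000 per year
```

Even under these modest assumptions, the potential revenue runs into the billions of dollars per year, which helps explain the strategies described above.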
3. Specific Problems
The priority given to sponsored content in search engine responses and AI conversational agents raises various specific ethical issues.
3.1 Information Manipulation
By favoring paid content, these technologies risk subtly manipulating users’ perceptions of a wide range of subjects[18]. Such manipulation can affect users’ decisions, from everyday consumption choices to positions on significant societal issues.
Furthermore, the prioritization of sponsored content jeopardizes information integrity in several ways:
- Omission of essential information: more relevant information may be pushed into the background simply because the organizations behind it lack the financial resources to pay for visibility.
- Informational imbalance: users may receive a partial or biased response on a subject when commercial interests take precedence over a balanced presentation of the facts.
- Artificial needs: overexposure to certain products or services can create needs that did not previously exist in the user[19], encouraging consumption that also carries an environmental cost.
3.2 Inequalities in Access to Information
Prioritizing sponsored content creates a disparity in society’s access to information. Companies that can afford to pay these platforms benefit from an unjustified privilege, potentially at the expense of more relevant or higher-quality information that lacks financial support[20].
This practice raises important questions of equity[21]:
- Unfair advantage: entities with the financial resources to buy visibility gain a disproportionate advantage in disseminating their messages or products.
- Marginalization of alternative voices: perspectives, products, or services from less wealthy or marginalized sources risk going unnoticed, even when they are more relevant or of better quality.
- Aggravation of existing inequalities: this practice can deepen existing socio-economic disparities by amplifying the voices of the already privileged.
3.3 Erosion of Trust
In the long run, the rise of sponsored content risks eroding users’ trust in these digital platforms and, with it, their usefulness. A growing perception of bias driven by commercial interests could lead users to question the integrity, reliability, and objectivity of these platforms.
3.4 The Loss of User Autonomy
Furthermore, the prioritization of sponsored content can be seen as an infringement on user autonomy[22] in the following ways:
- Limitation of choice: by highlighting certain content, these platforms effectively restrict the choices available to users.
- Interference with autonomous decision-making: users may be subtly steered toward certain decisions without the opportunity to explore all available alternatives.
- Violation of implicit consent: users seeking objective information may find themselves exposed to promotional content without their explicit consent.
4. Arguments in Favor of Sponsored Content and Their Counterarguments
4.1 The Economic Model Argument
The companies developing these AI systems need revenue to survive, and advertising and sponsored content allow their services to remain free for users. If these services were no longer free, socio-economic inequalities in access to them would widen.
Counterarguments:
- Economic alternatives: innovation is key. Viable economic models include premium subscriptions, crowdfunding, donations, and public-private partnerships that do not endanger the integrity of information[23].
- Value of trust: by maintaining their integrity, platforms can benefit from increased user loyalty and a stronger reputation[24]; user trust is a crucial competitive advantage.
4.2 The Transparency Argument
As long as sponsored content is clearly identified as such, there is no ethical issue. Users have the freedom to choose whether or not to interact with it.
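For illustration, a platform that takes this argument seriously might attach an explicit flag to every sponsored passage it returns, as in the hypothetical response structure below (the field names are assumptions, not any vendor’s actual API).

```python
# Hypothetical response payload from a conversational agent that labels
# sponsored material; the field names are illustrative, not a real API.
response = {
    "answer": [
        {"text": "Product X is a popular option for this use case.",
         "sponsored": True, "sponsor": "Vendor X"},
        {"text": "Independent reviews also recommend comparing Products Y and Z.",
         "sponsored": False, "sponsor": None},
    ]
}

# A client could then render a visible label for each sponsored segment.
for segment in response["answer"]:
    label = "[Sponsored] " if segment["sponsored"] else ""
    print(f"{label}{segment['text']}")
```

The counterarguments that follow question whether such labels, even when present, are enough.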
Counterarguments:
- Digital literacy inequalities: users’ ability to notice and understand these labels varies widely, particularly with AI conversational agents, whose fluid, human-like interaction (in the spirit of the Turing test) makes sponsored content harder to recognize[25].
- Cognitive overload: in an already information-saturated environment, requiring users to constantly filter out sponsored content adds to their cognitive load and can lead to decision fatigue[26].
4.3 The Personalization Argument
The prioritization of sponsored content can be seen as a form of personalization, offering users information potentially better suited to their interests and needs.
Counterarguments:
- Personalization vs. manipulation: there is an essential distinction between personalization based on the user’s actual preferences and “personalization” dictated by external commercial interests[27].
- Reinforcement of existing biases: personalization built on sponsored content risks reinforcing users’ existing biases rather than exposing them to a diversity of opinions and information[28].
4.4 The User Choice Argument
If the prioritization of sponsored content doesn’t suit them, users have the freedom to choose other platforms. The market will naturally self-regulate.
Counterarguments:
- Market concentration and limited competition: the search engine and conversational agent market is highly concentrated[29], creating a monopolistic or oligopolistic environment that considerably reduces the options actually available to users. This market structure also runs counter to the healthy competition on which capitalism is supposed to rest[30].
- Information asymmetry: most users are unaware of the extent and impact of this prioritization, which undermines their ability to make informed choices[31].
5. Conclusion
Prioritizing sponsored content in the answers of search engines and AI conversational agents raises major ethical questions. Despite the economic arguments in its favor, the practice endangers the integrity of information, user autonomy, and fair access to knowledge, and these risks outweigh its potential benefits. Alternatives such as new economic models and increased transparency deserve detailed examination.
The tech companies behind AI-powered search engines and conversational agents are not philanthropic or non-profit organizations; they must make a profit to survive and innovate. The overall objective, however, should be to create an ecosystem that serves the interests of users and society. By adhering to ethical principles, we can maximize the benefits of these technologies while minimizing their dangers. Let’s not forget that ‘innovation comes with good governance practice.’
Footnotes
1. Law and Ethics in Tech | Medium
2. AI startup You.com is raising $50M in funding to pivot from AI-powered search engine to AI assistant market | Tech Startups
3. AI-powered search engine Perplexity AI, now valued at $520M, raises $73.6M | TechCrunch
4. Banks: At the Heart of the Matter | imf.org
5. How generative AI is boosting the spread of disinformation and propaganda | MIT Technology Review
6. Chatbots, search engines, and the sealing of knowledges | AI & SOCIETY (springer.com)
7. Annonces sponsorisées Google Ads – Le géant de la recherche toujours plus discret | Actualité – UFC-Que Choisir
8. our-common-agenda-policy-brief-information-integrity-fr.pdf | un.org
9. Internet and social media users in the world 2024 | Statista
10. Le guide ultime des statistiques de recherche Google (rapport 2023) | Sortlist Data Hub
11. Number of ChatGPT Users and Key Stats (September 2024) | namepepper.com
12. The Latest Perplexity AI Stats (2024) | Exploding Topics
13. (PDF) Les stratégies de contenus et l’engagement des utilisateurs des médias sociaux envers une marque | researchgate.net
14. The uncomfortable reality behind Facebook’s world-changing ‘Like’ button | yahoo.com
15. Google Responds To Evidence Of Reviews Algorithm Bias | searchenginejournal.com
16. You.com raises $25M to fuel its AI-powered search engine | techcrunch.com
17. AI Firm Perplexity Reportedly Plans New Advertising Model | pymnts.com
18. ÉTUDE DU MARKETING DE CONTENU ET DE SON INFLUENCE SUR LES COMPORTEMENTS D’ENGAGEMENT DES CONSOMMATEURS
19. On Artificial Intelligence and Manipulation | Topoi (springer.com)
20. Paid, Owned et Earned Media : de quoi s’agit-il réellement ? | powertrafic.fr
21. (PDF) Algorithmic Ideology: How Capitalist Society Shapes Search Engines | researchgate.net
22. Technology, autonomy, and manipulation | Internet Policy Review
23. 4 ways AI could transform the economy as we know it | World Economic Forum | weforum.org
24. Digital trust: Why it matters for businesses | McKinsey
25. (PDF) Digital Na(t)ives? Variation in Internet Skills and Uses among Members of the ‘‘Net Generation’’
26. (PDF) Consumer Decision-Making in the Era of Information Overload | researchgate.net
27. The ethics of algorithms: Mapping the debate – Brent Daniel Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, Luciano Floridi, 2016 | sagepub.com
28. Should we worry about filter bubbles? | Internet Policy Review
29. Google has an illegal monopoly on search, judge rules. Here’s what’s next | CNN Business
30. The Importance of Competition for the American Economy | CEA | The White House
31. How Google Can Flip Elections & Change Opinion | WebFX