
Epistemic fragmentation poses a threat to the governance of online targeting

June 5, 2022

🔬 Research summary by Itzel Amieva, MA in Philosophy and Economics from the University of Bayreuth, Germany. She is interested in technology, algorithms, and decision architecture.

[Original paper by Milano, S., Mittelstadt, B., Wachter, S., & Russell, C.]


Overview: This paper argues that online targeted advertising (OTA) is not a benign application of machine learning. It creates a phenomenon the authors call epistemic fragmentation, in which users lose a shared informational context with their peers and are left unable to assess beneficial or harmful content beyond their own experience.


Introduction

The fast-paced growth of conspiratorial or fake news content online is a perfect example of how vast, diverse, and potentially dangerous online information can be. Because content spreads rapidly across platforms, users are increasingly vulnerable to misinformation, scams, and, more generally, poor-quality content shared daily, whether accidentally or deliberately. Accordingly, Milano et al. examine the dangers of each consumer being served different (personalized) advertising content. Since consumers do not know what ads others see when visiting the same websites, each consumer’s personal context is hidden from the others, creating a fragmented epistemic experience and heightening vulnerability to exploitation.

Key Insights

With online targeted advertising, users become more vulnerable because the responsibility for avoiding harmful content, and for noticing when beneficial content is withheld, shifts onto them. This, in turn, further diminishes the prospects for fair regulation. Put differently, when individuals face a targeted advertisement alone, without the opportunity for peer review or counter-fact-checking, addressing harmful content or becoming aware of omitted relevant content becomes increasingly challenging. Accordingly, the authors argue that “why epistemic fragmentation matters, moreover, is not just because it limits individuals’ ability to access information that is relevant to them, but also because it limits their ability to assess the quality of content that is accessed by others” (Milano et al., 2021, 469). Missing relevant information can mean, for instance, being unable to verify reliable medications and treatments for COVID-19.

[Table reproduced from Milano et al., 2021, 467]

Because it is increasingly hard to detect when a consumer has been narrowly targeted and denied the chance to double-check the information against a different source or with the help of a third party, there seems to be a need for greater awareness of the issue. It is therefore unsurprising that attempts to counteract intrusive online phenomena like OTA focus on granting individual users access to a variety of advertising sources. Yet raising consumer awareness is not the most effective path for uncovering systemic challenges: because every consumer’s context is hidden from the others, there is no shared context, and nobody can see what others see or raise a complaint on their behalf.

More importantly, what Milano et al. frame here in terms of advertising harms can easily be extended to almost every online experience that curates highly targeted information. If individuals interacting with online advertising lack adequate tools to counter this phenomenon, the same concern likely applies to other online experiences, such as social media or streaming platforms, that rely heavily on automated systems to curate their content.

Between the lines

While it may well be the case that OTA does not polarize individuals or public opinion as much as filter bubbles do, it is essential to consider and assess its plausible epistemically segregating effects. OTA carries the risk of exposing users to harmful content, but it can also be dangerous simply by withholding a particular scope of information, even when no adverse consequences materialize. Ultimately, this phenomenon seems to incisively disconnect individual consumers’ experiences from those of their social circles.

