
Epistemic fragmentation poses a threat to the governance of online targeting

June 5, 2022

🔬 Research summary by Itzel Amieva, MA in Philosophy and Economics from the University of Bayreuth, Germany. She is interested in technology, algorithms, and decision architecture.

[Original paper by Milano, S., Mittelstadt, B., Wachter, S., & Russell, C.]


Overview: This paper argues that online targeted advertising (OTA) is not a benign application of machine learning. It creates a phenomenon the authors term epistemic fragmentation, in which users lose a shared informational context with their peers, making it impossible to assess good or harmful content beyond what they themselves are shown.


Introduction

The fast-paced growth of conspiratorial or fake-news content online is a perfect example of how vast, diverse, and potentially dangerous online information can be. Because content spreads rapidly across platforms, users are increasingly vulnerable to misinformation, scams, and, more generally, poor-quality content shared daily, whether accidentally or deliberately. Against this backdrop, Milano et al. investigate the dangers of each consumer being served different (personalized) advertising content. Since consumers do not know which ads others are seeing when visiting the same websites, each consumer’s personal context is hidden from the others, creating a fragmented epistemic experience and heightening vulnerability to exploitation.

Key Insights

With online targeted advertising, users become more vulnerable because the responsibility for protecting them, both from being shown harmful content and from being denied beneficial content, shifts onto the users themselves. This, in turn, further diminishes the possibilities for fair regulation. Put differently, when individuals face a targeted advertisement alone, without the opportunity for peer review or counter-fact-checking, it becomes increasingly difficult to address harmful content or to notice that relevant content has been omitted. Accordingly, the authors argue that “why epistemic fragmentation matters, moreover, is not just because it limits individuals’ ability to access information that is relevant to them, but also because it limits their ability to assess the quality of content that is accessed by others” (Milano et al., 2021, 469). Missing relevant information might mean, for instance, being unable to verify which medications and treatments against COVID-19 are reliable.

[Table from the original paper, omitted here (Milano et al., 2021, 467)]

Because it is increasingly difficult to detect when a consumer has been narrowly targeted and denied the chance to verify the information against another source or with the help of a third party, there is a clear need for greater awareness of the problem. It is therefore unsurprising that proposed remedies try to counteract intrusive phenomena like OTA by granting individual users access to a wider range of advertising sources. Yet raising consumer awareness is not the most effective path when the challenge is systemic: since every consumer’s context is hidden from others, there is no shared context to appeal to. Nobody can see what others see, nor raise a complaint on their behalf.

More importantly, the harms that Milano et al. frame in terms of advertising extend readily to almost any online experience that curates highly targeted information. If a person interacting with online advertising lacks adequate tools to counter this phenomenon, the same concern applies to other online experiences, such as social media or streaming platforms, that rely heavily on automated systems to curate their content.

Between the lines

While it may well be the case that OTA does not polarize individuals or public opinion as much as filter bubbles do, it is essential to consider and assess its plausible epistemically segregative effects. OTA carries the risk of exposing users to harmful content, but it can also be seriously dangerous by withholding a particular scope of information, even if no adverse consequences materialize. Ultimately, this phenomenon incisively disconnects individual consumers’ experiences from those of their social circles.

