Montreal AI Ethics Institute

Democratizing AI ethics literacy


Epistemic fragmentation poses a threat to the governance of online targeting

June 5, 2022

🔬 Research summary by Itzel Amieva, MA in Philosophy and Economics from the University of Bayreuth, Germany. She is interested in technology, algorithms, and decision architecture.

[Original paper by Milano, S., Mittelstadt, B., Wachter, S., & Russell, C.]


Overview: This paper argues that online targeted advertising (OTA) is not a benign application of machine learning. It produces a phenomenon the authors call epistemic fragmentation, in which users lose contact with their peers’ informational contexts and are therefore unable to assess whether content beyond their own feed is beneficial or harmful.


Introduction

The fast-paced growth of conspiratorial and fake news content online is a clear example of how vast, diverse, and potentially dangerous online information can be. Because such content spreads rapidly across platforms, users are increasingly vulnerable to misinformation, scams, and, more generally, poor-quality content shared daily, whether accidentally or deliberately. Against this backdrop, Milano et al. examine the dangers of serving each consumer different (personalized) advertising content. Since consumers do not know what ads others are seeing on the same websites, each consumer’s personal context is hidden from the others, creating a fragmented epistemic experience and heightening their vulnerability to exploitation.

Key Insights

With online targeted advertising, users become more vulnerable because the responsibility for avoiding harmful content, and for noticing when beneficial content is withheld, shifts onto them. This, in turn, narrows the possibilities for fair regulation. Put differently, when individuals face a targeted advertisement alone, without the opportunity for peer review or counter-fact-checking, it becomes far more difficult to address harmful content or to become aware of relevant content that has been omitted. As the authors argue, “why epistemic fragmentation matters, moreover, is not just because it limits individuals’ ability to access information that is relevant to them, but also because it limits their ability to assess the quality of content that is accessed by others” (Milano et al., 2021, 469). Missing relevant information can mean, for instance, being unable to verify reliable medications and treatments for COVID-19.

[Table reproduced from the original paper (Milano et al., 2021, 467)]

It is increasingly hard to detect when a consumer has been heavily targeted and denied the chance to double-check the information against a different source or with the help of a third party, so greater awareness of the issue is needed. It is not surprising, then, that attempts to counteract intrusive phenomena like OTA focus on granting individual users access to various advertising sources. Yet raising consumer awareness is not the most effective response to what is a systemic challenge: because every consumer’s context is hidden from everyone else, there is no shared context. Nobody can see what others see, nor raise a complaint on their behalf.

More importantly, however, what Milano et al. frame in this paper in terms of advertising harms can readily be extended to almost any online experience that curates highly targeted information. If a person interacting with online advertising lacks adequate tools to confront this phenomenon, the same concern should be addressed in other online experiences, such as social media or streaming platforms, which rely heavily on automated systems to curate their content.

Between the lines

While it may well be the case that OTA does not polarize individuals or public opinion as much as filter bubbles do, it is essential to consider and assess its potential epistemically segregating effects. OTA carries the risk of exposing users to harmful content, but it can also be seriously harmful by withholding a particular scope of information, even when no adverse consequences materialize. Ultimately, this phenomenon cuts individual consumers’ experiences off from the experiences of their social circles.
