Montreal AI Ethics Institute
Epistemic fragmentation poses a threat to the governance of online targeting

June 5, 2022 by MAIEI

🔬 Research summary by Itzel Amieva, MA in Philosophy and Economics from the University of Bayreuth, Germany. She is interested in technology, algorithms, and decision architecture.

[Original paper by Milano, S., Mittelstadt, B., Wachter, S., & Russell, C.]


Overview: This paper argues that online targeted advertising (OTA) is not a benign application of machine learning. It creates a phenomenon the authors call epistemic fragmentation, in which users lose a shared informational context with their peers, making it impossible for them to assess whether content beyond their own is beneficial or harmful.


Introduction

The fast-paced growth of conspiratorial and fake-news content online is a perfect example of how vast, diverse, and potentially dangerous online information can be. Because content spreads rapidly across platforms, users are increasingly vulnerable to misinformation, scams, and, in general, the poor content shared daily, whether accidentally or deliberately. Accordingly, Milano et al. investigate the dangers of serving each consumer different (personalized) advertising content. Since consumers do not know what ads others see when visiting the same websites, each consumer's personal context is hidden from the others, creating a fragmented epistemic experience and heightening vulnerability to exploitation.

Key Insights

With online targeted advertising, users become more vulnerable because the responsibility for avoiding harmful content, or for noticing beneficial content that was withheld, shifts onto them. This, in turn, diminishes the prospects for fair regulation. Put differently, when individuals face a targeted advertisement alone, without the opportunity for peer-checked revision or counter-fact-checking, addressing harmful content or becoming aware of omitted relevant content becomes increasingly difficult. Accordingly, the authors argue that “why epistemic fragmentation matters, moreover, is not just because it limits individuals’ ability to access information that is relevant to them, but also because it limits their ability to assess the quality of content that is accessed by others” (Milano et al., 2021, 469). For instance, a consumer deprived of relevant information may be unable to verify which medications and treatments against COVID-19 are reliable.

[Table from the paper] (Milano et al., 2021, 467)

Because it is increasingly hard to detect when a consumer has been narrowly targeted and denied the chance to double-check information against a different source or with the help of a third party, greater awareness of the issue is needed. It is therefore unsurprising that attempts to counteract intrusive online phenomena like OTA focus on granting individual users access to a variety of advertising sources. Yet raising consumer awareness is not the most effective path for uncovering systemic challenges: there is no shared context, since every consumer's context is hidden from the others. Nobody can see what others see, nor raise a complaint on their behalf.

More importantly, however, what Milano et al. frame in terms of advertising harms can easily be extended to almost any online experience that curates highly targeted information. If a person interacting with online advertising lacks adequate tools to counter this phenomenon, the same concern likely applies to other online experiences, such as social media or streaming platforms, that rely heavily on automated systems to curate their content.

Between the lines

While OTA may not polarize individuals or public opinion as much as filter bubbles do, it is essential to consider and assess its plausible epistemically segregative effects. OTA risks exposing users to harmful content, but it can be equally dangerous by withholding a particular scope of information, even when no adverse consequences materialize. Ultimately, this phenomenon seems to incisively disconnect individual consumers' experiences from those of their social circles.

Category: Research Summaries

  • © MONTREAL AI ETHICS INSTITUTE. All rights reserved 2021.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.
  • Learn more about our open access policy here.