
Reprogramming the Public Sphere: AI and News Visibility on Social Media

January 5, 2026

✍️ By Natalie Jenkins from Encode Canada.

Natalie is an MSc candidate in Digital Policy at University College Dublin, based in Toronto, Ontario. She is interested in the impacts of emerging technologies on human rights and democracy. Natalie is a writer at Encode Canada and a journalist at Estonian Life newspaper in Toronto.


📌 Editor’s Note: This piece is part of our Recess series, featuring university students from Encode’s Canadian chapter at McGill University. The series aims to promote insights from university students on current issues in the AI ethics space. In this article, Natalie Jenkins examines the fragmentation of the public sphere resulting from algorithmic social media.

Photo credit: Jakob Owens on Unsplash.


When’s the last time you read the news? Maybe you read it every morning, every so often, or not at all, because sometimes you scroll past a headline on social media and that feels sufficient. If this sounds familiar, you may be surprised to learn that you’re among the 62% of young Canadians aged 15 to 24 who got their news from social media in 2023. To this, you may say: sure, Gen Zers use their phones for everything; that’s not surprising. We live in a digital world, after all. What’s the harm in that?

There’s plenty. We’ve progressed past digitization into a world of automation. Digital platforms and their AI-powered algorithms not only host content but also control who sees what information, and when. Corporate interests have superseded editorial judgement over news visibility, posing serious risks to a pluralistic public sphere and, ultimately, to the health of democracy itself.

AI-powered social media

Digital social media platforms have fundamentally transformed how news is produced and consumed. Traditionally, publishers distributed news directly to their audiences in the form of regularly scheduled content. Journalists acted as qualified, professional gatekeepers, deciding what news would be seen and when. These decisions were based on principles of accountability, ethics, and professionalism. 

Today’s news landscape paints a much different picture. By providing unrestricted access to information at any time, platforms have undermined publishers’ control over news distribution. As young audiences increasingly turn to social media as their primary news source, platforms, rather than journalists, have become the main intermediaries of news.

But platforms are not neutral. They make their own decisions about the type of content they host and the conditions under which it becomes visible. While these decisions differ across platforms, they are commonly shaped by ideological biases and commercial interests. For platforms, maximizing profit means maximizing engagement. To achieve this, many have integrated AI-powered recommendation systems into their designs. These systems use machine learning to analyze users’ data (viewing time, likes, shares, comments, and so on) to predict what type of content will keep them engaged. Once they can infer users’ interests, priorities, and behavioral patterns, these recommender systems algorithmically curate hyper-personalized content. TikTok’s recommender system, for instance, is so powerful that it can learn the vulnerabilities and interests of a user in less than 40 minutes.
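To make the mechanism concrete, here is a minimal sketch of an engagement-based ranker. It is illustrative only, not any platform’s actual code; the `Interaction` fields and the signal weights are assumptions chosen to mirror the signals listed above (viewing time, likes, shares).

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """One engagement event, reduced to the signals named above."""
    item_topic: str
    watch_seconds: float
    liked: bool
    shared: bool

def engagement_score(history: list[Interaction], topic: str) -> float:
    """Predict how engaging a candidate topic is from past behaviour alone."""
    score = 0.0
    for event in history:
        if event.item_topic == topic:
            score += event.watch_seconds / 60.0    # watch time
            score += 2.0 if event.liked else 0.0   # likes weigh more than views
            score += 3.0 if event.shared else 0.0  # shares weigh most
    return score

def rank_feed(history: list[Interaction], candidates: list[str]) -> list[str]:
    """Order candidate topics so the most engaging ones surface first."""
    return sorted(candidates, key=lambda t: engagement_score(history, t), reverse=True)

history = [
    Interaction("outrage-politics", watch_seconds=180, liked=True, shared=True),
    Interaction("local-news", watch_seconds=12, liked=False, shared=False),
]
print(rank_feed(history, ["local-news", "outrage-politics", "gardening"]))
# ['outrage-politics', 'local-news', 'gardening']
```

Notice that nothing in the ranking consults accuracy or civic value: whatever the user lingered on last week is what surfaces more of next week, which is the feedback loop described below.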

While hyper-personalization is often framed as a convenience, it comes at a significant cost: it negates the need for users to sift through diverse sources to form opinions. Over time, this repeated exposure sorts users into echo chambers, undermining their ability to meaningfully engage with alternative perspectives. This is especially concerning given that social media algorithms are now deliberately suppressing news altogether. For example, Facebook has tweaked its recommender systems to “deprioritize” news, resulting in a steep decline in news visibility: “reactions to news declined by 78% between 2021 and 2024 while reactions to non-news pages increased.”

Rise of the political influencer 

Another consequence of engagement-maximizing algorithms is the amplification of provocative content, which incentivizes the production of misinformation. Within this environment, political influencers have gained popularity: individuals who consistently post political content online and shape public opinion. While they vary widely in ideology, far-right influencers warrant particular concern. Figures like Alex Jones, Andrew Tate, and Ben Shapiro have amassed sizable fanbases by exploiting engagement-based metrics. They adopt the visual language of professional journalism, lending themselves a degree of charismatic authority, yet they are bound by no editorial accountability and rely on no institutional credibility. Their visibility is sustained by recommender systems that reward outrage.

Political influencers also gain visibility by exploiting viewers’ insecurities. This content is subsequently amplified by algorithmic processes that infer users’ psychological profiles. For example, information studies researchers found that when users engaged with content addressing feelings of loneliness, just five days of use triggered a four-fold increase in misogynistic content appearing in their “For You” feeds.

These algorithmic systems are fragmenting the public sphere. Though presented as neutral, they are designed to generate profit, privileging content that provokes rather than informs. This means we must approach what we encounter online critically. Otherwise, we stay blind to the fact that AI-powered social media reproduces existing inequalities at scale, leaving us more vulnerable to the effects of misinformation.

Policy options  

A significant barrier to mitigating these effects is the opacity of platforms’ algorithms. Such secrecy makes it difficult for users to understand why certain content appears in their feeds. It also creates a structural asymmetry between journalists and platforms, in which profit incentives, rather than civic values, dictate news visibility.

Regulatory efforts should therefore focus on increasing platforms’ algorithmic transparency. The EU’s Digital Services Act (DSA) does this by holding very large online platforms (VLOPs) accountable for their systemic risks, including those posed by AI-powered recommender systems. Article 27, for example, requires platforms to clearly explain the main parameters used in their recommender systems and to outline options for users to modify them. The DSA also empowers users by giving them control over how their feeds are organized: Article 38 requires platforms to “provide at least one option for each of their recommender systems which is not based on profiling,” such as a chronologically sorted feed.
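As a rough illustration of what an Article 38 option could look like in practice (a sketch under assumed data structures, not the regulation’s prescribed implementation), the non-profiling alternative can be as simple as a flag that switches the ranker from personalized scores to reverse-chronological order:

```python
from datetime import datetime, timezone

def order_feed(posts: list[dict], personal_scores: dict[str, float],
               use_profiling: bool) -> list[dict]:
    """Rank with the personalized model, or honour the user's opt-out (DSA Art. 38)."""
    if use_profiling:
        return sorted(posts, key=lambda p: personal_scores[p["id"]], reverse=True)
    # Non-profiling option: no user data is consulted; newest posts come first.
    return sorted(posts, key=lambda p: p["posted_at"], reverse=True)

posts = [
    {"id": "a", "posted_at": datetime(2025, 11, 2, tzinfo=timezone.utc)},
    {"id": "b", "posted_at": datetime(2025, 11, 3, tzinfo=timezone.utc)},
]
scores = {"a": 0.9, "b": 0.1}  # the profiling model favours post "a"
print([p["id"] for p in order_feed(posts, scores, use_profiling=True)])   # ['a', 'b']
print([p["id"] for p in order_feed(posts, scores, use_profiling=False)])  # ['b', 'a']
```

The point of the toggle is architectural: the chronological path never touches the user’s behavioural data, so opting out genuinely removes profiling rather than merely hiding it.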

Canada should look to the DSA as a model for its own framework. Previous efforts, such as the Online Harms Act, were shelved before becoming law. Canada must come to terms with the fact that platform regulation is not a matter of if, but when.

Conclusion

AI has fundamentally transformed media ecosystems and will continue to shape them in ways we have yet to discover. In this evolving landscape, digital media literacy is essential: without the knowledge to critically engage with the technologies we use, platforms retain the power to shape our worldviews. To protect a pluralistic and democratic public sphere, Canada must confront Big Tech’s influence through robust platform regulation and sustained investment in media literacy. Otherwise, AI systems will continue to erode democratic communication rather than support it.


References

Alex Jones Network [@RealAlexJones]. (n.d.). Posts [X profile]. X. Retrieved December 10, 2025, from https://x.com/RealAlexJones

Ben Shapiro Show [@BenShapiroShow]. (n.d.). Posts [X profile]. X. Retrieved December 10, 2025, from https://x.com/BenShapiroShow 

Chow, A. R. (2025, January 7). Why Meta’s fact-checking change could lead to more misinformation on Facebook and Instagram. Time. https://www.time.com/7205332/meta-fact-checking-community-notes/ 

Cobratate [@Cobratate]. (n.d.). Posts [X profile]. X. Retrieved December 10, 2025, from https://x.com/Cobratate

Cranz, A., & Brandom, R. (2021, October 3). Facebook encourages hate speech for profit, says whistleblower. The Verge. https://www.theverge.com/2021/10/3/22707860/facebook-whistleblower-leaked-documents-files-regulation

Finlayson, A. (2022). YouTube and political ideologies: Technology, populism and rhetorical form. Political Studies, 70(1), 62-80. 

Haggart, B., & Tusikov, N. (2021, June 9). Is Canada ready for the platform regulation debate? Centre for International Governance Innovation. https://www.cigionline.org/articles/is-canada-ready-for-the-platform-regulation-debate/

Hermann, E. (2022). Artificial intelligence and mass personalization of communication content—An ethical and literacy perspective. New Media & Society, 24(5), 1258-1277.

Kang, H., & Lou, C. (2022). AI agency vs. human agency: understanding human–AI interactions on TikTok and their implications for user engagement. Journal of Computer-Mediated Communication, 27(5), zmac014.

Kleis Nielsen, R., & Ganter, S. A. (2018). Dealing with digital intermediaries: A case study of the relations between publishers and platforms. New Media & Society, 20(4), 1600-1617.

Lane, C. (2024, February 5). Social media algorithms amplify misogynistic content to teens. UCL News. https://www.ucl.ac.uk/news/2024/feb/social-media-algorithms-amplify-misogynistic-content-teens

Olsen, R. K., Solvoll, M. K., & Futsæter, K. A. (2022). Gatekeepers as safekeepers—Mapping audiences’ attitudes towards news media’s editorial oversight functions during the COVID-19 crisis. Journalism and Media, 3(1), 182-197. 

Pehlivan, Z., Zhu, J., Ross, C., Jiang, D., Park, S., Chan, E., Phillips, J., & Bridgman, A. (2025, November). Power shift: The rise of political influencers in Canada. Media Ecosystem Observatory. https://meo.ca/work/power-shift-the-rise-of-political-influencers-in-canada

Regulation (EU) 2022/2065. On a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act). European Parliament, Council of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32022R2065 

Statistics Canada. (2024, June 25). Scrolling through the social media stats. https://www.statcan.gc.ca/o1/en/plus/6540-scrolling-through-social-media-stats 

Sun, H. (2023). The right to know social media algorithms. Harvard Law & Policy Review, 18(1). https://ssrn.com/abstract=4944976

Talaga, S., Wertz, E., Batorski, D., & Wojcieszak, M. (2025). Changes to the Facebook algorithm decreased news visibility between 2021-2024. arXiv preprint arXiv:2507.19373.

Wilding, D., Fray, P., Molitorisz, S., & McKewon, E. (2018). The impact of digital platforms on news and journalistic content. Digital Platforms Inquiry.

