Montreal AI Ethics Institute

Democratizing AI ethics literacy


Democracy, epistemic agency, and AI: Political Epistemology in Times of Artificial Intelligence

July 30, 2023

šŸ”¬ Research Summary by Giuliana Luz Grabina, a philosophy undergraduate student at McGill University, with an interest in AI/technology policy and regulation from a gendered perspective.

[Original paper by Mark Coeckelbergh]


Overview: Democratic theories assume that citizens must have some form of political knowledge to vote for representatives or to engage directly in democratic deliberation and participation. However, apart from widespread attention to fake news and misinformation, considerably less attention has been paid to how citizens should acquire that political knowledge in contexts shaped by artificial intelligence and related digital technologies. In this article, Mark Coeckelbergh argues, through the lens of political epistemology, that artificial intelligence (AI) threatens democracy: it risks diminishing citizens’ epistemic agency and thereby undermining the political agency needed in a democracy.


Introduction

According to Habermas’ truth-tracking theory, democracy is a structural and procedural process that pushes toward the truth, that is, toward epistemic robustness. The core features of democracy from a deliberative or discursive perspective—open and free debate, equal status of citizens, a porous and critical public sphere, an independent, active, and accessible free press, the circulation of information, and pluralism—can be understood as the conditions needed to test truth claims. This is grounded in the basic idea that there are better and worse answers to many political questions, and that democracy is designed to arrive at better answers over time. Yet artificial intelligence and big data now exert a pervasive influence on citizens via social media and are therefore likely to affect the knowledge basis of democracy. This leads Coeckelbergh to worry about AI’s influence on political knowledge. That is to say, do citizens have enough epistemic agency in light of AI?

Key Insights

Epistemic Agency, Political Knowledge, and AI: Do citizens have sufficient epistemic agency in light of AI?

Political agency in a democracy seems to rely on epistemic agency. Epistemic agency concerns control over one’s beliefs and over how these beliefs are formed and revised. As the author states, ā€œWe have the capacity to reflect on our beliefs, but it is not clear how much control we have over them.ā€ As a citizen in a democracy, I need to have some control over forming my political knowledge. Indeed, reflecting on one’s beliefs and being willing to discuss them publicly is especially important in the deliberative and participative ideals of democracy. If my political beliefs are manipulated, neither voting nor deliberative democracy gets off the ground: both are based on the premise that citizens, whether as voters or as participants in deliberation, have control over their own political knowledge.

In what follows, the author focuses on 1) trust in one’s own epistemic agency and 2) the influence of AI on the formation and revision of our beliefs. 

  1. Loss of trust in one’s own epistemic agency

The author argues that fake news and misinformation are not just a problem at the level of what knowledge citizens need for democracy (for example, one could argue that democracy needs truth) but are especially damaging at the procedural level of how that knowledge is acquired, since they destroy trust in the socio-epistemic environment. Moreover, given its ubiquity, fake news erodes our trust in others and in ourselves. Indeed, AI knows my political beliefs, and it might even know them better than I do, in the sense that it has knowledge about patterns in the data of my online behavior that I might be unaware of. As the author puts it, ā€œin an environment where it is no longer clear what is true or not, real or not, I cannot exercise my capacities for epistemic agency.ā€

  2. The influence of AI on the formation and revision of our beliefs

The manipulation of beliefs through AI poses a dual threat: to how beliefs are formed and to our ability to control them. In an epistemic environment that reinforces the beliefs already present in a particular online community and makes it less likely that one’s beliefs are confronted with opposing views, the kind of belief formation and revision needed for democracy becomes less likely and more difficult. According to the author, these epistemic bubbles and the resulting reduction of voices are especially troubling for democracy. This holds for the thin idea of democracy, which emphasizes voters’ choice and exposure to different political voices, as well as for the thick idea, which emphasizes discussion or agonistic struggle in a deliberative and participatory democracy; both are hardly possible if there is exposure to only one political voice. As the author puts it, ā€œIf I am reduced to a mere mouthpiece of my bubble, I cannot be a political agent in any (strong or ā€œthickā€) democratic sense.ā€

Policy Recommendations: Developing Politically Responsible AI

If we care about the kind of political agency needed in a democracy, then we ought to take measures to avoid, limit, or reduce these problems and to protect or even enhance epistemic and political agency in light of AI and related technological-social phenomena. For example, as the author suggests, we could encourage citizens to foster epistemic virtues by opening themselves up to different perspectives, experiencing epistemic doubt, thinking critically, and understanding diverse views. However, there is also a role for AI developers. As the author suggests, AI algorithms could be changed so as to disrupt epistemic bubbles rather than foster them. While there have been some examples of software designs that try to ā€˜break’ filter bubbles, such tools are limited in the range of democracy models they use. Indeed, we need more research that links democratic theory to technical work. Policymakers must establish a framework that promotes such interdisciplinary research, encouraging and requiring AI developers and their companies to develop ā€˜democracy-proof’ AI.

As the author emphasizes, ā€œI write ā€œrequiresā€ since given the role of Big Tech and, more generally, private corporations, developing more politically responsible AI is not just a matter of appealing to the individual responsibility of AI researchers but also necessitates policies that push companies to invest in, create, and employ AI that is good for democracy. Self-regulation is unlikely to succeed when companies make a profit from some of the effects discussed in this paper. Regulation and a new distribution of power is needed. In a democracy, the future of AI and society should be decided by the people, not by a handful of companies and their leaders.ā€ Furthermore, given that AI development and its political-epistemic influence have global reach, we need both national frameworks and global governance of AI. However, a significant obstacle lies in the absence of global political institutions: in contrast to the Big Tech corporations that wield considerable power in shaping our technological future, our current institutions are insufficiently supranational.
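To make the bubble-disrupting designs mentioned above more concrete, here is a minimal, hypothetical sketch of one such approach: re-ranking a recommendation feed so that viewpoints already shown are progressively penalized, surfacing opposing perspectives rather than reinforcing a single one. The Item structure, the engagement scores, the viewpoint labels, and the rerank_for_diversity function and its penalty parameter are all invented for illustration; this is not the mechanism proposed in Coeckelbergh’s paper, and real systems would need far richer models of political viewpoint than a coarse label.

```python
# Hypothetical sketch: viewpoint-aware re-ranking of a recommendation feed.
# All names and data here are illustrative assumptions, not a real system.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    engagement: float   # score an engagement-only feed would rank by
    viewpoint: str      # coarse political-viewpoint label (assumed available)

def rerank_for_diversity(items: list[Item], k: int, penalty: float = 0.3) -> list[Item]:
    """Greedily pick k items, discounting viewpoints already shown.

    Each time a viewpoint appears in the slate, later items sharing it
    lose `penalty` per prior appearance, so the final slate mixes
    perspectives instead of reinforcing one bubble.
    """
    remaining = list(items)
    shown_counts: dict[str, int] = {}
    slate: list[Item] = []
    while remaining and len(slate) < k:
        # Effective score = raw engagement minus the diversity penalty.
        best = max(
            remaining,
            key=lambda it: it.engagement - penalty * shown_counts.get(it.viewpoint, 0),
        )
        slate.append(best)
        remaining.remove(best)
        shown_counts[best.viewpoint] = shown_counts.get(best.viewpoint, 0) + 1
    return slate

# Example: an engagement-only ranking would show three "A" items first;
# the diversified slate surfaces viewpoints "B" and "C" earlier.
feed = [
    Item("story 1", 0.9, "A"), Item("story 2", 0.85, "A"),
    Item("story 3", 0.8, "A"), Item("story 4", 0.7, "B"),
    Item("story 5", 0.6, "C"),
]
for item in rerank_for_diversity(feed, k=4):
    print(item.title, item.viewpoint)
```

Even this toy example makes the paper’s point about democracy models visible in code: the single `penalty` knob encodes a thin, exposure-based idea of democracy, whereas a deliberative or agonistic model would demand a quite different objective, which is exactly the gap between democratic theory and technical work that the author says needs more research.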

Between the lines

In this paper, Coeckelbergh offers an example of how political epistemology can be done in a way that is responsive to socio-technological transformations: transformations that emerged only recently but will most likely continue to significantly shape our epistemic and political world. As we navigate this new techno-epistemological terrain, a responsible approach to political epistemology becomes increasingly imperative for fostering a democratic and informed society.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


Ā© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.