🔬 Research Summary by Giuliana Luz Grabina, a philosophy undergraduate student at McGill University with an interest in AI and technology policy and regulation from a gendered perspective.
[Original paper by Mark Coeckelbergh]
Overview: Democratic theories assume that citizens must have some form of political knowledge to vote for representatives or to engage directly in democratic deliberation and participation. However, apart from widespread attention to fake news and misinformation, considerably less attention has been paid to how citizens should acquire that political knowledge in contexts shaped by artificial intelligence and related digital technologies. In this article, Mark Coeckelbergh argues, through the lens of political epistemology, that artificial intelligence (AI) threatens democracy, as it risks diminishing citizens’ epistemic agency and thereby undermining the political agency needed in a democracy.
Introduction
According to Habermas’ truth-tracking theory, democracy is a structural and procedural process that pushes toward truth; in this sense, it is epistemically robust. The core features of democracy from a deliberative or discursive perspective—open and free debate, equal status of citizens, a porous and critical public sphere, an independent, active, and accessible free press, the circulation of information, and pluralism—can be understood as the conditions needed to test truth claims. This is grounded in the basic idea that there are better and worse answers to many political questions and that democracy is designed to arrive, over time, at better answers. Yet artificial intelligence and big data now exert a pervasive influence on citizens via social media and are therefore likely to affect the knowledge basis of democracy. This leads Coeckelbergh to worry about AI’s influence on political knowledge: do citizens have enough epistemic agency in light of AI?
Key Insights
Epistemic Agency, Political Knowledge, and AI: do citizens have sufficient epistemic agency in light of AI?
Democratic political agency seems to rely on epistemic agency. Epistemic agency concerns control over one’s beliefs and how these beliefs are formed and revised. As the author states, “We have the capacity to reflect on our beliefs, but it is not clear how much control we have over them.” As a citizen in a democracy, I need to have some control over the formation of my political knowledge. Indeed, reflection on one’s beliefs and willingness to discuss them publicly are especially important in the deliberative and participative ideals of democracy. If my political beliefs are manipulated, neither voting nor deliberative democracy seems to get off the ground: both are based on the premise that citizens, whether as voters or as participants in deliberation, have control over their own political knowledge.
In what follows, the author focuses on 1) trust in one’s own epistemic agency and 2) the influence of AI on the formation and revision of our beliefs.
- Loss of trust in one’s own epistemic agency
The author argues that fake news and misinformation are not just a problem at the level of what knowledge citizens need for democracy (for example, one could argue that democracy needs truth) but are especially damaging at the procedural level of how that knowledge is acquired, since they destroy trust in the socio-epistemic environment. Moreover, given its ubiquity, fake news erodes our trust in others and in ourselves. Indeed, AI knows my political beliefs, and it might even know them better than I do, in the sense that it has knowledge about patterns in the data of my online behavior that I might be unaware of. As the author puts it, “in an environment where it is no longer clear what is true or not, real or not, I cannot exercise my capacities for epistemic agency.”
- The influence of AI on the formation and revision of our beliefs
The manipulation of beliefs through AI poses a dual threat: to how beliefs are formed and to our ability to control them. In an epistemic environment that reinforces the beliefs already present in a particular online community and makes it less likely that one’s beliefs will be confronted with opposing views, the kind of belief formation and revision needed for democracy becomes less likely and more difficult. According to the author, these epistemic bubbles and the resulting reduction of voices are especially troubling for democracy. This holds for the thin idea of democracy, which emphasizes voters’ choice and exposure to different political voices, as well as for the thick idea, which emphasizes discussion or agonistic struggle in a deliberative and participatory democracy; both are hardly possible if citizens are exposed to only one political voice. As the author puts it, “If I am reduced to a mere mouthpiece of my bubble, I cannot be a political agent in any (strong or “thick”) democratic sense.”
Policy Recommendations: Developing Politically Responsible AI
If we care about the kind of political agency needed in a democracy, then we ought to take measures to avoid, limit, or reduce these problems and to protect or even enhance epistemic and political agency in light of AI and related technological-social phenomena. For example, as the author suggests, we could encourage citizens to foster epistemic virtues: opening themselves up to different perspectives, experiencing epistemic doubt, thinking critically, and seeking to understand diverse views. However, there is also a role for AI developers. As the author suggests, AI algorithms could be changed so as to disrupt epistemic bubbles rather than foster them (see the sketch below for one way this might look). While there have been some examples of software designs that try to ‘break’ filter bubbles, such tools are limited in the range of democracy models they draw on. Indeed, we need more research that links democratic theory to technical work. Policymakers must establish a framework that promotes such interdisciplinary research, encouraging and requiring AI developers and their companies to develop ‘democracy-proof’ AI.
As the author emphasizes, “I write ‘requires’ since given the role of Big Tech and, more generally, private corporations, developing more politically responsible AI is not just a matter of appealing to the individual responsibility of AI researchers but also necessitates policies that push companies to invest in, create, and employ AI that is good for democracy. Self-regulation is unlikely to succeed when companies make a profit from some of the effects discussed in this paper. Regulation and a new distribution of power is needed. In a democracy, the future of AI and society should be decided by the people, not by a handful of companies and their leaders.” Furthermore, given that AI development and its political-epistemic influence have global reach, we need both national frameworks and global governance of AI. A significant obstacle, however, lies in the absence of global political institutions: in contrast to the Big Tech corporations that wield considerable power in shaping our technological future, our current institutions are insufficiently supranational.
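To make the idea of ‘bubble-breaking’ algorithm design more concrete, here is a minimal sketch of a feed re-ranker that trades engagement-style relevance against exposure to viewpoints a user has not yet seen. It is only an illustration under simplifying assumptions: the Item fields, the viewpoint labels, and the diversity_weight parameter are all hypothetical, not Coeckelbergh’s proposal or any deployed platform’s API.

```python
# Hypothetical sketch of a viewpoint-diversifying re-ranker, in the spirit of
# the 'bubble-breaking' designs the paper alludes to. All names and numbers
# here are illustrative assumptions, not a real system's interface.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    relevance: float   # engagement-style score a feed would normally sort by
    viewpoint: str     # coarse label for the political perspective of the source

def rerank(candidates: list[Item], k: int, diversity_weight: float = 0.5) -> list[Item]:
    """Greedy re-ranking: trade pure relevance against viewpoint novelty.

    Each step picks the item maximizing
        (1 - w) * relevance + w * novelty,
    where novelty is 1 if the item's viewpoint is not yet in the feed.
    """
    selected: list[Item] = []
    seen_viewpoints: set[str] = set()
    pool = list(candidates)
    while pool and len(selected) < k:
        def score(item: Item) -> float:
            novelty = 0.0 if item.viewpoint in seen_viewpoints else 1.0
            return (1 - diversity_weight) * item.relevance + diversity_weight * novelty
        best = max(pool, key=score)
        pool.remove(best)
        selected.append(best)
        seen_viewpoints.add(best.viewpoint)
    return selected

if __name__ == "__main__":
    feed = [
        Item("Op-ed A", 0.95, "viewpoint-1"),
        Item("Op-ed B", 0.93, "viewpoint-1"),
        Item("Op-ed C", 0.90, "viewpoint-1"),
        Item("Report D", 0.70, "viewpoint-2"),
        Item("Essay E", 0.60, "viewpoint-3"),
    ]
    for item in rerank(feed, k=3):
        print(item.title, item.viewpoint)
```

On this sample feed, a pure-relevance ranking would show three items from the same viewpoint, while the re-ranker surfaces three different ones. Notably, even this toy encodes only a thin, exposure-based model of democracy in a single weight; it says nothing about deliberation or agonistic struggle, which echoes the paper’s point that existing bubble-breaking tools cover a narrow range of democracy models.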
Between the lines
In this paper, Coeckelbergh offers an example of how political epistemology can be done in a way that is responsive to socio-technological transformations: transformations that have emerged only recently but will most likely continue to significantly shape our epistemic and political world. As we navigate this new techno-epistemological terrain, embracing a responsible approach to political epistemology becomes increasingly imperative for fostering a democratic and informed society.