
Toward an Ethics of AI Belief

August 6, 2023

🔬 Research Summary by Winnie Ma and Vincent Valton.

Winnie: Assistant Professor of Philosophy at King’s College London and Research Associate at the Sowerby Philosophy & Medicine Project specializing in the ethics of belief and the philosophy of AI.

Vincent: Machine Learning Scientist working in industry and affiliated with University College London’s Institute of Cognitive Neuroscience, specializing in computational neuroscience.

[Original paper by Winnie Ma and Vincent Valton]


Overview: Philosophical research in AI has hitherto largely focused on the ethics of AI. In this paper, we, an ethicist of belief and a machine learning scientist, suggest that a novel area of research needs to be pursued in AI – the epistemology of AI and, in particular, the ethics of belief for AI. We suggest four topics in extant work on the ethics of (human) belief that can be applied to the ethics of AI belief, including the possibility that AI algorithms such as the COMPAS algorithm may doxastically wrong persons in virtue of their predictive beliefs and may perpetrate a new kind of belief-based discrimination. We also discuss two important, relatively nascent areas of philosophical research that have not yet been recognized as research in the ethics of AI belief: the epistemic decolonization of AI and epistemic injustice in AI.


Introduction

Are you morally wronged when an algorithm, like the COMPAS algorithm, forms a predictive profiling belief about you, such as a belief about your likelihood of committing a crime?

We, a philosopher and an ML scientist, suggest you may be. And more generally, we suggest that there is a novel and practically important area of research to be pursued in the ethics of belief for AI. We argue that numerous areas in this field of philosophy are both applicable and highly salient with respect to the AI domain, including but not limited to the controversial possibility that agents can doxastically wrong each other – that is, morally wrong persons just in virtue of what they believe about them. We point out that beliefs have long been a core part of AI and of subfields of machine learning (for example, since Judea Pearl first introduced belief networks and belief propagation). And we suggest that the moral and practical dimensions of beliefs formed by artificially intelligent agents and algorithms in critical environments (e.g., healthcare, judicial and financial systems, etc.) such as the COMPAS algorithm require further analysis and, potentially, regulation.
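To make concrete the sense of “belief” at work in Pearl-style belief networks, here is a minimal sketch in Python of a single belief update: a prior probability distribution over a hidden variable is revised into a posterior “belief” once evidence is observed. All of the numbers are purely hypothetical and chosen only for illustration.

```python
import numpy as np

# Prior belief over a hidden binary variable H, e.g. P(H = 1) = 0.3.
prior = np.array([0.7, 0.3])          # [P(H=0), P(H=1)]

# Likelihood of the observed evidence E under each value of H.
# These numbers are purely illustrative.
likelihood = np.array([0.2, 0.9])     # [P(E | H=0), P(E | H=1)]

# Posterior belief after observing E, by Bayes' rule:
#   P(H | E) is proportional to P(E | H) * P(H)
unnormalised = likelihood * prior
posterior = unnormalised / unnormalised.sum()

print("prior belief:    ", prior)      # [0.7 0.3]
print("posterior belief:", posterior)  # approx. [0.34 0.66]
```

Belief propagation generalizes exactly this kind of update to networks of many interdependent variables, passing “messages” between nodes so that each node’s belief reflects all of the available evidence.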

Key Insights

Can AI predictive beliefs morally wrong persons?

“Brisha Borden is highly likely (risk score 8) to criminally recidivate.”

This is the prediction – dare we call it a predictive “belief” – that the now-infamous COMPAS algorithm generates about defendants in the US criminal justice system. Judges may then use these predictions to determine eligibility for probation and treatment programs and in sentencing decisions.
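COMPAS itself is proprietary, so we cannot show how it actually computes its scores; the sketch below is only a hypothetical illustration of how a decile risk score of the kind quoted above could be derived from a model’s predictive probability. The features, weights, and cut-offs are invented for illustration and do not reflect the real system.

```python
import numpy as np

def predicted_probability(features: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """A toy logistic model: the algorithm's degree of 'belief' that the
    defendant will recidivate, expressed as a probability in (0, 1)."""
    return float(1.0 / (1.0 + np.exp(-(features @ weights + bias))))

def decile_risk_score(probability: float) -> int:
    """Map the probability onto a 1-10 decile score of the kind COMPAS-style
    tools report; real tools calibrate these cut-offs on a norm group."""
    return int(min(9.0, probability * 10)) + 1

# Hypothetical, already-scaled defendant features and invented learned weights.
features = np.array([0.4, 1.2, 0.7])
weights = np.array([0.8, 1.1, -0.3])
bias = -0.5

p = predicted_probability(features, weights, bias)
print(f"predicted probability of recidivism: {p:.2f}")   # approx. 0.72
print(f"decile risk score: {decile_risk_score(p)}")      # 8
```

Whether the probability computed here deserves to be called the system’s “belief” about the defendant, and whether merely generating it can wrong her, is precisely the question the ethics of AI belief asks.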

Much has been written about the COMPAS algorithm’s biased nature, particularly against Black defendants, who are much more likely than White defendants to be incorrectly labeled as high recidivism risks despite not going on to re-offend. That the predictions generated by COMPAS can lead to unjustly biased probation, treatment, and sentencing decisions is a clear moral wrong, one that researchers promoting algorithmic fairness and working to eliminate algorithmic bias are analyzing and remedying.

However, there is an additional, intuitive kind of moral wrong that might be thought to occur in the COMPAS algorithm’s generation of these predictive “beliefs”, one that has not yet been fully recognized. This additional moral wrong could consist in or be analogous to what philosophers – in particular, ethicists of belief, who work at the intersection of epistemology (the study of rationality, beliefs, and knowledge) and ethics (the study of what’s morally right and wrong) – have called doxastic wronging, which, as Rima Basu and Mark Schroeder (2019) define it, involves morally wronging agents in virtue of the beliefs that we hold about them.

In particular, these philosophers suggest that we can be morally wronged by agents just by what they believe about us, regardless of whether these beliefs are made known to us or acted upon. Thus, for example, a shop security guard who prejudicially profiles a racially marginalized shopper to form the belief that they are likely to shoplift but who doesn’t speak or act on their belief nevertheless morally wrongs the racially marginalized shopper in virtue of their belief. This is what Basu and Schroeder argue. And, of course, if the security guard were to verbally accuse the shopper of shoplifting and/or take action (such as stopping the shopper for questioning and bag search) based on their profiling belief, these would constitute additional speech- and action-based moral wrongs committed against the shopper.

Likewise, it seems intuitive that the predictive “belief” formed by the COMPAS algorithm about Brisha Borden morally wrongs her, regardless of whether it is made known to Brisha or, for example, acted upon by the judge in sentencing. This appears, furthermore, to be a kind of moral wronging, and perhaps a kind of discrimination, committed in virtue of the COMPAS algorithm’s prediction, one that has hitherto not been fully recognized.

Whether artificial agents like the COMPAS algorithm – as well as other algorithms used by banks, insurance companies, and healthcare providers that problematically profile individuals to form predictions about their creditworthiness, risk of default, or healthcare needs – can doxastically wrong individuals is just one interesting and important question that can be addressed in a novel field of philosophical research: the ethics of AI belief.

Moving Toward an Ethics of Belief for AI

We argue that there are many other interesting and important topics in the ethics of AI belief. One topic we discuss is the epistemic decolonization of AI, particularly regarding large language models (LLMs) such as ChatGPT. Given the linguistic relativity hypothesis – roughly, the claim that each language is associated with a particular set of concepts and way of seeing the world – LLMs may perpetuate colonial conceptual ontologies, since most major LLMs are trained primarily on English, a colonial language. The primacy of English in AI research and AI education also means that such powerful AI resources will remain inaccessible to many non-English-speaking communities.

Researchers have also already pointed out that there are various kinds of epistemic injustice (injustice done to agents in their capacities as knowers) with respect to AI. And we suggest that, just as with human agents, AI may be morally obligated to hold certain beliefs – for example, an algorithm may be morally obligated to believe an individual’s self-expressed gender identity rather than any conflicting external gender attributions. We also discuss the ethics of belief involved in AI profiling more generally, among various other topics.

Between the lines 

We take it that the ethics of AI belief is an exciting new area of AI research that urgently needs to be explored. The catalog of specific topics proposed for research in the paper is certainly not meant to be an exhaustive list of potential research areas. However, we feel that the topics discussed in the paper are important areas for future investigation in AI, with significant implications for social justice, including identifying and eliminating a potentially new kind of discriminatory practice involving doxastic wronging by AI. Myriad essential epistemic aspects of AI should not be overlooked in our efforts to develop and deploy ethical and safe AI. We want to stress, furthermore, the importance of interdisciplinary and collaborative work – which will have to begin with establishing a common language around key terms like “belief” – not just between ethicists and AI experts but also involving epistemologists, logicians, philosophers of mind, metaphysicians, and experts from other disciplines.
