
Group Fairness Is Not Derivable From Justice: A Mathematical Proof

March 11, 2022

🔬 Research summary by Nicolò Cangiotti and Michele Loi, who are postdoctoral researchers in the Department of Mathematics at Politecnico di Milano.

[Original paper by Nicolò Cangiotti and Michele Loi]


Overview: Many legal procedures are now based on algorithms, and all algorithms are based on mathematics. Yet it is far from trivial to introduce the ethical concepts of justice and fairness into this formal context. This paper exploits a mathematical structure “to show that theories of justice do not provide a sufficient normative grounding for reasonable accounts of group fairness”.


Introduction

The debate around the concepts of justice and fairness is becoming increasingly relevant, especially in relation to the spread of algorithmic decision-making across many human domains, including AI in law. In our paper, we propose an abstract mathematical language that introduces a formal distinction between fairness and justice. This formal framework highlights analogies between predictive algorithms and the way any legal procedure works. We then argue that in an imperfect criminal law procedure (one in which mistakes are possible), we cannot be fair with respect to all possible groups of individuals.

In particular, we present a coherent mathematical argument for deterministic procedures leading to binary decisions (an individual is either convicted or acquitted in a trial). It turns out that “unless the procedure is perfect, one can always identify at least two morally arbitrary groups, relative to which the procedure is not fair.”
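To make the claim concrete, here is a minimal illustrative sketch (our own construction, not code from the paper): a deterministic binary procedure that makes a single mistake, together with two singleton groups of equally innocent individuals relative to which the procedure is not fair.

```python
# Illustrative sketch (ours, not the paper's): any imperfect deterministic
# binary procedure treats two equally deserving individuals differently.

# Each individual is a pair: (truly_guilty, convicted_by_procedure).
population = [
    (True,  True),   # guilty and convicted   (correct decision)
    (False, False),  # innocent and acquitted (correct decision)
    (False, True),   # innocent but convicted (the procedure's one mistake)
]

# Two singleton groups of innocents: a morally arbitrary grouping, since no
# theory of justice justifies treating two equally innocent people differently.
group_a = [p for p in population if not p[0] and p[1]]      # innocent, convicted
group_b = [p for p in population if not p[0] and not p[1]]  # innocent, acquitted

conviction_rate = lambda group: sum(c for _, c in group) / len(group)
print(conviction_rate(group_a), conviction_rate(group_b))   # 1.0 vs 0.0: unfair
```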

Is group fairness derivable from justice?

A World Between Justice and Fairness

The concepts of justice and fairness may be treated as interchangeable if we do not assume that they represent genuinely different ideas. Taking as our starting point that the two notions are distinct, one could try to define fairness with the same normative elements used to define justice. However, it turns out that group fairness can only be achieved in an absolute sense by procedures involving a non-deterministic element. This result can be read either as an insight about the impossibility of group fairness, or as a reason to ground the definition of group fairness in approaches other than theories of justice.

Our work focuses on imperfectly just procedures: those that do not guarantee a perfectly just distribution. The definition of group fairness from which we start, as a property of procedures, is an intuitive one: a procedure is group fair only if it does not favor, in a morally arbitrary way (intentionally or unintentionally), any individual belonging to one group over an individual belonging to a different group.

Computational Thinking in Procedural Law

Let’s try to think about the problem of justice and fairness as a computer would, namely in mathematical terms. First, we define a procedure as a rule leading to the allocation of benefits, burdens, or harms of various kinds. More precisely, by a procedure we mean a sequence of actions such that, when a given criterion is satisfied, a given outcome is produced for the individual. One could object that some procedures are extremely complicated, involving a large number of criteria that jointly determine a decision. However, it is not overly restrictive to suppose, for the sake of simplicity, that the single criterion corresponds to a combination of criteria whose joint satisfaction is necessary and sufficient for the decision concerning the individual.
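As a toy illustration of this simplification (our own sketch, with hypothetical field names and thresholds), such a procedure can be modeled as a deterministic function from an individual’s features to a binary outcome:

```python
# Minimal sketch (hypothetical criterion and field names) of a deterministic
# binary procedure: one combined criterion fully determines the outcome.

def criterion(individual: dict) -> bool:
    # A conjunction of sub-criteria, treated as a single combined criterion.
    return individual["evidence_strength"] > 0.8 and individual["witnesses"] >= 2

def procedure(individual: dict) -> str:
    # Deterministic: the same input always produces the same outcome.
    return "convicted" if criterion(individual) else "acquitted"

print(procedure({"evidence_strength": 0.9, "witnesses": 3}))  # convicted
print(procedure({"evidence_strength": 0.9, "witnesses": 1}))  # acquitted
```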

To provide a more intuitive point of view, we adapt the well-known ROC (Receiver Operating Characteristic) space to represent all possible procedures in our model by a schematic diagram involving the rates at which guilty (positive) and innocent (negative) individuals are convicted. This pictorial device allows us to evaluate each possible procedure case by case, computing the corresponding outcome when a criterion comes into play.
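A rough sketch of this representation (our own construction; the paper uses diagrams, not code) computes a procedure’s coordinates in the adjusted ROC space from the conviction rates among the guilty and among the innocent:

```python
# Sketch: locate a deterministic procedure in ROC space by computing the
# conviction rates among the guilty (y-axis) and the innocent (x-axis).

def roc_point(population, procedure):
    guilty = [p for p in population if p["guilty"]]
    innocent = [p for p in population if not p["guilty"]]
    tpr = sum(procedure(p) for p in guilty) / len(guilty)      # guilty convicted
    fpr = sum(procedure(p) for p in innocent) / len(innocent)  # innocent convicted
    return (fpr, tpr)  # a perfect procedure sits at (0, 1)

population = [
    {"guilty": True,  "evidence": 0.9},
    {"guilty": True,  "evidence": 0.6},
    {"guilty": False, "evidence": 0.7},
    {"guilty": False, "evidence": 0.2},
]
convicts = lambda p: p["evidence"] > 0.65   # hypothetical decision criterion
print(roc_point(population, convicts))      # (0.5, 0.5): an imperfect procedure
```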

What is the price of group fairness?

The mathematical formalization captured by the ROC diagrams led us to the counterintuitive conclusion that requiring group fairness with respect to all imaginable morally arbitrary groups (which could also include the single individual as a group) can only be achieved by abandoning determinism. In a non-deterministic procedure, a given initial state may lead to one outcome in one case and a different outcome in another, without any change in its inputs, under normal operating conditions. This is problematic, since such a random element is exactly the opposite of what justice should aim for: a clear moral ground justifying each outcome.
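The following sketch (our illustration, not the paper’s) shows the degenerate limit of this idea: a procedure that convicts each individual with the same fixed probability, ignoring its inputs entirely, gives every conceivable group the same expected conviction rate, yet severs the outcome from any moral ground.

```python
import random

# Sketch: a non-deterministic procedure that ignores its inputs entirely.
# Every group, however defined, faces the same expected conviction rate p,
# so no group is favored; but the outcomes are morally arbitrary by design.

def coin_flip_procedure(individual, p: float = 0.5) -> bool:
    # The same input can yield different outcomes on different runs.
    return random.random() < p  # True means "convicted"
```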

Could one take this as a reason to reject group fairness altogether? Certainly, one could assess imperfect procedures simply by how well they minimize injustice, disregarding group-distributive effects altogether. A less radical option, however, is to amend our account of group fairness. This requires a different theory of what makes certain groups “morally arbitrary”, one in which the concept of the morally arbitrary is not reducible to “something other than what, morally speaking, justifies inequality”. This is the direction in which we want to push our argument: we believe our result is best interpreted as supporting the quest for such a philosophical view.

Between the lines

As our work tries to highlight, the relation between justice and group fairness (understood as a quality of imperfect procedures) requires in-depth examination. Starting from reasonable definitions, we show that this relation is deeply problematic: an intrinsic incompatibility already arises at our first level of approximation for a given procedure (namely, a binary deterministic procedure). Our paper aims to stimulate discussion around group fairness in imperfect procedures, regarded as a general type of procedure, conceptually broader than statistical models.

