Montreal AI Ethics Institute
Democratizing AI ethics literacy


Atomist or holist? A diagnosis and vision for more productive interdisciplinary AI ethics dialogue

May 31, 2023

🔬 Research Summary by Travis Greene, an Assistant Professor at Copenhagen Business School’s Department of Digitalization with an interdisciplinary background in philosophy and research interests in data science ethics and machine learning-based personalization.

[Original paper by Travis Greene, Amit Dhurandhar, and Galit Shmueli]


Overview: The role of ethics in AI research has sparked fierce debate among AI researchers on social media, at times devolving into counter-productive name-calling and threats of “cancellation.” We diagnose the growing polarization around AI ethics issues within the AI community, arguing that many of these ethical disagreements stem from conflicting ideologies we call atomism and holism. We examine the fundamental political, social, and philosophical foundations of atomism and holism. We suggest four strategies to improve communication and empathy when discussing contentious AI ethics issues across disciplinary divides.


Introduction

As public concern mounts over the use of AI-driven research to enable surveillance technologies, deep fakes, biased language models, misinformation, addictive behaviors, and the discriminatory use of facial recognition and emotion-detection algorithms, data scientists and AI researchers appear divided along ideological lines about what to do. 

Historically, many eminent scientists and mathematicians held that science and values are, and should be, kept separate. Today, a growing number of data science researchers see things differently: they argue that values, particularly those related to the desirability of social and political goals, implicitly influence the foundations of data science practices. While some may view the inclusion of ethics impact statements at several major AI conferences and journals as a sign that the value-neutral ideal of science is no longer tenable, a vocal cadre of data scientists is pushing back, asserting the importance of academic freedom and value neutrality.

We worry that the conflict between these two ideological camps threatens to polarize the AI and data science communities, lowering the prospects that new AI-based technologies will contribute to our collective well-being.

Key insights

An ideology is an all-encompassing worldview advancing a political and ethical vision of a good society. To better understand the nature of ethical disagreements in data science, we introduce a simple ideological taxonomy we call atomism and holism. Our taxonomy aims to make each ideology’s implicit beliefs, assumptions, and historical foundations more explicit so they can be reflected on, refined, and more openly discussed within the AI and data science community as ethical disagreements arise.

To promote more accurate and empathetic AI ethics discussions, we propose four discipline-targeted recipes to bridge these ideological divisions, reduce data science community polarization, and ensure AI research benefits society.

The “two cultures” within the data science community

In 1959, at the height of the Cold War, and as the US military-industrial complex established itself, scientist and writer C.P. Snow worried about a growing divide between two academic cultures—those from the “hard sciences” and the “humanities”—whose specialization rendered them increasingly hostile and unmotivated to communicate with one another. We suggest that a similar dynamic may be stoking division within the larger data science community. Inspired by philosopher and historian of science Thomas Kuhn, we sketch two guiding metaphors capturing core differences between rival atomist and holist research communities.

Atomists: Data scientists as puzzle solvers

Through a process of disciplinary socialization and training, atomists see themselves as acquiring a constellation of shared beliefs, values, and techniques—in short, a paradigm—that permits progress on open problems, or “puzzles,” that the paradigm identifies as solvable. Commitment to the paradigm marks a researcher as a member of a distinct scientific community. During periods of “normal” science, the legitimacy of the paradigm’s values and traditions is presumed, narrowing researchers’ focus to the task of more reliably and efficiently gathering relevant facts. Because the atomist’s scientific identity stems from loyalty to the paradigm, atomists worry that undue focus on external social and ethical issues not only slows down puzzle-solving but also threatens both the integrity of the paradigm and their autonomy.

Holists: Data scientists as social stewards

In contrast, holist data scientists view the growing public concern over the social impact of AI as anomalies signifying a paradigmatic crisis ultimately requiring a paradigm shift. Holists thus propose revolutionary changes in perspective and new disciplinary procedures, open problems, and traditions in AI and data science. In the emergent holist paradigm, data scientists see themselves as social stewards or fiduciaries working on behalf of society and advancing substantive social values and human interests through data science research and applications.

Holists are concerned about unjust power differentials and coercive dependency relationships that may arise due to the applications of AI-based technologies. The role of social steward or fiduciary aligns with holist beliefs that the self is constituted through social relations and that social responsibilities and caring relations are essential for our psychological well-being.

The table below summarizes the “atomist” and “holist” ideologies in the data science community along several core dimensions.

Dimension | Atomists | Holists
Guiding metaphor | puzzle solver | social steward
Facts and values | separate | inseparable
Associated “isms” | (neo)liberalism, libertarianism, logical positivism, modernism | communitarianism, feminism, post-positivism, post-modernism
Social orientation | individualist | collectivist
Self-concept | autonomous | relational
Means of social coordination | incentives and markets | shared moral values and dialogic exchange
Key moral concepts | rights, duties, contracts, impartial justice | empathy, caring, connection, responsivity to vulnerable others
Vision of the good life | neutral and constrained | substantive and unconstrained
Scientific methodology | data-driven, empiricist, neutral | theory-laden, rationalist, perspectival
Extreme form leads to | technocracy / nihilism / alienation | totalitarianism / dogmatism / tribalism

Between the lines

As AI-based technologies increasingly impact society, AI ethics is more relevant than ever. But a more central role for ethics in AI research means that members of long-estranged academic disciplines will be forced to engage with one another, disrupting the current intellectual division of labor and posing new barriers to communication and mutual understanding. We suggest several recipes for improved interdisciplinary dialogue among data scientists, including expanding the data science curriculum to reflect the social impact of AI and to foster various intellectual virtues.

Data scientists and AI researchers holding various ethical viewpoints must learn to empathetically and productively discuss controversial AI ethics issues without resorting to name-calling and threats of violence or cancellation. 

