Montreal AI Ethics Institute

Democratizing AI ethics literacy


Harnessing Collective Intelligence Under a Lack of Cultural Consensus

January 27, 2024

🔬 Research Summary by Necdet Gürkan, a Ph.D. candidate at the Stevens Institute of Technology working with Jordan W. Suchow to explore the intersection of cognitive science and information systems.

[Original paper by Necdet Gürkan and Dr. Jordan W. Suchow]


Overview: Harnessing collective intelligence (CI) to solve complex problems benefits from the ability to detect and characterize heterogeneity in consensus beliefs. This is particularly true in domains where a consensus among respondents defines an intersubjective “ground truth,” leading to a multiplicity of ground truths when subsets of respondents sustain mutually incompatible consensuses. In this paper, we extend Cultural Consensus Theory, a classic mathematical framework for detecting divergent consensus beliefs, to allow culturally held beliefs to take the form of a deep latent construct: a fine-tuned deep neural network that maps features of a concept or entity to the consensus response among a subset of respondents via a stick-breaking construction.


Introduction

Opinions and judgments of a group often represent either objective truths, like image annotations, or subjective views, like product ratings. However, in many cases, a consensus among respondents establishes an intersubjective “ground truth,” allowing for multiple truths if respondents hold conflicting consensuses. For example, a computer vision algorithm might predict a label from an image, not acknowledging that respondents might have varying beliefs about the appropriate label. Similarly, a natural language processing tool might assess if content aligns with community norms or is hate speech without considering varying acceptable standards. Many concepts are influenced by personal and societal beliefs, leading to potentially differing perspectives.

Currently, social scientists and organizations study group-level variation but do not leverage machine-learning techniques for insight. Meanwhile, computer scientists and behavioral scientists use machine-learning methods to model human behavior without considering conflicting culturally constructed beliefs.

This paper addresses these limitations across disciplines by enhancing the Cultural Consensus Theory (CCT)—a classic mathematical model for studying consensus beliefs—with a latent mapping. This mapping uses pretrained deep neural network embeddings of entities to capture the consensus beliefs about those entities among one or more subsets of respondents.

Key Insights

Collective Intelligence – Wisdom of the Crowd

Collective intelligence refers to decision-making processes that draw on the opinions of multiple individuals. This often results in higher-quality decisions than those made by a single individual, a phenomenon commonly termed the “wisdom of the crowd.” The long-standing practice of employing software to consolidate ideas and opinions from groups of various sizes is evident in decision support systems, knowledge management systems, and other related tools. Applications of these technologies include algorithmic information-pooling mechanisms, data enrichment through human image and text annotations, sentiment analysis, recommendation through collaborative filtering, and decentralized content moderation.
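As a toy illustration of the wisdom-of-the-crowd effect (the numbers below are invented for this sketch, not taken from the paper), averaging many independent noisy estimates typically lands closer to the truth than a randomly chosen individual does:

```python
import statistics

# Hypothetical example: ten noisy individual estimates of a quantity
# whose true value is 100. These numbers are illustrative only.
estimates = [88, 112, 95, 104, 120, 91, 107, 99, 84, 103]

crowd_estimate = statistics.mean(estimates)          # pooled judgment
individual_errors = [abs(e - 100) for e in estimates]

# The pooled estimate (100.3, error 0.3) beats the average individual
# error (8.9) in this synthetic case.
print(crowd_estimate)
print(statistics.mean(individual_errors))
```

Crucially, this simple averaging presupposes a single underlying truth, which is exactly the assumption the paper relaxes.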

The Challenge of Diverging Beliefs

However, these technologies are often applied in domains where there is a lack of cultural consensus, with different subsets of people forming conflicting culturally constructed beliefs. For example, one subset may perceive a slang term as offensive, while another perceives it as funny. One subset may view face tattoos as taboo, whereas another may find them to be stylish and meaningful expressions of individual or group identity. One subset may avoid certain technologies because of privacy concerns, while another views them as vital for improving quality of life.

Cultural Consensus Theory

Cultural Consensus Theory (CCT) provides a statistical framework for information pooling in domains where there may be a lack of cultural consensus. It enables those who use it to infer the beliefs and attitudes that influence social practices and the degree to which respondents know or express those beliefs. Consensus models based on CCT then provide an opportunity to simultaneously study both individual and group-level differences by examining the extent to which a respondent conforms to the consensus within one or more subsets and facilitating the representation of how people differ in terms of their level of knowledge and response biases. Researchers have applied the CCT framework to find a practical and concise definition of beliefs that are accepted by a group sharing common knowledge.
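The core mechanics of consensus estimation with respondent competencies can be sketched as follows. This is a minimal, illustrative estimator for continuous responses on synthetic data, not the paper's actual model (which is Bayesian and includes shift and scale biases): it alternates between estimating the consensus answers and each respondent's competency, measured as agreement with the current consensus.

```python
import numpy as np

# Illustrative CCT-style estimator (synthetic data; not the paper's model).
rng = np.random.default_rng(0)
truth = rng.uniform(0, 1, size=20)              # latent consensus over 20 items
noise = np.array([0.05, 0.05, 0.1, 0.4, 0.4])   # two careful, three noisy raters
responses = truth + rng.normal(0.0, 1.0, size=(5, 20)) * noise[:, None]

consensus = responses.mean(axis=0)              # initialize with the plain average
for _ in range(10):
    # Competency ~ inverse mean squared deviation from the current consensus.
    mse = ((responses - consensus) ** 2).mean(axis=1)
    weights = (1.0 / mse) / (1.0 / mse).sum()
    consensus = weights @ responses             # competency-weighted pooling

print(weights.round(3))                         # careful raters receive more weight
```

Note that this sketch still assumes one consensus; CCT's full machinery, and the extension described below, is what allows multiple conflicting consensuses to be detected.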

However, classical CCT is unworkable for modern applications of data-driven decision-making because it cannot generalize across even highly similar contexts, is ineffective with sparse data, and can leverage neither external knowledge bases nor learned machine representations as contextual data sources.

Machine representations

Deep learning algorithms can analyze vast quantities of data from various modalities, including text, images, and audio, and identify patterns and relationships that generalize beyond the training data. Leveraging these powerful techniques, it is now possible to create vector-feature representations of words, sentences, visual scenes, and images of objects. These high-dimensional representations, at times, approximate human mental representations. Although they are not comprehensive theories of human cognition, vector representations of various real-world objects and concepts have been used as inputs to linear models that can predict individual and aggregate evaluations on a wide range of topics, including perceived risk, first impressions based on faces, perceptions of leadership, and evaluation of creative writing.
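The linear-readout approach described above can be sketched with synthetic stand-ins (the embeddings and ratings below are randomly generated placeholders for a pretrained network's features and a group's aggregate judgments, not real data): each entity's embedding serves as the feature vector for a ridge regression that predicts the average evaluation.

```python
import numpy as np

# Synthetic stand-ins: embeddings mimic pretrained features; ratings mimic
# aggregate human judgments (e.g., perceived risk) that are linearly
# decodable from those features plus noise.
rng = np.random.default_rng(1)
n_entities, dim = 200, 16
embeddings = rng.normal(size=(n_entities, dim))
true_w = rng.normal(size=dim)
ratings = embeddings @ true_w + rng.normal(0.0, 0.1, n_entities)

# Ridge regression in closed form: w = (X^T X + lam * I)^{-1} X^T y
lam = 1.0
w = np.linalg.solve(embeddings.T @ embeddings + lam * np.eye(dim),
                    embeddings.T @ ratings)

predictions = embeddings @ w
r = np.corrcoef(predictions, ratings)[0, 1]
print(round(r, 3))   # embeddings linearly predict the aggregate judgments
```

A single weight vector like `w` encodes one aggregate judge; the limitation discussed next is precisely that such a model has no notion of distinct cultural groups.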

Although these models are increasingly successful in predicting behavioral and physiological responses of humans, the implicit notion of “human” that they rely upon often glosses over individual-level differences in subjective beliefs, attitudes, and associations, as well as group-level cultural constructs.

Infinite Deep Latent Cultural Consensus Theory

To combine the strengths of CCT and machine-learning methods while addressing their respective limitations, we propose the iDLC-CCT, an extension to the CCT that (1) allows culturally held beliefs to take the form of a deep latent construct: a fine-tuned deep neural network that maps the features of a concept or entity to the consensus response among a subset of respondents, and (2) draws these deep latent constructs from a Dirichlet Process using the stick-breaking construction. The approach, therefore, aligns pretrained machine representations to both group- and individual-level judgments, effectively capturing variations in belief processes and behaviors across them under a multiplicity of “ground truths.” We evaluate the iDLC-CCT on people’s judgments of various phenomena, including risk sources, leadership effectiveness, first impressions of faces, and humor. These datasets were chosen because they include respondents’ individual responses to questions (items) that are related to their shared knowledge or beliefs and because the nature of the domain is such that a consensus contributes to an intersubjective truth.
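The stick-breaking construction used to draw the deep latent constructs from a Dirichlet process can be sketched as follows (truncated to finitely many components for computation; the concentration value here is arbitrary): a unit stick is repeatedly broken, and piece k, with length beta_k times the product of (1 - beta_j) for j < k, where beta_k ~ Beta(1, alpha), gives the prior weight of the k-th consensus component.

```python
import numpy as np

def stick_breaking(alpha, truncation, rng):
    """Truncated stick-breaking weights for a Dirichlet process prior."""
    betas = rng.beta(1.0, alpha, size=truncation)
    # Length of stick remaining before each break: prod_{j<k} (1 - beta_j).
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    return betas * remaining

rng = np.random.default_rng(2)
weights = stick_breaking(alpha=2.0, truncation=50, rng=rng)
print(weights.sum())   # close to 1; the tiny remainder is the truncated tail
```

Because the number of nonnegligible weights is inferred rather than fixed, the model need not commit in advance to how many distinct consensuses the respondents sustain.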

Findings

Our results demonstrate that the iDLC-CCT effectively aggregates beliefs from individuals, including experts and non-experts, to estimate consensus while identifying idiosyncratic and group-level differences in cultural constructs. By incorporating features from deep neural networks into CCT, we can estimate cultural consensus for any entity using pre-trained networks or other available embeddings. The iDLC-CCT is a robust foundation for assessing group consensus levels by leveraging the underlying structure and inter-relatedness of beliefs, and a foundation for consensus-aware technologies. Our findings reveal that accounting for group-level variation in consensus enhances predictive accuracy and effectively harnesses CI for organizational decision-making and collaboration.

Between the lines

As machine learning advancements become increasingly prominent in our lives, they often assume a singular consensus opinion. Future generations of technologies must account for variations in culturally constructed consensus beliefs.

A promising application of CCT is its integration into AI systems to enhance their interpretability. By leveraging CCT’s unique capability to interpret shared beliefs within cultural contexts, we can equip AI systems with a more culturally nuanced understanding of consensus beliefs. This integration could significantly enhance these models’ generalizability. It could also increase the transparency of AI systems by rendering their decision-making processes more intelligible; decisions could be attributed to culturally shared beliefs or norms.

The iDLC-CCT can also support the consensus-building process by surfacing the causes of disagreement between respondents, whether individual-level parameters, such as shift and scale biases or competency, or culture-level parameters, such as the consensus. However, using iDLC-CCT for consensus-building may be challenging under adversarial behavior. Consider, for example, voting mechanisms or other consensus-building methods, where knowledge of the aggregation scheme could potentially allow individuals or groups to strategically manipulate the system by skewing their responses, misreporting their preferences, or coordinating their actions with others to drive the consensus toward a desired value.



About Us


Founded in 2018, the Montreal AI Ethics Institute (MAIEI) is an international non-profit organization equipping citizens concerned about artificial intelligence and its impact on society to take action.

  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.