Montreal AI Ethics Institute

Democratizing AI ethics literacy

Harnessing Collective Intelligence Under a Lack of Cultural Consensus

January 27, 2024

🔬 Research Summary by Necdet Gürkan, a Ph.D. candidate at the Stevens Institute of Technology who works with Jordan W. Suchow to explore the intersection of cognitive science and information systems.

[Original paper by Necdet Gürkan and Dr. Jordan W. Suchow]


Overview: Harnessing collective intelligence (CI) to solve complex problems benefits from the ability to detect and characterize heterogeneity in consensus beliefs. This is particularly true in domains where a consensus among respondents defines an intersubjective “ground truth,” leading to a multiplicity of ground truths when subsets of respondents sustain mutually incompatible consensuses. In this paper, we extend Cultural Consensus Theory, a classic mathematical framework for detecting divergent consensus beliefs, to allow culturally held beliefs to take the form of a deep latent construct: a fine-tuned deep neural network that maps features of a concept or entity to the consensus response among a subset of respondents via a stick-breaking construction.


Introduction

Opinions and judgments of a group often represent either objective truths, like image annotations, or subjective views, like product ratings. However, in many cases, a consensus among respondents establishes an intersubjective “ground truth,” allowing for multiple truths if respondents hold conflicting consensuses. For example, a computer vision algorithm might predict a label from an image, not acknowledging that respondents might have varying beliefs about the appropriate label. Similarly, a natural language processing tool might assess if content aligns with community norms or is hate speech without considering varying acceptable standards. Many concepts are influenced by personal and societal beliefs, leading to potentially differing perspectives.

Social scientists and organizations currently study group-level variation, but their methods do not leverage machine-learning techniques for insight. Meanwhile, computer scientists and behavioral scientists use machine-learning methods to model human behavior without considering conflicting culturally constructed beliefs.

This paper addresses these limitations across disciplines by enhancing the Cultural Consensus Theory (CCT)—a classic mathematical model for studying consensus beliefs—with a latent mapping. This mapping uses pretrained deep neural network embeddings of entities to capture the consensus beliefs about those entities among one or more subsets of respondents.

Key Insights

Collective Intelligence – Wisdom of the Crowd

Collective intelligence refers to decision-making processes that draw on the opinions of multiple individuals. This often results in higher-quality decisions compared to those made by a single individual, a phenomenon commonly termed the “wisdom of the crowd”. The long-standing practice of employing software to consolidate ideas and opinions from groups of various sizes is evident in using decision support systems, knowledge management systems, and other related tools. Applications of these technologies include algorithmic information-pooling mechanisms, data enrichment through human image and text annotations, sentiment analysis, recommendation through collaborative filtering, and decentralized content moderation.
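The statistical intuition behind the “wisdom of the crowd” can be sketched in a few lines: when individual errors are independent, averaging cancels much of the noise, so the pooled estimate typically beats the average individual. The true value, noise level, and crowd size below are purely illustrative, not drawn from any study.

```python
import random

random.seed(0)
true_value = 100.0

# Each respondent reports the true value plus independent noise.
estimates = [true_value + random.gauss(0, 20) for _ in range(500)]

# Pool by simple averaging.
crowd_estimate = sum(estimates) / len(estimates)

crowd_error = abs(crowd_estimate - true_value)
mean_individual_error = sum(abs(e - true_value) for e in estimates) / len(estimates)

# The crowd's error shrinks roughly with the square root of the crowd size,
# while the typical individual's error does not.
print(crowd_error, mean_individual_error)
```

This cancellation is exactly what breaks down when errors are not independent noise but reflect systematically divergent beliefs, which is the problem the next section turns to.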

The Challenge of Diverging Beliefs

However, these technologies are often applied in domains that lack cultural consensus, where different subsets of people hold conflicting culturally constructed beliefs. For example, one subset may perceive a slang term as offensive, while another perceives it as funny. One subset may view face tattoos as taboo, whereas another may find them stylish and meaningful expressions of individual or group identity. One subset may avoid certain technologies because of privacy concerns, while another views them as vital for improving quality of life.

Cultural Consensus Theory

Cultural Consensus Theory (CCT) provides a statistical framework for information pooling in domains where cultural consensus may be lacking. It enables its users to infer the beliefs and attitudes that influence social practices and the degree to which respondents know or express those beliefs. Consensus models based on CCT make it possible to study individual and group-level differences simultaneously: they examine the extent to which a respondent conforms to the consensus within one or more subsets, and they represent how people differ in their levels of knowledge and response biases. Researchers have applied the CCT framework to find practical, concise definitions of the beliefs accepted by a group that shares common knowledge.
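CCT's central move can be illustrated with a toy simulation: respondent competence is recoverable from inter-respondent agreement alone, without access to any answer key. The sketch below uses a crude proxy (agreement with the unweighted majority answer) rather than the formal CCT estimator, and all competence values and item counts are hypothetical.

```python
import random

random.seed(1)
n_items, n_resp = 200, 9

# Hypothetical setup: one shared "answer key" (the consensus), and respondents
# whose competence is the probability they know, rather than guess, each answer.
truth = [random.random() < 0.5 for _ in range(n_items)]
competence = [0.9, 0.85, 0.8, 0.75, 0.7, 0.65, 0.6, 0.55, 0.5]

def respond(c, t):
    # With probability c the respondent knows the answer; otherwise they guess.
    return t if random.random() < c else (random.random() < 0.5)

data = [[respond(c, t) for t in truth] for c in competence]

# Proxy for CCT's competence inference: agreement with the majority answer.
majority = [sum(r[i] for r in data) > n_resp / 2 for i in range(n_items)]
est = [sum(a == m for a, m in zip(r, majority)) / n_items for r in data]

# Estimated competences track the true ones, recovered purely from agreement.
print(est)
```

A competence-weighted aggregate of the responses would then outperform a simple majority vote, which is the practical payoff of the framework.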

However, classic CCT is unworkable for modern applications of data-driven decision-making: it cannot generalize across even highly similar contexts, is ineffective with sparse data, and can leverage neither external knowledge bases nor learned machine representations as contextual data sources.

Machine representations

Deep learning algorithms can analyze vast quantities of data from various modalities, including text, images, and audio, and identify patterns and relationships that generalize beyond the training data. Leveraging these powerful techniques, it is now possible to create vector-feature representations of words, sentences, visual scenes, and images of objects. These high-dimensional representations, at times, approximate human mental representations. Although they are not comprehensive theories of human cognition, vector representations of various real-world objects and concepts have been used as inputs to linear models that can predict individual and aggregate evaluations on a wide range of topics, including perceived risk, first impressions based on faces, perceptions of leadership, and evaluation of creative writing.
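The linear-readout approach described above can be sketched with closed-form ridge regression on synthetic data. The embedding dimensionality, the synthetic “aggregate rating,” and the regularization strength are all assumed here for illustration; real applications would use embeddings from a pretrained network and human judgments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 200 entities with 16-d "pretrained" embeddings, and an
# aggregate human rating that is noisily linear in those features.
X = rng.normal(size=(200, 16))
w_true = rng.normal(size=16)
y = X @ w_true + rng.normal(scale=0.5, size=200)

# Closed-form ridge regression: w = (X^T X + lam * I)^{-1} X^T y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(16), X.T @ y)

pred = X @ w
r = np.corrcoef(pred, y)[0, 1]
print(r)  # correlation between predicted and observed ratings
```

Note that a single readout like this predicts one aggregate judgment per entity, which is precisely the "singular consensus" assumption the paper sets out to relax.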

Although these models are increasingly successful in predicting behavioral and physiological responses of humans, the implicit notion of “human” that they rely upon often glosses over individual-level differences in subjective beliefs, attitudes, and associations, as well as group-level cultural constructs.

Infinite Deep Latent Cultural Consensus Theory

To combine the strengths of CCT and machine-learning methods while addressing their respective limitations, we propose the iDLC-CCT, an extension to the CCT that (1) allows culturally held beliefs to take the form of a deep latent construct: a fine-tuned deep neural network that maps the features of a concept or entity to the consensus response among a subset of respondents, and (2) draws these deep latent constructs from a Dirichlet Process using the stick-breaking construction. The approach, therefore, aligns pretrained machine representations to both group- and individual-level judgments, effectively capturing variations in belief processes and behaviors across them under a multiplicity of “ground truths.” We evaluate the iDLC-CCT on people’s judgments of various phenomena, including risk sources, leadership effectiveness, first impressions of faces, and humor. These datasets were chosen because they include respondents’ individual responses to questions (items) that are related to their shared knowledge or beliefs and because the nature of the domain is such that a consensus contributes to an intersubjective truth.
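The stick-breaking construction mentioned above generates Dirichlet Process mixture weights by repeatedly breaking off Beta-distributed fractions of a unit-length stick, which lets the model allocate respondents to an effectively unbounded number of consensus clusters. A minimal truncated version follows; the truncation level and concentration parameter are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def stick_breaking(alpha, k):
    """Truncated stick-breaking weights for a Dirichlet Process:
    beta_j ~ Beta(1, alpha), pi_j = beta_j * prod_{i<j} (1 - beta_i)."""
    betas = rng.beta(1.0, alpha, size=k)
    # Length of stick remaining before each break.
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas)[:-1]])
    return betas * remaining

pi = stick_breaking(alpha=2.0, k=50)
# Weights are nonnegative, sum to (nearly) 1, and decay stochastically;
# a smaller alpha concentrates mass on fewer consensus clusters.
print(pi[:5], pi.sum())
```

In the iDLC-CCT, each such weight corresponds to a candidate consensus (a deep latent construct), so the number of distinct “ground truths” is inferred from the data rather than fixed in advance.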

Findings

Our results demonstrate that the iDLC-CCT effectively aggregates beliefs from individuals, including experts and non-experts, to estimate consensus while identifying idiosyncratic and group-level differences in cultural constructs. By incorporating features from deep neural networks into CCT, we can estimate cultural consensus for any entity using pre-trained networks or other available embeddings. The iDLC-CCT is a robust foundation for assessing group consensus levels by leveraging the underlying structure and inter-relatedness of beliefs, and a foundation for consensus-aware technologies. Our findings reveal that accounting for group-level variations in consensus enhances predictive accuracy and effectively harnesses CI for organizational decision-making and collaboration.

Between the lines

As machine learning advancements become increasingly prominent in our lives, they often assume a singular consensus opinion. Future generations of technologies must account for variations in culturally constructed consensus beliefs.

A promising application of CCT is its integration into AI systems to enhance their interoperability. By leveraging CCT’s unique capability to interpret shared beliefs within cultural contexts, we can equip AI systems with a more culturally nuanced understanding of consensus beliefs. This integration could significantly enhance these models’ generalizability. The integration of CCT could also increase the transparency of AI systems by rendering their decision-making processes more intelligible; decisions could be attributed to culturally shared beliefs or norms.

The iDLC-CCT can also support the consensus-building process by surfacing the causes of disagreement between respondents, whether individual-level parameters, such as shift and scale biases or competency, or culture-level parameters, such as the consensus. However, using iDLC-CCT for consensus-building may be challenging under adversarial behavior. Consider, for example, voting mechanisms or other consensus-building methods, where knowledge of the aggregation scheme could potentially allow individuals or groups to strategically manipulate the system by skewing their responses, misreporting their preferences, or coordinating their actions with others to drive the consensus toward a desired value.
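The vulnerability described above is easy to demonstrate with a generic aggregation example: a small coordinated bloc that knows the aggregation scheme can drag a mean far more than a median. The numbers are hypothetical, and the median is shown only as a familiar robust aggregator, not as the iDLC-CCT's mechanism.

```python
import statistics

# Honest responses clustered around a consensus of about 5.
honest = [4, 5, 5, 6, 5, 4, 6, 5]

# A coordinated bloc of three reports the extreme to skew the aggregate.
adversarial = honest + [10, 10, 10]

mean_shift = statistics.mean(adversarial) - statistics.mean(honest)
median_shift = statistics.median(adversarial) - statistics.median(honest)
print(mean_shift, median_shift)  # the mean moves; the median does not budge
```

Any consensus-building deployment therefore has to weigh how much the chosen aggregation scheme rewards strategic misreporting.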

© Montreal AI Ethics Institute 2024. This work is licensed under a Creative Commons Attribution 4.0 International License.