Montreal AI Ethics Institute

Democratizing AI ethics literacy


How Different Groups Prioritize Ethical Values for Responsible AI

May 28, 2023

🔬 Research Summary by Maurice Jakesch, a Ph.D. candidate at Cornell University, where he investigates the societal impact of AI systems that change human communication.

[Original paper by Maurice Jakesch, Zana Buçinca, Saleema Amershi, and Alexandra Olteanu]


Overview: AI ethics guidelines argue that values such as fairness and transparency are key to the responsible development of AI. However, less is known about the values a broader and more representative public cares about in the AI systems they may be affected by. This paper surveys a US-representative sample and AI practitioners about their value priorities for responsible AI. 


Introduction

Private companies, public sector organizations, and academic groups have published AI ethics guidelines. These guidelines converge on five central values: transparency, fairness, safety, accountability, and privacy. But these values may differ from what a broader and more representative population considers important for the AI technologies they interact with.

Prior research has shown that value preferences and ethical intuitions depend on people's backgrounds and personal experiences. Because AI technologies are often developed by relatively homogeneous, demographically skewed groups, practitioners may unknowingly encode their biases and assumptions into their conception and operationalization of responsible AI.

This study develops an AI value survey to understand how groups differ in their value priorities for responsible AI and what values a representative public would emphasize.

Key Insights

The authors draw on empirical ethics and value elicitation research traditions to develop a survey. They ask participants about the importance of 12 responsible AI values in different deployment scenarios and field the survey with three groups: 

  1. A US census-representative sample (N=743) to understand what values a broader public cares about in the AI systems they interact with. 
  2. A sample of AI practitioners (N=175) to test what values those who develop AI systems would prioritize.
  3. A sample of crowdworkers (N=755) to explore whether they can diversify ethical judgment in the AI development process.
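As a toy illustration only (not the authors' analysis code, and with made-up ratings), the kind of group-level comparison the survey enables can be sketched by averaging importance ratings per group and value:

```python
# Hypothetical sketch: comparing mean importance ratings for responsible AI
# values across respondent groups. Data and scale (1-7) are invented here.
from collections import defaultdict


def mean_ratings(records):
    """Return {group: {value: mean rating}} from (group, value, rating) tuples."""
    by_group = defaultdict(lambda: defaultdict(list))
    for group, value, rating in records:
        by_group[group][value].append(rating)
    return {
        g: {v: sum(rs) / len(rs) for v, rs in vals.items()}
        for g, vals in by_group.items()
    }


# Each record: (respondent group, value, importance rating) -- toy data.
responses = [
    ("us_rep", "safety", 7), ("us_rep", "safety", 6),
    ("us_rep", "fairness", 5), ("us_rep", "fairness", 4),
    ("practitioner", "safety", 5),
    ("practitioner", "fairness", 7), ("practitioner", "fairness", 6),
]

means = mean_ratings(responses)
# In this invented data, practitioners rate fairness higher on average while
# the representative sample rates safety higher -- mirroring only the
# *direction* of the contrast the paper reports.
print(means["practitioner"]["fairness"])  # 6.5
print(means["us_rep"]["safety"])          # 6.5
```

The actual study compares such group-level priorities across 12 responsible AI values and multiple deployment scenarios; this sketch shows only the basic aggregation step.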

The findings show that different groups perceive and prioritize responsible AI values differently. AI practitioners, on average, rated responsible AI values as less important than the other groups did. At the same time, AI practitioners prioritized fairness more often than participants from the US census-representative sample, who emphasized safety, privacy, and performance. The results highlight the need for AI practitioners to contextualize and probe their ethical intuitions and assumptions.

The authors also find differences in value priorities along demographic lines. For example, women and Black respondents rated responsible AI values as more important than other groups did. The most contested value trade-off was between fairness and performance. Surprisingly, participants reporting past experiences of discrimination did not prioritize fairness more than others; instead, liberal-leaning participants prioritized fairness, while conservative-leaning participants tended to prioritize performance.

Between the lines

The results empirically corroborate a commonly raised concern: AI practitioners’ priorities for responsible AI are not representative of the value priorities of the wider US population. They show that different groups differ in their judgment of specific behaviors and technical details and disagree on the importance of the values at the core of responsible AI.

The disagreement in value priorities highlights the importance of paying attention to who defines what constitutes “ethical” or “responsible” AI. AI ethics guidelines may emphasize different values depending on who writes them and who is consulted. Representation matters, and consulting populations outside the West about their priorities for responsible AI would surface even starker disagreement about what responsible AI should be about.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
