How Different Groups Prioritize Ethical Values for Responsible AI

May 28, 2023

🔬 Research Summary by Maurice Jakesch, a Ph.D. candidate at Cornell University, where he investigates the societal impact of AI systems that change human communication.

[Original paper by Maurice Jakesch, Zana Buçinca, Saleema Amershi, and Alexandra Olteanu]


Overview: AI ethics guidelines argue that values such as fairness and transparency are key to the responsible development of AI. However, less is known about the values a broader and more representative public cares about in the AI systems they may be affected by. This paper surveys a US-representative sample, a sample of AI practitioners, and a sample of crowdworkers about their value priorities for responsible AI.


Introduction

Private companies, public sector organizations, and academic groups have published AI ethics guidelines. These guidelines converge on five central values: transparency, fairness, safety, accountability, and privacy. But these values may differ from what a broader and more representative population would consider important for the AI technologies they interact with.

Prior research has shown that value preferences and ethical intuitions depend on people's backgrounds and personal experiences. As AI technologies are often developed by relatively homogeneous and demographically skewed groups, practitioners may unknowingly encode their biases and assumptions into their conception and operationalization of responsible AI.

This study develops an AI value survey to understand how groups differ in their value priorities for responsible AI and what values a representative public would emphasize.

Key Insights

The authors draw on empirical ethics and value elicitation research traditions to develop a survey. They ask participants to rate the importance of 12 responsible AI values in different deployment scenarios and field the survey with three groups (see the analysis sketch after this list):

  1. A US census-representative sample (N=743) to understand what values a broader public cares about in the AI systems they interact with. 
  2. A sample of AI practitioners (N=175) to test what values those who develop AI systems would prioritize.
  3. A sample of crowdworkers (N=755) to explore whether they can diversify ethical judgment in the AI development process.
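To make the study design concrete, here is a minimal sketch of how per-group value priorities could be computed from survey data of this shape. The file name (`ratings.csv`), column names, and rating scale are illustrative assumptions, not the authors' actual analysis pipeline:

```python
import pandas as pd

# Hypothetical long-format survey data: one row per (participant, value) rating.
# Assumed columns: group ("public" / "practitioner" / "crowdworker"),
# value (one of the 12 responsible AI values), importance (a numeric rating).
ratings = pd.read_csv("ratings.csv")

# Mean importance of each value within each group.
group_means = (
    ratings.groupby(["group", "value"])["importance"]
    .mean()
    .unstack("group")
)

# Rank values within each group (rank 1 = most important to that group).
group_ranks = group_means.rank(ascending=False)

# Values whose rank differs most across groups are the contested priorities.
disagreement = (group_ranks.max(axis=1) - group_ranks.min(axis=1)).sort_values(ascending=False)
print(disagreement)
```

Under these assumptions, the final ranking-spread table surfaces exactly the kind of cross-group disagreement the findings below describe.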

The findings show that different groups perceive and prioritize responsible AI values differently. AI practitioners, on average, rated responsible AI values as less important than the other groups did. At the same time, AI practitioners prioritized fairness more often than participants from the US census-representative sample, who instead emphasized safety, privacy, and performance. The results highlight the need for AI practitioners to contextualize and probe their ethical intuitions and assumptions.

The authors also find differences in value priorities along demographic lines. For example, women and Black respondents rated responsible AI values as more important than other respondents did. The most contested trade-off was between fairness and performance: surprisingly, participants reporting past experiences of discrimination did not prioritize fairness more than others, while liberal-leaning participants prioritized fairness and conservative-leaning participants tended to prioritize performance.
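As a hedged illustration of how such a demographic difference might be probed statistically (reusing the hypothetical `ratings.csv` from the sketch above, plus a hypothetical `leaning` column; this is not the paper's actual analysis), a two-sample comparison could look like:

```python
import pandas as pd
from scipy.stats import ttest_ind

ratings = pd.read_csv("ratings.csv")  # hypothetical file, as in the sketch above

# Compare how important "fairness" is rated across self-reported political
# leanings (hypothetical "leaning" column with values "liberal"/"conservative").
fairness = ratings[ratings["value"] == "fairness"]
liberal = fairness.loc[fairness["leaning"] == "liberal", "importance"]
conservative = fairness.loc[fairness["leaning"] == "conservative", "importance"]

# Welch's t-test: does mean fairness importance differ between the two groups?
t_stat, p_value = ttest_ind(liberal, conservative, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```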

Between the lines

The results empirically corroborate a commonly raised concern: AI practitioners’ priorities for responsible AI are not representative of the value priorities of the wider US population. Groups not only judge specific behaviors and technical details differently; they also disagree on the importance of the values at the core of responsible AI.

The disagreement in value priorities highlights the importance of paying attention to who defines what constitutes “ethical” or “responsible” AI. AI ethics guidelines may emphasize different values depending on who writes them and who is consulted. Representation matters, and consulting populations outside the West about their priorities for responsible AI would likely surface even starker disagreement about what responsible AI should be.

