Montreal AI Ethics Institute

Democratizing AI ethics literacy

Going public: the role of public participation approaches in commercial AI labs

July 26, 2023

🔬 Research Summary by Lara Groves, a Researcher at the Ada Lovelace Institute, where she researches emerging AI accountability mechanisms and practices.

[Original paper by Lara Groves, Aidan Peppin, Andrew Strait, and Jenny Brennan]


Overview: What’s the state of public participation in the AI industry? Our paper explores attitudes and approaches to public participation in commercial AI labs. While tech industry discourse frequently adopts the language of participation in calls to ‘democratize AI’ (and similar), this may not match the reality of practices in these companies.


Introduction

"The future I would like to see is where access to AI is super democratized."

Sam Altman, OpenAI CEO, 2023

The commercial AI industry has considerable influence over the terrain of ethical or responsible AI practices, alongside an increasing monopolization of AI development. Against this backdrop, calls for public participation in AI have grown louder. While these calls sound admirable on their face, there's a need to probe them and uncover the reality of public participation approaches on the ground, to better assess their opportunities and limitations.

As researchers exploring the role of public voice in AI and the accountability dynamics between technology developers and the people affected by tech, my coauthors and I identified the question of "What public participation is being conducted in the AI industry?" as underexplored. To better understand the state of play, we conducted interviews with both industry practitioners with a stake in 'participatory AI' and public participation experts to answer the following research questions:

  • How do commercial AI labs understand public participation in the development of their products and research? 
  • What approaches to public participation do commercial AI labs adopt? 
  • What obstacles/challenges do labs face when implementing these approaches?

Though we find broad support for public participation, there is little evidence of concerted effort to adopt participatory approaches across the industry. Lacking incentives, commercial labs struggle to adopt participatory approaches that are meaningful for both participants and companies. Through this research, we shed light on a research gap, offering novel empirical evidence on the emerging intersection of 'public participation' and 'commercial AI.'

Key Insights

AI for the people, by the people?

In AI ethics, we’ve witnessed a concerted turn toward participatory and deliberative approaches to AI development and oversight. Proponents of these methods argue that the ‘wisdom of the crowd’ might help technologies better serve the public interest and offer knowledge and experience that the developers building AI might lack. The signifiers ‘participation,’ ‘inclusion,’ and ‘community’ allude to democratic values and take on an attractive allure. Few would challenge the idea of ‘more or better’ participation in principle. In practice, however, it is difficult to pin down a single definition of ‘participation,’ how it is supposed to function, and who it is for.

Because of the conceptual capaciousness of ‘participatory AI’ and the normative sheen to the language of participation, there’s a real need to interrogate the underlying motivations for, and practices of, public participation to get a better sense of the lay of the land. Dissensus over the aims and value of participation is not unique to the AI industry, but a focus on participation in the commercial context raises some interesting considerations worth exploring. What are the business incentives to adopt public participation, for example?

The business of participation 

Given the appeal of participation (as set out above), it’s perhaps unsurprising that practitioners view it favorably. Our interviewees put forward two arguments for public participation: first, that participation might instrumentalize societal goals such as inclusion, fairness, and accountability; and second, that participation may support the cut and thrust of commercial business missions. The latter argument speaks to the idea that more widespread input or feedback might translate into higher-quality products, or products that simply ‘work’ for more people (speaking more directly to the profit motive). The former argument, less concerned with improved technological outcomes than with whether participation could be a harbinger of social change, was put forward as a laudable goal by nearly everybody we spoke to, but described as incredibly difficult to mediate in a commercial environment.

Some practitioners shared participatory projects they’d worked on directly or witnessed in use across the sector, but we found little convergence around a particular set of practices. Some 19 different methods were mentioned in the interviews as potentially falling under the banner of ‘participation.’ Crucially, even accounting for practitioners’ apprehension about speaking candidly about their work, we find that hardly any public participation is undertaken in the industry.

Numerous obstacles were cited as contributing to the slow uptake of these methods. Many practitioners expressed concern about a perceived lack of conditions suitable for meaningful practice: rigid development deadlines, restricted budgets, and insufficient coordination among relevant teams were all said to curb ambition. Of particular relevance to the current ‘AI spring,’ we find that practitioners are apprehensive about embedding public participation into generative AI or general-purpose research, which often lacks a clear use context. One question was raised repeatedly: how do you get members of the public to comprehensively deliberate on or evaluate a foundation model like GPT-4, which may have innumerable downstream applications and impacts?

Many of these challenges lack a clear path forward. They will require rigorous and iterative collaboration and a realignment of incentives at both the firm and the industry levels. 

Between the lines

With this paper, we intend to highlight the lay of the land in an emerging research area and add empirical color to some of the tech industry discourse around ‘participatory AI’ and ‘democratizing AI.’ To make sense of these findings, it’s useful to situate them in the context of current field-level dynamics. 

We note a limitation of our study: participation advocates in these environments already occupy a small niche of the overall practitioner population, and are often siloed across different teams. As a result, we struggled to gain access to the right people: many were reluctant to share potentially identifiable information, and many more declined to participate, citing burnout.

Major tech companies are battling in an intense ‘AI arms race,’ rapidly building ever larger and more powerful systems. At the same time, many are touting the benefits of public input into AI development and expressing interest in implementing participatory methods (see Meta’s Community Forums project and OpenAI’s recent call for proposals for ‘democratic input in AI’). We know from the long history of public participation in other domains that this work is (necessarily) demanding, requiring careful planning and proper resourcing. Coupled with a dreary economic outlook and dwindling ethics and social science expertise in these spaces (some companies have fired their entire ethics teams in recent months), this leaves a considerable challenge: ensuring that participatory methods in the industry have meaningful impact for companies and, more importantly, for people and society.

In light of these trends, we expect the industry to spearhead methods that are more cost- and time-efficient to execute, akin to the deliberative polling via online platforms trialed by Meta, to garner individual opinions on particular tech design decisions or policies.

This approach mimics the long tail of user research or user testing, emphasizing the views of would-be consumers; what is often missing is the question of the societal need for a technology, particularly for groups and communities. We see an important role for civil society, academic researchers, activists, and community leaders in helping to guide the conversation and practice, so that the technology industry does not unilaterally set the tone and terrain for ‘participatory AI.’ At this juncture, we need clear-eyed, evidence-driven exploration of participatory practice to make sense of the opportunities that public participation in AI can bring.

