Montreal AI Ethics Institute


Democratizing AI ethics literacy


Going public: the role of public participation approaches in commercial AI labs

July 26, 2023

🔬 Research Summary by Lara Groves, a Researcher at the Ada Lovelace Institute, where she researches emerging AI accountability mechanisms and practices

[Original paper by Lara Groves, Aidan Peppin, Andrew Strait, and Jenny Brennan]


Overview: What’s the state of public participation in the AI industry? Our paper explores attitudes and approaches to public participation in commercial AI labs. While tech industry discourse frequently adopts the language of participation in calls to ‘democratize AI’ (and similar), this may not match the reality of practices in these companies.


Introduction

The future I would like to see is where access to AI is super democratized

Sam Altman, OpenAI CEO, 2023

The commercial AI industry exerts considerable influence over the terrain of ethical or responsible AI practice and is increasingly monopolizing AI development. Against this backdrop, calls for public participation in AI have grown louder. While these calls sound admirable on their face, there is a need to probe them and uncover the reality of public participation approaches on the ground to better assess their opportunities and limitations.

As researchers exploring the role of public voice in AI and the accountability dynamics between technology developers and the people affected by tech, my coauthors and I identified the question of “What public participation is being conducted in the AI industry?” as underexplored. To better understand the state of play, we conducted interviews with both industry practitioners with a stake in ‘participatory AI’ and public participation experts to answer the following research questions:

  • How do commercial AI labs understand public participation in the development of their products and research? 
  • What approaches to public participation do commercial AI labs adopt? 
  • What obstacles/challenges do labs face when implementing these approaches?

Though we found broad support for public participation in principle, we found little evidence of concerted effort to adopt participatory approaches across the industry. Lacking clear incentives, commercial labs struggle to adopt participatory approaches that are meaningful for both participants and companies. Through this research, we shed light on a research gap, offering novel empirical evidence on the emerging intersection of ‘public participation’ and ‘commercial AI.’

Key Insights

AI for the people, by the people?

In AI ethics, we’ve witnessed a concerted turn toward the potential for participatory and deliberative approaches in AI development and oversight. Proponents of these methods argue that the ‘wisdom of the crowd’ might help technologies better serve the public interest and offer knowledge and experience that the technology developers building AI might lack. The signifiers of ‘participation,’ ‘inclusion,’ and ‘community’ allude to democratic values and take on an attractive allure. Few would challenge the idea of ‘more or better’ participation in principle. In practice, however, it is difficult to pinpoint a single definition of ‘participation,’ how it’s supposed to function, and who it’s for.

Because of the conceptual capaciousness of ‘participatory AI’ and the normative sheen to the language of participation, there’s a real need to interrogate the underlying motivations for, and practices of, public participation to get a better sense of the lay of the land. Dissensus over the aims and value of participation is not unique to the AI industry, but a focus on participation in the commercial context raises some interesting considerations worth exploring. What are the business incentives to adopt public participation, for example?

The business of participation 

Given the appeal of participation (as set out above), it’s perhaps unsurprising that practitioners view it favorably. Our research interviewees put forward two arguments for public participation: first, that participation might advance societal goals such as inclusion, fairness, and accountability; and second, that it may support the cut and thrust of commercial business missions. The latter argument speaks to the idea that more widespread input or feedback might translate into higher-quality products, or products that simply ‘work’ for more people (speaking more directly to the profit motive). The former argument, less concerned with improved technological outcomes than with whether participation could be a harbinger of social change, was put forward as a laudable goal by nearly everyone we spoke to, but described as incredibly difficult to mediate in a commercial environment.

Some practitioners shared participatory projects they’d worked on directly or witnessed in use across the sector, but we found little convergence on a particular set of practices: some 19 different methods were mentioned in the interviews as potentially falling under the banner of ‘participation.’ Crucially, even accounting for practitioners’ reluctance to speak candidly about their work and practice, we find that hardly any public participation is actually undertaken in the industry.

Practitioners put forward numerous obstacles contributing to the slow uptake of these methods. Many expressed concern about a perceived lack of suitable conditions for meaningful practice: rigid development deadlines, restricted budgets, and insufficient coordination among relevant teams were all cited as curbing ambition. Of particular relevance to the current ‘AI spring,’ practitioners were apprehensive about embedding public participation into generative AI or general-purpose research, which often lacks a clear use context. One question was raised repeatedly: how do you get members of the public to comprehensively deliberate on or evaluate a foundation model like GPT-4, which may have innumerable downstream applications and impacts?

Many of these challenges lack a clear path forward. They will require rigorous and iterative collaboration and a realignment of incentives at both the firm and the industry levels. 

Between the lines

With this paper, we intend to highlight the lay of the land in an emerging research area and add empirical color to some of the tech industry discourse around ‘participatory AI’ and ‘democratizing AI.’ To make sense of these findings, it’s useful to situate them in the context of current field-level dynamics. 

We note a limitation of our study: participation advocates in these environments already occupy a small niche of the overall practitioner population and are often siloed across different teams. As a result, we struggled to gain access to the right people; many were reluctant to share potentially identifiable information, and many more declined to participate, citing burnout.

Major tech companies are battling in an intense ‘AI arms race,’ rapidly building ever larger and more powerful systems. At the same time, many are touting the benefits of public input into AI development and expressing interest in implementing participatory methods (see Meta’s Community Forums project and OpenAI’s recent call for proposals for ‘democratic input in AI’). We know from the long history of public participation in other domains that this work is (necessarily) demanding, requiring careful planning and proper resourcing. Coupled with a dreary economic outlook and dwindling ethics and social science expertise in these spaces (with some companies firing their entire ethics teams in recent months), this leaves a considerable challenge: ensuring that participatory methods in the industry have meaningful impact for companies themselves and, more importantly, for people and society.

In light of these trends, we expect to see the industry spearheading methods that are more cost- and time-efficient to execute, akin to the online deliberative polling trialed by Meta, to garner individual opinions on particular tech design decisions or policies.

This approach mimics the long tail of user research or user testing, emphasizing the gathering of input from would-be consumers; what is often missing is the question of society’s need for the technology, particularly as it concerns groups and communities. We see an important role for civil society, academic researchers, activists, and community leaders in guiding the conversation and practice, so that the technology industry does not unilaterally set the tone and terrain for ‘participatory AI.’ At this juncture, we need clear-eyed, evidence-driven exploration of participatory approaches to make sense of the opportunities that public participation in AI can bring.



