🔬 Research Summary by Lara Groves, a Researcher at the Ada Lovelace Institute, where she researches emerging AI accountability mechanisms and practices
[Original paper by Lara Groves, Aidan Peppin, Andrew Strait, and Jenny Brennan]
Overview: What’s the state of public participation in the AI industry? Our paper explores attitudes and approaches to public participation in commercial AI labs. While tech industry discourse frequently adopts the language of participation in calls to ‘democratize AI’ (and similar), this may not match the reality of practices in these companies.
“The future I would like to see is where access to AI is super democratized.” — Sam Altman, OpenAI CEO, 2023
The commercial AI industry exerts considerable influence over the terrain of ethical and responsible AI practice, and AI development is increasingly concentrated in a handful of companies. Against this backdrop, calls for public participation in AI have grown louder. While these calls sound admirable on their face, there is a need to probe the claims behind them and uncover the reality of public participation on the ground to better assess its opportunities and limitations.
As researchers exploring the role of public voice in AI and the accountability dynamics between technology developers and the people affected by tech, my coauthors and I identified the question “What public participation is being conducted in the AI industry?” as underexplored. To better understand the state of play, we conducted interviews with both industry practitioners with a stake in ‘participatory AI’ and public participation experts to answer the following research questions:
- How do commercial AI labs understand public participation in the development of their products and research?
- What approaches to public participation do commercial AI labs adopt?
- What obstacles/challenges do labs face when implementing these approaches?
Though we find broad support for public participation, there is little evidence of concerted effort to adopt participatory approaches across the industry. Lacking clear incentives, commercial labs struggle to adopt meaningful participatory approaches that are impactful for both participants and companies. Through this research, we shed light on a research gap, offering novel empirical evidence on the emerging intersection between ‘public participation’ and ‘commercial AI.’
AI for the people, by the people?
In AI ethics, we’ve witnessed a concerted turn toward the potential of participatory and deliberative approaches in AI development and oversight. Proponents of these methods argue that the ‘wisdom of the crowd’ might help technologies better serve the public interest and supply the knowledge and experience that technology developers building AI might lack. The signifiers of ‘participation,’ ‘inclusion,’ and ‘community’ allude to democratic values and lend these methods an attractive allure. Few would challenge the idea of ‘more or better’ participation, in principle. In practice, however, it is difficult to pinpoint a single definition of ‘participation,’ how it’s supposed to function, and who it’s for.
Because of the conceptual capaciousness of ‘participatory AI’ and the normative sheen to the language of participation, there’s a real need to interrogate the underlying motivations for, and practices of, public participation to get a better sense of the lay of the land. Dissensus over the aims and value of participation is not unique to the AI industry, but a focus on participation in the commercial context raises some interesting considerations worth exploring. What are the business incentives to adopt public participation, for example?
The business of participation
Given the appeal of participation (as we set out above), it’s perhaps unsurprising that practitioners view it favorably. Our research interviewees put forward two supporting arguments for public participation: first, that participation might advance societal goals, such as inclusion, fairness, and accountability; and second, that participation may also support the cut and thrust of commercial business missions. The latter argument speaks to the idea that more widespread input or feedback might translate to higher-quality products, or products that simply ‘work’ for more people (speaking more directly to the profit motive). The former argument, less concerned with improved technological outcomes than with whether participation could be a harbinger of social change, was put forward as a laudable goal by nearly everybody we spoke to, but described as incredibly difficult to pursue in a commercial environment.
Some practitioners shared participatory projects they’d worked on directly or witnessed in use across the sector, but we found little convergence around a particular set of practices. Some 19 different methods were mentioned in the interviews as potentially falling under the banner of ‘participation.’ Crucially, even accounting for practitioners’ apprehension about speaking candidly about their work and practice, we find that hardly any public participation is undertaken in the industry.
Numerous obstacles were put forward as contributing to the slow uptake of these methods. Many practitioners expressed concern about a perceived lack of suitable conditions for creating meaningful practice: rigid development deadlines, restricted budgets, and insufficient coordination among relevant teams were all cited as curbing ambition. Of particular relevance to the current ‘AI spring,’ we find that practitioners are apprehensive about embedding public participation into generative AI or general-purpose research, which often lacks a clear use context. The question was raised repeatedly: how do you get members of the public to comprehensively deliberate on or evaluate a foundation model like GPT-4, which may have innumerable downstream applications and impacts?
Many of these challenges lack a clear path forward. They will require rigorous and iterative collaboration and a realignment of incentives at both the firm and the industry levels.
Between the lines
With this paper, we intend to highlight the lay of the land in an emerging research area and add empirical color to some of the tech industry discourse around ‘participatory AI’ and ‘democratizing AI.’ To make sense of these findings, it’s useful to situate them in the context of current field-level dynamics.
We note a limitation of our study: participation advocates in these environments already occupy a small niche of the overall practitioner population, often siloed across different teams. As a result, we struggled to gain access to the right people: many were reluctant to share potentially identifiable information, and many more declined to participate, citing burnout.
Major tech companies are battling in an intense ‘AI arms race,’ rapidly building ever larger and more powerful systems. At the same time, many are touting the benefits of public input into AI development and expressing interest in implementing participatory methods (see Meta’s Community Forums project and OpenAI’s recent call for proposals on ‘democratic input to AI’). We know from the long history of public participation in other domains that this work is (necessarily) demanding, requiring careful planning and proper resourcing. Coupled with a dreary economic outlook and dwindling ethics and social science expertise in these spaces (with some companies firing their entire ethics teams in recent months), we face a considerable challenge in ensuring that participatory methods in the industry will have meaningful impact for companies themselves and, more importantly, for people and society.
In light of these trends, we expect to see the industry spearheading methods that are more cost- and time-efficient to execute, akin to the online deliberative polling trialed by Meta, to garner individual opinions on certain tech design decisions or policies.
This approach mimics the long tail of user research or user testing, emphasizing the views of would-be consumers, but it often misses the question of whether society needs the technology at all, particularly for groups and communities. We see an important role for civil society, academic researchers, activists, and community leaders in helping to guide the conversation and practice, so that the technology industry does not unilaterally set the tone and terrain for ‘participatory AI.’ At this juncture, we need clear-eyed, evidence-driven exploration of participatory practice to make sense of the opportunities that public participation in AI can bring.