🔬 Research Summary by Fernando Delgado, a PhD candidate at Cornell University’s Bowers College of Computing and Information Sciences.
[Original paper by Fernando Delgado, Solon Barocas, Karen Levy]
Overview: Despite growing calls for participation in AI design, there are to date few empirical studies of what these processes look like and how they can be structured for meaningful engagement with domain stakeholders. In this paper, we examine a notable yet understudied AI design process in the legal domain that took place over a decade ago and made use of novel participatory tactics whose impact still informs legal automation efforts today.
Introduction
Across the private and public sectors, expanding participation in AI design and development is increasingly proposed as a strategy for mitigating risks brought about by the integration of AI systems into socially complex contexts. But what does an AI design process actually look like when you bring in domain stakeholders to play a central role? And what do you need as supporting infrastructure to make participatory AI design a successful endeavor?
In this work, we examine—through a historical and ethnographic lens—a multi-year design process that took place between 2006 and 2011, in which a range of participatory approaches was deployed to develop a method for automating attorney document review for evidentiary fact-finding. These participatory methods enabled litigators and other civil justice stakeholders to learn about the affordances and limitations of AI methods to such a degree that they were able to co-design an overarching sociotechnical workflow that, more than a decade later, still guides ML implementations in the legal technology space.
Key Insights
The forging of an interactive simulation methodology
Our analysis reveals how computer scientists and litigators jointly developed an interactive simulation methodology that drew on common task framework methods still popular in AI research, while also leveraging in-depth qualitative approaches to model the complex litigation practice being targeted for automation. A key feature of this interactive, simulation-based approach was giving domain stakeholders a central role not only in labeling data, but also in specifying the target classifications, informing the evaluation protocol, observing how technical teams gathered requirements and acted on them, and providing nuanced qualitative evaluations of classifier output to complement quantitative accuracy metrics.
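To make the evaluation side of this concrete, the sketch below illustrates one way quantitative accuracy metrics could be computed over expert relevance judgments while keeping each reviewer's qualitative note attached to the record. The data structures, field names, and sample judgments are hypothetical illustrations, not artifacts of the original study.

```python
# Hypothetical sketch: pairing quantitative accuracy metrics with
# qualitative reviewer feedback for a document-review classifier.
# All names and sample data are illustrative, not from the study.
from dataclasses import dataclass


@dataclass
class Judgment:
    doc_id: str
    predicted_responsive: bool   # classifier output
    attorney_responsive: bool    # domain-expert label (ground truth)
    reviewer_note: str = ""      # free-text qualitative assessment


def precision_recall_f1(judgments):
    # Count true positives, false positives, and false negatives
    tp = sum(j.predicted_responsive and j.attorney_responsive for j in judgments)
    fp = sum(j.predicted_responsive and not j.attorney_responsive for j in judgments)
    fn = sum(not j.predicted_responsive and j.attorney_responsive for j in judgments)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1


judgments = [
    Judgment("D001", True, True, "Clearly responsive; cites the disputed contract."),
    Judgment("D002", True, False, "Superficial keyword match only."),
    Judgment("D003", False, True, "Missed responsive email thread; relevance is contextual."),
]

p, r, f1 = precision_recall_f1(judgments)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
for j in judgments:
    print(j.doc_id, "-", j.reviewer_note)   # qualitative notes complement the metrics
```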
Fostering a cohort of cross-disciplinary experts
In large part thanks to this structured collaboration, ML-driven methods tailored to the requirements of litigation practice were developed that met the needs of the community and that remain in use today, a decade after the initial experimentation and design phase. Importantly, this process fostered a cohort of cross-disciplinary experts who learned how to translate and design effectively across computational and legal disciplinary boundaries, and who were instrumental to the early and principled integration of AI into U.S. civil litigation practice. This focus on stakeholder engagement and learning was key to developing an overarching AI design that has withstood the test of time.
Embracing the complexity of real-world tasks
Our analysis also finds that the coordinators of the multi-year design and evaluation effort were uniquely attuned and motivated to engage with the complexity of the real-world task. Rather than responding to the many obstacles they encountered by simplifying how the problem was formulated or reducing the scope of analysis to something immediately computationally tractable, the coordinators created or updated datasets, commissioned hypothetical complaints and document requests, and devised new administrative and evaluation protocols, all to better approximate the complexity of the real world and to reflect lessons learned from previous iterations. Their aim was to develop systems that could actually address the scalability challenges civil discovery was confronting in the face of exponential growth in digital data.
Between the lines
In a contemporary AI research and development landscape that places much of its focus on competitive quantitative benchmarking and often leaves out the perspective of domain stakeholders, we need to ask what we can learn from this precedent, in which stakeholders were brought in not only as a source of ground truth but also as collaborators who helped define requirements, establish evaluation protocols, question researcher assumptions, and holistically evaluate AI outputs. This naturally leads to questions about what is required to better incentivize AI researchers and designers to involve domain stakeholders throughout the AI innovation lifecycle, and how to better manage the impulse among AI researchers and designers to reduce the scope of real-world problems to what is immediately computationally tractable.
Our research in particular illustrates how questions of participation and problem formulation are intimately connected with how researchers and stakeholders are incentivized to work together over time and across institutions. Without institutional support, there is a real risk that our participatory interventions as AI researchers and designers are reduced to discrete interactions with users and stakeholders, lacking the sustained engagement needed to produce designs that adequately respond to the complexity of real-world practice.