Montreal AI Ethics Institute

Democratizing AI ethics literacy

An Uncommon Task: Participatory Design in Legal AI

October 5, 2022

🔬 Research Summary by Fernando Delgado, a PhD candidate at Cornell University’s Bowers College of Computing and Information Sciences.

[Original paper by Fernando Delgado, Solon Barocas, Karen Levy]


Overview: Despite growing calls for participation in AI design, there are to date few empirical studies of what these processes look like and how they can be structured for meaningful engagement with domain stakeholders. In this paper, we examine a notable yet understudied AI design process in the legal domain, one that took place over a decade ago and made use of novel participatory tactics whose impact still informs legal automation efforts today.


Introduction

Across the private and public sectors, expanding participation in AI design and development is increasingly proposed as a strategy for mitigating risks brought about by the integration of AI systems into socially complex contexts. But what does an AI design process actually look like when you bring in domain stakeholders to play a central role? And what do you need as supporting infrastructure to make participatory AI design a successful endeavor?

In this work, we examine, through a historical and ethnographic lens, a multi-year design process that took place between 2006 and 2011, in which a range of participatory approaches was deployed to develop a method for automating attorney document review in evidentiary fact-finding. These participatory methods enabled litigators and other civil justice stakeholders to learn about the affordances and limitations of AI methods to such a degree that they were able to co-design an overarching sociotechnical workflow that, more than a decade later, still guides ML implementations in the legal technology space.

Key Insights

The forging of an interactive simulation methodology

Our analysis reveals how computer scientists and litigators together developed an interactive simulation methodology that made use of the common task framework still popular in AI research, while also leveraging in-depth qualitative approaches to model the complex litigation practice targeted for automation. A key feature of this interactive, simulation-based approach was giving domain stakeholders a central role not only in labeling data, but also in specifying the target classifications, informing the evaluation protocol, observing how technical teams gathered requirements and acted on them, and providing nuanced qualitative evaluations of classifier output to complement quantitative accuracy metrics.
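To make the shape of that evaluation loop concrete, here is a minimal, purely illustrative sketch in Python. Nothing below comes from the paper itself; the ReviewedDocument structure and evaluate function are hypothetical, intended only to show how attorney relevance judgments can serve both as ground truth for quantitative metrics (precision and recall) and as a source of qualitative feedback on classifier disagreements.

```python
from dataclasses import dataclass

@dataclass
class ReviewedDocument:
    """A document labeled by an attorney during a simulated review exercise."""
    doc_id: str
    attorney_label: bool    # relevance call by the domain expert (ground truth)
    classifier_label: bool  # prediction from the ML system under evaluation
    attorney_notes: str = ""  # free-text qualitative assessment of the output

def evaluate(batch: list[ReviewedDocument]) -> dict:
    """Combine quantitative accuracy metrics with qualitative feedback."""
    tp = sum(d.attorney_label and d.classifier_label for d in batch)
    fp = sum(not d.attorney_label and d.classifier_label for d in batch)
    fn = sum(d.attorney_label and not d.classifier_label for d in batch)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # Surface attorney commentary on disagreements, not just the scores,
    # so stakeholders can question how the problem was formulated.
    disputes = [d.attorney_notes for d in batch
                if d.attorney_label != d.classifier_label and d.attorney_notes]
    return {"precision": precision, "recall": recall,
            "qualitative_feedback": disputes}
```

The design point the sketch is meant to surface: the attorneys' notes travel alongside the accuracy numbers rather than being discarded once labels are extracted, which mirrors the paper's emphasis on stakeholders evaluating outputs holistically rather than serving only as labelers.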

Fostering a cohort of cross-disciplinary experts

In large part thanks to this structured collaboration, ML-driven methods tailored to the requirements of litigation practice were developed that met the needs of the community and remain in use today, a decade after the initial experimentation and design phase. Importantly, this process fostered a cohort of cross-disciplinary experts who learned how to translate and design effectively across computational and legal disciplinary boundaries, and who were instrumental to the early and principled integration of AI into U.S. civil litigation practice. This focus on stakeholder engagement and learning was key to developing an overarching AI design that has withstood the test of time.

Embracing the complexity of real-world tasks

Our analysis also finds that the coordinators of the multi-year design and evaluation effort were uniquely attuned and motivated to engage with the complexity of the real-world task. Rather than responding to the many obstacles they encountered by simplifying the problem formulation or reducing the scope of analysis to something immediately computationally tractable, the coordinators created or updated datasets, commissioned hypothetical complaints and document requests, and devised new administrative and evaluation protocols, all to better approximate the complexity of the real world and to reflect lessons learned from previous iterations. Their aim was to develop systems that could actually address the scalability challenges civil discovery was confronting in the face of exponential growth in digital data.

Between the lines

In a contemporary AI research and development landscape that places much of its focus on competitive quantitative benchmarking, and that often leaves out the perspective of domain stakeholders, we should ask what we can learn from this precedent, in which stakeholders were brought in not only as a source of ground truth but also as collaborators who helped define requirements, establish evaluation protocols, question researcher assumptions, and holistically evaluate AI outputs. This naturally raises questions about what is required to better incentivize AI researchers and designers to involve domain stakeholders throughout the AI innovation lifecycle, and how to better manage their impulse to reduce the scope of real-world problems to what is immediately computationally tractable.

Our research illustrates, in particular, how questions of participation and problem formulation are intimately connected with how researchers and stakeholders are incentivized to work together over time and across institutions. Without institutional support, there is a real risk that participatory interventions by AI researchers and designers are reduced to discrete interactions with users and stakeholders, lacking the sustained engagement needed to produce designs that adequately respond to the complexity of real-world practice.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
