An Uncommon Task: Participatory Design in Legal AI

October 5, 2022

🔬 Research Summary by Fernando Delgado, a PhD candidate at Cornell University’s Bowers College of Computing and Information Sciences.

[Original paper by Fernando Delgado, Solon Barocas, Karen Levy]


Overview: Despite growing calls for participation in AI design, there are to date few empirical studies of what these processes look like and how they can be structured for meaningful engagement with domain stakeholders. In this paper, we examine a notable yet understudied AI design process in the legal domain, undertaken over a decade ago, that made use of novel participatory tactics whose impact still informs legal automation efforts today.


Introduction

Across the private and public sectors, expanding participation in AI design and development is increasingly proposed as a strategy for mitigating risks brought about by the integration of AI systems into socially complex contexts. But what does an AI design process actually look like when you bring in domain stakeholders to play a central role? And what do you need as supporting infrastructure to make participatory AI design a successful endeavor?

In this work, we examine, through a historical and ethnographic lens, a multi-year design process that took place between 2006 and 2011, in which a range of participatory approaches was deployed to develop a method for automating attorney review in evidentiary fact-finding. These participatory methods enabled litigators and other civil justice stakeholders to learn about the affordances and limitations of AI techniques to such a degree that they were able to co-design an overarching sociotechnical workflow that, more than a decade later, still guides ML implementations in the legal technology space.

Key Insights

The forging of an interactive simulation methodology

Our analysis reveals how computer scientists and litigators jointly developed an interactive simulation methodology that drew on common task framework methods still popular in AI research, while also leveraging in-depth qualitative approaches to model the complex litigation practice being targeted for automation. A key feature of this interactive, simulation-based approach was giving domain stakeholders a central role not only in labeling data, but also in specifying the target classifications, shaping the evaluation protocol, observing how technical teams gathered requirements and acted on them, and providing nuanced qualitative assessments of classifier output to complement quantitative accuracy metrics.
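To make the shape of such a protocol concrete, here is a minimal, purely illustrative Python sketch; the function name score_run, the document IDs, and the note text are all invented for this summary, not drawn from the paper. It scores a hypothetical binary responsive/non-responsive document classifier against attorney-supplied labels while keeping the attorneys' qualitative notes attached to the quantitative precision and recall report.

    # Illustrative sketch (names are assumptions, not the paper's):
    # score a binary "responsive vs. non-responsive" classifier against
    # attorney-provided labels, carrying qualitative notes alongside
    # the usual accuracy metrics.

    def score_run(predictions, attorney_labels, reviewer_notes):
        # predictions / attorney_labels: dicts mapping doc_id -> bool
        # (True = responsive); reviewer_notes: attorneys' free-text
        # observations about systematic errors.
        tp = sum(1 for d, p in predictions.items() if p and attorney_labels[d])
        fp = sum(1 for d, p in predictions.items() if p and not attorney_labels[d])
        fn = sum(1 for d, p in predictions.items() if not p and attorney_labels[d])
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if (precision + recall) else 0.0)
        # Qualitative feedback travels with the metrics, so the "why"
        # behind a miss is never separated from the score itself.
        return {"precision": precision, "recall": recall, "f1": f1,
                "reviewer_notes": list(reviewer_notes)}

    # Toy usage:
    preds = {"doc1": True, "doc2": False, "doc3": True}
    labels = {"doc1": True, "doc2": True, "doc3": False}
    notes = ["Misses responsive email threads that use project code names."]
    print(score_run(preds, labels, notes))

The design point this is meant to illustrate is simply that stakeholder judgments enter at two levels: as the ground-truth labels themselves, and as qualitative commentary that contextualizes the resulting numbers.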

Fostering a cohort of cross-disciplinary experts

In large part thanks to this structured collaboration, ML-driven methods tailored to the requirements of litigation practice were developed to meet the needs of the community; these methods remain in use today, a decade after the initial experimentation and design phase. Importantly, this process fostered a cohort of cross-disciplinary experts who learned how to effectively translate and design across computational and legal disciplinary boundaries, and who were instrumental to the early and principled integration of AI into U.S. civil litigation practice. This focus on stakeholder engagement and learning was key to developing an overarching AI design that has withstood the test of time.

Embracing the complexity of real-world tasks

Our analysis also finds that the coordinators of the multi-year design and evaluation effort were uniquely attuned and motivated to engage with the complexity of the real-world task. Rather than responding to the many obstacles they encountered by simplifying the problem formulation or reducing the scope of analysis to something immediately computationally tractable, the coordinators created or updated their datasets, commissioned hypothetical complaints and document requests, and devised new administrative and evaluation protocols, all to better approximate the complexity of the real world and to reflect the lessons learned from previous iterations, as the sketch below illustrates. Their aim was to develop systems that could actually address the scalability issues civil discovery was confronting in the face of exponential growth in digital data.
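As a purely hypothetical illustration of this iterative pattern (the function names, corpus entries, and protocol fields below are invented, not drawn from the paper), each round folds lessons learned back into both the test collection and the evaluation protocol, rather than freezing a fixed benchmark:

    # Hypothetical sketch: each iteration revises both the corpus and
    # the evaluation protocol in light of lessons learned. All names
    # here are illustrative assumptions.

    def revise_corpus(corpus, lessons):
        # e.g., fold newly commissioned hypothetical complaints and
        # document requests into the test collection
        return corpus + lessons.get("new_docs", [])

    def revise_protocol(protocol, lessons):
        # e.g., tighten adjudication rules for disputed relevance labels
        updated = dict(protocol)
        updated.update(lessons.get("protocol_changes", {}))
        return updated

    corpus = ["hypothetical_complaint_A", "document_request_B"]
    protocol = {"adjudication": "single assessor"}
    lessons_per_round = [
        {"new_docs": ["hypothetical_complaint_C"],
         "protocol_changes": {"adjudication": "appeal to a topic authority"}},
    ]
    for lessons in lessons_per_round:
        corpus = revise_corpus(corpus, lessons)
        protocol = revise_protocol(protocol, lessons)

    print(corpus)    # the corpus grows to approximate real-world complexity
    print(protocol)  # the protocol evolves with lessons from prior rounds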

Between the lines

In a contemporary AI research and development landscape in which much focus is placed on competitive quantitative benchmarking, and which often leaves out the perspective of domain stakeholders, we need to ask what we can learn from this precedent, in which stakeholders were brought in not only as a source of ground truth but also as collaborators who helped define requirements, establish evaluation protocols, question researcher assumptions, and holistically evaluate AI outputs. This naturally raises questions about what is required to better incentivize AI researchers and designers to involve domain stakeholders throughout the AI innovation lifecycle, and how to better manage the impulse on the part of AI researchers and designers to reduce the scope of real-world problems to what is immediately computationally tractable.

Our research in particular illustrates how questions of participation and problem formulation are intimately connected with how researchers and stakeholders are incentivized to work together over time and across institutions. Without institutional support, there is a real risk that our participatory interventions as AI researchers and designers are reduced to discrete interactions with users, lacking the sustained engagement with domain stakeholders needed to produce designs that adequately respond to the complexity of real-world practice.
