
FeedbackLogs: Recording and Incorporating Stakeholder Feedback into Machine Learning Pipelines

September 6, 2023

🔬 Research Summary by Matthew Barker, a recent graduate from the University of Cambridge, whose research focuses on explainable AI and human-machine teams.

[Original paper by Matthew Barker, Emma Kallina, Dhananjay Ashok, Katherine M. Collins, Ashley Casovan, Adrian Weller, Ameet Talwalkar, Valerie Chen, and Umang Bhatt]


Overview: As machine learning (ML) pipelines affect an increasing array of stakeholders, there is a growing need to document how stakeholder input is recorded and incorporated. We propose FeedbackLogs, an addendum to existing ML pipeline documentation, to track the process of collecting feedback from multiple stakeholders. Our online tool for creating FeedbackLogs, along with examples, can be found here.


Introduction

Who decides how a model is designed? Prior work has emphasized that stakeholders, individuals who interact with or are affected by machine learning (ML) models, should be involved in the model development process. However, their unique perspectives may not be adequately accounted for by the practitioners responsible for developing and deploying models (e.g., ML engineers, data scientists, UX researchers). We identify a gap in the existing literature around documenting how stakeholder input was collected and incorporated into the ML pipeline, which we define as a model’s end-to-end lifecycle, from data collection to model development to system deployment and ongoing usage.

A lack of documentation can create difficulties when practitioners attempt to justify why certain design decisions were made throughout the pipeline: this may be important for compiling defensible evidence of compliance with governance practices, anticipating stakeholder needs, or participating in the model auditing process. While existing documentation literature (e.g., Model Cards and FactSheets) focuses on providing static snapshots of an ML model, as shown in Figure 1 (Left), we propose FeedbackLogs, a systematic way of recording the iterative process of collecting and incorporating stakeholder feedback.

Key Insights

Design of a FeedbackLog

The FeedbackLog is constructed during the development and deployment of the ML pipeline and updated as necessary throughout the model lifecycle. While the FeedbackLog contains a starting point and a final summary to document the start and end of stakeholder involvement, its core is the records documenting practitioners’ interactions with stakeholders. Each record contains the content of the feedback provided by a particular stakeholder and how it was incorporated into the ML pipeline. The process for adding records to a FeedbackLog is shown in purple in Figure 1 (Right). Over time, a FeedbackLog reflects how the ML pipeline has evolved due to these interactions between practitioners and stakeholders.

We propose a template-like design for FeedbackLogs with three distinct components (shown in Figure 1): a starting point, one or more records, and a final summary.
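
To make this structure concrete, the sketch below shows one way a FeedbackLog could be represented in code. This is our own illustration, not the schema used by the authors’ online tool; the field names simply mirror the components described in this summary.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Record:
    """One self-contained practitioner-stakeholder interaction."""
    elicitation: str    # who provided feedback and why they were asked
    feedback: str       # what feedback the stakeholder provided
    incorporation: str  # which updates were considered, where, when, and why
    summary: str        # the overall effect of the updates applied


@dataclass
class FeedbackLog:
    """Addendum to existing ML pipeline documentation (e.g., Model Cards)."""
    starting_point: str  # pipeline state before any stakeholder outreach
    records: List[Record] = field(default_factory=list)
    final_summary: str = ""  # pipeline state after all logged updates
```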

Starting Point

The starting point describes the state of the ML pipeline before the practitioner reaches out to any relevant stakeholders. It might contain information on the practitioner’s objectives, assumptions, and current plans. A starting point may include descriptions of the data (such as Datasheets), the metrics used to evaluate the models, or policies regarding system deployment. A proper starting point allows auditors and practitioners to understand when in the development process the gathered feedback was incorporated and defensibly demonstrates how specific feedback led to changes in the metrics.

Records

The feedback from stakeholders is contained in the records section, which can house multiple records. Each record in a FeedbackLog is a self-contained interaction between the practitioner and a relevant stakeholder. It consists of how the stakeholder was asked for feedback (elicitation), the stakeholder’s response (feedback), how the practitioner used the stakeholder input to update the ML pipeline (incorporation), and a summary of the overall effect of those updates. To make these four sections more concrete, we provide questions that should be answered when writing a record (a worked sketch follows the list):

  1. Elicitation – Who is providing feedback and why?
  2. Feedback – What feedback is provided?
  3. Incorporation – Which updates are considered, and where, when, and why?
  4. Summary – What is the overall effect of the update(s) applied?
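
As a sketch of how these questions map onto a single record, consider the following hypothetical entry. The scenario (a loan-approval model reviewed by a compliance officer) is invented for illustration and is not one of the paper’s real-world examples.

```python
# Hypothetical record; the scenario is invented for illustration only.
example_record = {
    "elicitation": "A compliance officer reviewing the loan-approval model "
                   "was asked for feedback during a quarterly audit.",
    "feedback": "Requested that approval-rate parity across age groups be "
                "reported alongside the existing accuracy metrics.",
    "incorporation": "Approval-rate parity by age group was added to the "
                     "evaluation suite before the next model release.",
    "summary": "The evaluation dashboard now tracks one additional fairness "
               "metric; the model itself is unchanged.",
}
```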

Final Summary

The final summary answers the same questions as the starting point, i.e., which dataset(s) and models are used after the updates and which metrics are used to track model performance. Proper documentation of the finishing point of the FeedbackLog allows reviewers to clearly establish how the documented feedback led to concrete and quantifiable changes within the ML pipeline.
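
For example, if the starting point and the final summary record the same evaluation metrics, a reviewer can see at a glance how the logged feedback changed them. A minimal sketch, with invented metric names and values:

```python
# Hypothetical metric snapshots from a starting point and a final summary.
starting_metrics = {"accuracy": 0.91, "approval_rate_parity": None}
final_metrics = {"accuracy": 0.90, "approval_rate_parity": 0.97}

# Report how each metric changed over the course of the logged feedback.
for name, after in final_metrics.items():
    before = starting_metrics.get(name)
    print(f"{name}: {before} -> {after}")
```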

FeedbackLogs in Practice

We engaged directly with ML practitioners to explore how FeedbackLogs would be used in practice. Through interviews, we surveyed the perceived practicality of FeedbackLogs. Furthermore, we collected three real-world examples of FeedbackLogs from practitioners across different industries. Each example FeedbackLog was recorded at a different stage in the ML model development process, demonstrating the flexibility of FeedbackLogs to account for feedback from various stakeholders. The examples show how FeedbackLogs serve both as a defensibility mechanism in algorithmic auditing and as a tool for recording updates based on stakeholder feedback.

Expected Benefits of Implementing FeedbackLogs

The practitioners we interviewed confirmed many of the benefits of FeedbackLogs we had anticipated, e.g., the predefined structure that allows for fast information gathering and the benefits regarding audits, accountability, and transparency. The practitioners also suggested that FeedbackLogs improve communication and knowledge-sharing within organizations. Additionally, an interviewee noted how FeedbackLogs can serve as a repository of past mistakes, solutions, and best practices: if an issue emerged, the log could be used to trace its source and to identify past responses to similar issues, as well as the (long-term) effects of those responses.

Between the lines

The need for FeedbackLogs arises from increasingly complex ML development processes, which typically collect and incorporate feedback from a variety of stakeholders. FeedbackLogs provide a way to systematically record this feedback from developers, UX designers, end-users, testers, and regulators. The emerging popularity of large language models that collect feedback from many end-users further highlights the need for FeedbackLogs, among other forms of documentation in the industry.

However, practitioners anticipated several challenges in the practical implementation of FeedbackLogs, such as potential privacy issues if sensitive feedback is recorded. There are also logistical challenges to implementing FeedbackLogs at scale without significantly burdening practitioners. We hope future versions of FeedbackLogs address these concerns and usher in the development of extensible tools that empower the voices of diverse stakeholders.

