Montreal AI Ethics Institute

Democratizing AI ethics literacy

Responsible Design Patterns for Machine Learning Pipelines

August 7, 2023

🔬 Research Summary by Saud Alharbi, a Principal AI Engineer at CertKOR AI who is pursuing a Ph.D. in Computer Engineering at Polytechnique Montréal.

[Original paper by Saud Alharbi, Lionel Tidjon, and Foutse Khomh]


Overview: This research paper explores integrating ethical considerations into the machine learning (ML) pipeline. It examines the question, “What AI engineering practices and design patterns can be utilized to ensure responsible AI development?” and proposes a comprehensive approach to promote responsible AI design.


Introduction

As AI technologies continue to advance, ensuring responsible AI design becomes paramount. This research paper explores integrating ethical considerations into the ML pipeline, addressing data pre-processing, model selection, and evaluation. The aim is to establish guidelines and frameworks that promote ethical design practices in AI systems. New design patterns are introduced, emphasizing the importance of AI ethics orchestration and automated response (EOAR) for auditing and testing.

Key Insights

How can we design responsible AI systems? 

Responsible AI design is a critical aspect of AI development that cannot be overlooked. This research highlights the importance of integrating ethical principles throughout the ML pipeline. By doing so, we can build AI systems that minimize potential biases and harm. The findings of this research highlight the need for ongoing collaboration between different disciplines to ensure a holistic approach to ethical AI design. 

To achieve responsible AI design, the authors conducted a thorough review of existing frameworks, identified key elements, and proposed an encompassing system that spans all stages of the ML pipeline. They introduce innovative design patterns that address ethical considerations and highlight the importance of AI ethics orchestration and automated response (EOAR) for continuous auditing and testing.

The authors employ a comprehensive methodology to advance responsible AI design. Their approach encompasses a thorough literature review of existing frameworks, a survey of AI practitioners, and detailed thematic analyses of the survey findings. The study aimed to uncover the challenges and best practices in responsible AI design while gathering invaluable feedback on the proposed framework. Additionally, a comparative analysis of existing ML design patterns was conducted, identifying gaps and proposing innovative design patterns for responsible AI. This research contributes to the field by providing practical insights and solutions to promote ethical and responsible AI development.

In this research, a diverse range of experts from leading organizations and domains collaborated to provide valuable insights into responsible AI design. Participants included renowned organizations such as Ethically Aligned AI, Thomson Reuters, IBM, Conseil de l’innovation du Québec, Université de Sherbrooke, CSIRO (Australia), Polytechnique Montréal, and the Ministry of Health, Quebec. The participants’ expertise spanned various domains related to AI and technology, including AI ethics, AI adoption, data management, responsible AI, cybersecurity, law and technology regulation, software engineering, and technology transfer. The participants held diverse roles, ranging from AI security architects and CEOs to legal and compliance experts, data engineering advisors, innovation and AI adoption directors, professors, principal research scientists, information security officers, and responsible AI specialists.

Introducing the Responsible AI Design Patterns (RAIDPs) Framework

The research paper introduces the Responsible AI Design Patterns (RAIDPs) Framework, propelling responsible AI practices to new heights in machine learning pipelines. Figure 3 of the paper showcases the comprehensive RAIDPs Framework, providing stakeholders with a clear overview of its structure and goals. This framework is a crucial tool for understanding the relevance of design patterns to AI projects or initiatives. The RAIDPs Framework consists of several key components that are important in ensuring responsible AI design in machine learning pipelines. These components include:

  • Data collection: gathering the data used in the ML pipeline.
  • Pre-processing: cleaning and preparing the data for use in the ML pipeline.
  • Training: training the ML model on the collected data.
  • Testing: evaluating the performance of the trained model.
  • Inference: using the trained model to make predictions on new data.
  • Deployment: deploying the trained model in a production environment.
  • Post-deployment: monitoring the performance of the deployed model and making necessary updates or changes.

Extension-Related Patterns for Ethical Considerations

The RAIDPs Framework also introduces extension-related patterns that address ethical considerations in the ML pipeline. These patterns include:

  • Ethical Sandbox: promotes sandboxed experimentation to ensure ethical considerations are met.
  • AI Ethics Auto-Testing: facilitates automated ethical testing of AI models.
  • AI Ethics Patching: addresses ethical issues by providing mechanisms to fix and update AI models.
  • Zero-Trust AI: emphasizes not simply trusting ML models and encourages continuous monitoring.
  • AI Ethics Orchestration and Automated Response (EOAR): involves regular auditing and testing of ML models to ensure ethical principles are followed throughout their lifecycle.
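The paper describes these patterns at the design level rather than in code. As one hedged illustration of what an AI Ethics Auto-Testing check might look like, the sketch below computes a demographic-parity gap over model predictions and fails the pipeline when the gap exceeds a threshold; the metric choice, the 0.1 threshold, and the function names are assumptions for illustration, not the authors’ specification:

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    rates = {}
    for pred, group in zip(predictions, groups):
        pos, total = rates.get(group, (0, 0))
        rates[group] = (pos + (1 if pred == 1 else 0), total + 1)
    positive_rates = [pos / total for pos, total in rates.values()]
    return max(positive_rates) - min(positive_rates)

def ethics_gate(predictions, groups, max_gap=0.1):
    """Block a release when the fairness metric exceeds the threshold,
    the way an auto-testing pattern would gate deployment."""
    gap = demographic_parity_gap(predictions, groups)
    if gap > max_gap:
        raise ValueError(f"demographic parity gap {gap:.2f} exceeds {max_gap}")
    return gap

# Predictions balanced across groups "a" and "b" pass the gate:
gap = ethics_gate([1, 0, 1, 0], ["a", "a", "b", "b"])
print(gap)  # 0.0
```

A check like this, run automatically on every retrain, is one concrete way the continuous auditing that EOAR calls for could be enforced rather than left to manual review.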

Between the lines

This research emphasizes the significance of responsible AI design and provides a comprehensive framework for integrating ethical considerations into the ML pipeline. By following these guidelines and leveraging AI ethics orchestration and automated response, organizations can foster the development of ethically responsible AI systems. In the end, the aim is to create AI systems that not only excel in performance but also adhere to ethical principles. Additionally, future research should focus on evaluating the practicality and effectiveness of the proposed frameworks in real-world scenarios. The security of data management is another vital area that requires further exploration. Is there a schema for organizations to follow to mitigate risks in a specific use case?

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.