Montreal AI Ethics Institute

Democratizing AI ethics literacy

AI Ethics Maturity Model

September 7, 2021

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Kathy Baxter]


Overview: Ethical AI is certainly a hot topic in the business world, but how can it be achieved? The Ethical AI Practice Maturity Model sets out four stages towards the end goal of an “end-to-end-ethics-by-design” model. Reaching that goal demands company-wide participation and a genuine passion for building ethical AI.


Introduction

Does your company build AI products? Does it have an ethical AI team? If not, how would such a team be established? The Ethical AI Practice Maturity Model aims to answer the latter. Stretching from an ad hoc review by a group of employees to ethical AI awareness coursing through a company’s veins, it offers us a roadmap. Calling for company-wide engagement alongside a passion for ethical AI, the end goal includes ethical thresholds an AI product must pass before it can be launched. The best way to illustrate how this can be achieved is to walk through the model itself.

Key Insights

  1. Ad Hoc

Questioning of the AI process at hand begins to take hold. Issues arise and are raised on an ad hoc basis. The question is no longer “can we do this?” but “should we do this?”. The resulting conversations provide good fuel for informal talks about the technology, helping to clarify the importance of these problems. Once the issues are known and employees can see them being dealt with, trust can start to develop between those designing the AI and the wider company.

However, the desired confidence takes time to develop. Building an ethical AI team that accumulates “small wins” can help consolidate its position in the AI process. Delivering results, big or small, helps create more advocates throughout the business and cultivates pivotal involvement from those at the top.

  2. Organised and Repeatable

Arriving at this stage means executives are now on board and responsible AI practices are being rewarded. The next step lies in convincing internal stakeholders to join the process as well. Demonstrating why engagement matters by explaining the risks AI carries is a sure way to get more employees to sign up. Moreover, contextualising those risks within the company’s own setting and ethical principles could prove even more gripping.

What executives must not do is simply “ethics wash” company employees. This entails drafting broad ethical principles, such as ‘AI must always do the right thing’, and sticking them on the company’s website. Instead, explaining how the company’s AI principles will actually be achieved is paramount for forming a successful ethical AI team.

Hence, this stage also includes the formation of the team itself. Given the different situations the team will face, different expertise will be required. Accordingly, the team should comprise diverse skill sets, backgrounds, and understandings. Furthermore, the team should not be evaluated on classic metrics like revenue generation, but on whether the AI systems are safe, and members should not be penalised for identifying ethical risks.

Given the need to identify these risks, questions of scale can be helpful to consider. What base knowledge of AI should all employees have? How would formal training convey this knowledge? Could teams working on AI easily loop in the ethical AI team? Whatever the answers to these questions, the resulting process needs to be sustained and managed in the long run.

  3. Managed and Sustainable

The training required for the desired base level of knowledge should only be mandatory for all employees where genuinely necessary. The company’s ethical principles ought to be common knowledge, but knowing how to mitigate bias in an AI system is only relevant for data scientists. Managing what the training enables employees to uncover is the next important step.

Coming across an ethical risk in an AI system is not something to be frowned upon. No AI system can be 100% bias-free, so stating what bias remains, how it is being mitigated, and the potential harms it could cause is the best way to deal with the problem. Appropriate channels are then needed for raising any harms that do occur (which can vary from person to person). Should your business stretch across different countries, the ethical review must ensure the AI system accommodates other languages and cultures. Dealing with bias for your American clients will not be the same as approaching your Taiwanese partners.
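The paper does not prescribe any tooling, but the practice of measuring and disclosing residual bias rather than claiming a model is bias-free can be sketched in a few lines of Python. Everything here is an illustrative assumption: the metric (a simple demographic parity gap between groups), the function names, and the 0.1 threshold are not from the model itself.

```python
def demographic_parity_gap(preds, groups):
    """Difference between the highest and lowest positive-prediction
    rates across groups. Hypothetical illustration only."""
    by_group = {}
    for pred, group in zip(preds, groups):
        by_group.setdefault(group, []).append(pred)
    rates = {g: sum(p) / len(p) for g, p in by_group.items()}
    return max(rates.values()) - min(rates.values())


def launch_review(preds, groups, threshold=0.1):
    """Disclose the measured gap alongside a pass/fail flag,
    instead of silently asserting the system is unbiased."""
    gap = demographic_parity_gap(preds, groups)
    return {"parity_gap": round(gap, 3), "within_threshold": gap <= threshold}
```

The point of returning the gap itself, not just a boolean, mirrors the paper’s advice: state what bias there is and how it is being handled, so the disclosure can feed the company’s reporting channels.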

  4. Optimised and Innovative

The final stage is the most desirable one to reach. The ethical AI team is no longer a central hub but is dispersed throughout the whole company. Products and resources can only be realised once their ethical debt is resolved, ensuring an “end-to-end-ethics-by-design” model. However, this does not mean that striving for improvement is halted. With “practice” being the keyword, ethical AI practice never reaches its conclusion: new innovations bring new techno-ethical issues, requiring even more elaboration from the diverse backgrounds of the ethical AI team. This stage may be the end goal, but the end goal is a refined process, not a product.

Between the lines

In my view, ethical AI practice is both necessary and sufficient to operationalise principles like transparency, fairness and equality. Consequently, any ethical review needs to happen early in the design process; otherwise, there is no time to make significant changes. Should this not happen, the “ethical debt” accrued by unethical AI models, though almost invisible during design, will become very tangible in the form of harm to the public. The Ethical AI Practice Maturity Model gives a company a roadmap to follow and underscores the vital point that change must come from everyone. Bravery is required, and it all starts with that first small win.


