Montreal AI Ethics Institute
Democratizing AI ethics literacy


Ethics as a service: a pragmatic operationalisation of AI Ethics

March 2, 2022

🔬 Research summary by Vidhi Chugh, an award-winning AI/ML innovation leader and an advocate for the ethical and responsible use of AI. She has conducted several workshops demonstrating how to integrate ethical principles into AI-enabled products.

[Original paper by Jessica Morley, Anat Elhalal, Francesca Garcia, Libby Kinsey, Jakob Mökander, Luciano Floridi]


Overview: As AI becomes increasingly ubiquitous, awareness of its ethical issues grows more urgent. Current regulation, however, is inadequate to protect against possible AI harms, and a stream of guidelines, frameworks, and ethics codes has emerged to fill the gap. This paper examines how AI ethics can be effectively operationalized in algorithm design.


Introduction

The gap between esoteric ethical AI principles and their practical implementation is rendering many ethics frameworks futile. Although the development of principles-based policies and frameworks was a significant step in the evolution of AI governance, they remain too abstract from an implementation perspective. The teams doing the groundwork, i.e. developers and engineers, need an objective way to translate principles into practice. The paper describes how current translational tools are either too flexible, leading to wasted effort, or too strict, lacking the ‘right’ interpretation and implementation of ethics. The authors introduce “Ethics as a Service” by analogy with the cloud computing model and show how this middle path supports ethical algorithm design.

Ethics – abstract concept or objective tool?

The key to effective ethical AI implementation lies in equipping AI practitioners with not only what to do but also how to do it. Translational tools have helped raise awareness and interpret principles within research forums and organizations, but their impact and external validation, in terms of helping disadvantaged groups, have yet to be gauged.

According to the Global Inventory of AI Ethics Guidelines, maintained by AlgorithmWatch, around 160 documents currently address the principles of beneficence, non-maleficence, autonomy, justice, and explicability. The authors illustrate how a statement such as “AI systems may be discriminatory” is too vague to embed ethics into design. The risks arising from such broad, generic guidance include ethics washing, ethics lobbying, ethics dumping, and ethics shirking, among others. While these principles are intended to lay the foundation, the tools focus on the ‘how’ of technical specifications.

Hence, the need of the hour is to bridge the gap between abstract principles and technical tools, which motivates the concept of ‘Ethics as a Service’.

Limitations of traditional tools

  • Open to manipulation: interpretation depends on the practitioner’s understanding of a principle rather than on society’s preferred understanding
  • Diagnostic rather than prescriptive: 
    • For example, a tool may flag biased data but not suggest measures to mitigate it
    • Parameters for assessing fairness, transparency, or accountability are set by practitioners, yet demand objectivity
    • It is unclear who the decision-maker is: the tool or the user
    • Ownership of the associated risks and injustices is likewise unclear
  • Viewed as a one-time compliance evaluation: this induces a false sense of security unless monitoring continues throughout the project lifecycle.
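The “diagnostic vs prescriptive” limitation above can be made concrete with a small sketch. The check below (a hypothetical illustration, not any specific toolkit; all names and the threshold are assumptions) flags a selection-rate disparity between groups but, like the tools criticised here, offers no mitigation on its own:

```python
# Diagnostic fairness check: flags a disparity, prescribes nothing.
# All names, data, and the 0.2 threshold are illustrative assumptions.

def selection_rates(outcomes, groups):
    """Fraction of positive outcomes per group."""
    rates = {}
    for g in set(groups):
        members = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return rates

def demographic_parity_gap(outcomes, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(outcomes, groups)
    return max(rates.values()) - min(rates.values())

# Toy data: 1 = loan approved, 0 = denied.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

gap = demographic_parity_gap(outcomes, groups)
if gap > 0.2:  # threshold chosen arbitrarily for illustration
    print(f"Flagged: selection-rate gap of {gap:.2f} between groups")
```

The tool stops at the flag: deciding whether a 0.2 threshold is acceptable, which fairness notion applies, and how to remediate remains with the practitioner, which is precisely the gap the paper identifies.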

Because ethical implications vary with the domain, context, specific product, country, algorithm, and even stage of deployment, it is difficult to assess their non-deterministic impact and design a one-size-fits-all solution.

The paper frames ethical AI practice as a reflective development process, highlighting that practitioners also need to be aware of their own subjectivity and bias.

Effective operationalization requires the organization to ensure:

  • Inclusion and discussion: not just agreement among the practitioners involved, but also periodic review of the product’s effect on users
  • A repeatable process for attaining ethical justification and environmental sustainability within the specific context
  • Appropriate oversight measures during the validation, verification, and evaluation stages

External audits

The paper highlights the following frameworks, citing their significant role in operationalising ethics:

  • Aequitas: an open-source toolkit for bias and fairness audits
  • TuringBox: a platform for auditing explainability
  • The AI auditing framework developed by the UK’s Information Commissioner’s Office, which checks that organizations comply with data protection requirements and honour fairness, accuracy, security, and fundamental rights
  • PwC’s Responsible AI framework
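To give a flavour of what such audit toolkits compute, the sketch below reports per-group error rates, the raw material of a bias audit. This is a generic illustration, not the actual API of Aequitas or any listed framework; the data and function names are assumptions:

```python
# Generic sketch of a group-level audit report: per-group false
# positive rate (FPR) and false negative rate (FNR), the kind of
# disparity table a bias-audit toolkit typically produces.

def group_error_rates(y_true, y_pred, groups):
    """Per-group FPR and FNR from true labels, predictions, and group ids."""
    report = {}
    for g in sorted(set(groups)):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        fp = sum(1 for i in idx if y_pred[i] == 1 and y_true[i] == 0)
        fn = sum(1 for i in idx if y_pred[i] == 0 and y_true[i] == 1)
        negatives = sum(1 for i in idx if y_true[i] == 0)
        positives = sum(1 for i in idx if y_true[i] == 1)
        report[g] = {
            "fpr": fp / negatives if negatives else 0.0,
            "fnr": fn / positives if positives else 0.0,
        }
    return report

# Toy audit: group "b" bears most of the false negatives.
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(group_error_rates(y_true, y_pred, groups))
```

Even here, the output is a table of disparities, not a verdict; interpreting and acting on it is where the external oversight discussed next comes in.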

However, the paper questions the efficacy of external audits, since they are conducted after deployment and are further complicated by legal issues arising from consumer data protection and trade secrets.

Citing the limitations around data access and the technical documentation needed for independent, objective external analysis, the paper introduces Ethics as a Service.

The authors draw an analogy from the cloud computing model:

  • Software as a Service: a third party dictates the ethical principles, guidelines, processes, and audits needed to ensure a positive outcome 
  • Infrastructure as a Service: practitioners are responsible for their internal processes and principles, which runs the risk of being too flexible
  • Platform as a Service: a bridge between devolved and centralized governance, distributing responsibility among different stakeholders:
    • an independent multi-disciplinary ethics advisory board
    • internal company employees

Source: Original paper

Between the lines

Assumptions, data collection, data limitations, proxy data curation, and the like must be well documented to assist external audits, in addition to internal checks and code reviews. Notably, external AI ethics audits are relatively new and should play a crucial role in regulating AI products, much as audits have long done in the financial industry. The amalgamation of the two approaches, introduced in the paper as Ethics as a Service, is one of the many steps needed to develop human-centric, trustworthy AI systems.
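The documentation this paragraph calls for can be kept as a structured, machine-readable record. The sketch below is a minimal illustration loosely in the spirit of model cards; every field name and value is an assumption for illustration, not a standard schema:

```python
# Minimal sketch of an audit-ready documentation record covering the
# items named above: assumptions, data sources and limitations, and
# proxy variables. Field names are illustrative, not a standard.
from dataclasses import dataclass, field, asdict

@dataclass
class AuditRecord:
    model_name: str
    assumptions: list = field(default_factory=list)
    data_sources: list = field(default_factory=list)
    data_limitations: list = field(default_factory=list)
    proxy_variables: dict = field(default_factory=dict)

record = AuditRecord(
    model_name="credit-scoring-v2",
    assumptions=["Applicants self-report income accurately"],
    data_sources=["2019-2021 loan applications"],
    data_limitations=["Under-represents applicants without credit history"],
    proxy_variables={"postcode": "may proxy for ethnicity"},
)
print(asdict(record))  # serialisable, so it can be handed to an auditor
```

Keeping such a record alongside the code means an external auditor need not reverse-engineer the team's assumptions from the model itself, easing the data-access limitations the paper raises.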

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.