Montreal AI Ethics Institute
Ethics as a service: a pragmatic operationalisation of AI Ethics

March 2, 2022

šŸ”¬ Research summary by Vidhi Chugh, an award-winning AI/ML innovation leader and an advocate for the ethical and responsible use of AI. She has conducted several workshops demonstrating how to integrate ethical principles into AI-enabled products.

[Original paper by Jessica Morley, Anat Elhalal, Francesca Garcia, Libby Kinsey, Jakob Mökander, and Luciano Floridi]


Overview: As AI becomes increasingly ubiquitous, awareness of its ethical issues becomes increasingly necessary. Current regulations, however, are inadequate to protect against potential AI harms, and a stream of guidelines, frameworks, and ethics codes has emerged to fill the gap. This paper discusses how to effectively operationalize AI ethics in algorithm design.


Introduction

The gap between esoteric ethical AI principles and their practical implementation is rendering many ethics frameworks futile. Though the development of principles-based policies and frameworks was a significant step in the evolution of AI governance, they remain too abstract from an implementation perspective. The teams doing the groundwork, i.e. developers and engineers, need a way to translate principles into practice objectively. The paper describes how current translational tools are either too flexible, leading to wasted effort, or too strict, precluding the ā€˜right’ interpretation and implementation of ethics. The authors introduce ā€œEthics as a Serviceā€ by drawing an analogy to the cloud computing model and show how this middle path helps in ethical algorithm design.

Ethics – abstract concept or objective tool?

The key to effective ethical AI implementation lies in equipping AI practitioners with not only what to do but also how to do it. Translational tools have helped raise awareness and interpret principles within research forums and organizations, but their impact, and external validation of how well they help disadvantaged groups, has yet to be gauged.

According to the Global Inventory of AI Ethics Guidelines, maintained by AlgorithmWatch, 160 documents currently discuss the principles of beneficence, non-maleficence, autonomy, justice, and explicability. The authors illustrate with an example how a statement such as ā€œAI systems may be discriminatoryā€ is too vague to embed ethics into design. The risks arising from such broad, generic guidance include ethics washing, ethics lobbying, ethics dumping, and ethics shirking, among others. While the principles lay the foundation, the tools focus on the ā€˜how’ of technical specifications.

Hence, the need of the hour is to bridge the gap between abstract principles and technical tools, which is what motivates the concept of ā€˜Ethics as a Service’.

Limitations of traditional tools

  • Open to manipulation: implementation depends on the practitioner’s understanding of a principle rather than on society’s preferred understanding
  • Diagnostic rather than prescriptive: 
    • For example, a tool may flag biased data but not suggest measures to mitigate the bias
    • The parameters for assessing fairness, transparency, or accountability are set by the practitioners themselves, which undermines objectivity
    • It is unclear who the decision-maker is: the tool or its user
    • Ownership of the associated risks and injustices is likewise unclear
  • Viewed as a one-time compliance evaluation: this induces a false sense of security unless monitoring continues throughout the project lifecycle
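The diagnostic-but-not-prescriptive limitation can be illustrated with a minimal sketch in plain Python. The data, group labels, and the 0.8 threshold (the ā€œfour-fifthsā€ rule of thumb) are illustrative assumptions, not taken from the paper or any specific toolkit: the check flags a demographic-parity gap but leaves the choice of mitigation entirely to the practitioner.

```python
# Minimal diagnostic fairness check: it flags a disparity but prescribes nothing.
# Data and the 0.8 threshold are illustrative assumptions.

def selection_rate(outcomes):
    """Fraction of positive decisions in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_ratio(outcomes_a, outcomes_b):
    """Ratio of the lower group's selection rate to the higher one's."""
    rate_a, rate_b = selection_rate(outcomes_a), selection_rate(outcomes_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# 1 = positive decision (e.g. loan approved), 0 = negative, one entry per applicant.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # selection rate 0.75
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # selection rate 0.375

ratio = demographic_parity_ratio(group_a, group_b)
if ratio < 0.8:  # four-fifths rule of thumb
    # Diagnosis only: the tool says *that* the data looks biased,
    # not *how* to mitigate it; that judgment stays with the practitioner.
    print(f"Possible disparate impact: parity ratio = {ratio:.2f}")
```

Here the check fires (the ratio is 0.50), but whether to reweight the data, adjust thresholds, or revisit the features remains an open, context-dependent decision, which is exactly the gap the paper describes.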

Because ethical implications vary with the domain, context, specific product, country, algorithm, and even stage of deployment, it is difficult to assess their non-deterministic impact and design a one-size-fits-all solution.

The paper proposes ethical AI practice as a reflective development process, highlighting that practitioners also need to be made aware of their own subjectivity and biases.

Effective operationalization requires the organization to ensure:

  • Inclusion and discussion: not only agreement among all practitioners involved, but also periodic review of the product’s effect on users
  • A repeatable process for attaining ethical justification and environmental sustainability within the specific context
  • Appropriate oversight measures during the validation, verification, and evaluation stages

External audits

The paper highlights the following frameworks for their significant role in operationalising ethics:

  • Aequitas: an open-source toolkit for bias and fairness audit
  • TuringBox: a platform that audits explainability
  • The AI auditing framework developed by the UK’s Information Commissioner’s Office, which ensures organizations comply with data protection requirements and honor fairness, accuracy, security, and fundamental rights
  • PwC’s Responsible AI framework
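As a rough illustration of what such audit toolkits compute, the sketch below builds a per-group table of false-positive rates, the kind of disparity report a bias audit produces. The records and groups are invented, and this is plain Python, not the actual API of Aequitas or any other listed framework:

```python
# Per-group audit table sketch: false-positive rate (FPR) by group.
# Records are invented for illustration; real toolkits such as Aequitas
# compute many more metrics plus cross-group disparity ratios.

from collections import defaultdict

# Each record: (group, predicted_label, true_label), with 1 = positive.
records = [
    ("A", 1, 0), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 1, 0), ("B", 0, 0), ("B", 1, 1),
]

def fpr_by_group(records):
    """False positives divided by actual negatives, computed per group."""
    false_pos = defaultdict(int)
    negatives = defaultdict(int)
    for group, predicted, actual in records:
        if actual == 0:
            negatives[group] += 1
            if predicted == 1:
                false_pos[group] += 1
    return {g: false_pos[g] / negatives[g] for g in negatives}

for group, rate in sorted(fpr_by_group(records).items()):
    print(f"group {group}: FPR = {rate:.2f}")
```

With the invented records, group A has 1 false positive out of 3 actual negatives and group B has 2 out of 3, so the audit surfaces a gap between the groups that a human reviewer must then interpret in context.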

However, the paper questions the efficacy of external audits, as they are conducted after deployment and are further complicated by legal issues around consumer data protection and trade secrets.

Citing the limitations around data access and the technical documentation needed to support independent, objective external analysis, the paper introduces Ethics as a Service.

The authors draw an analogy to the cloud computing model:

  • Software as a Service: a third party dictates ethical principles, guidelines, processes, and audits to ensure a positive outcome, which runs the risk of being too rigid
  • Infrastructure as a Service: practitioners are responsible for their internal processes and principles, which runs the risk of being too flexible
  • Platform as a Service: a bridge between devolved and centralized governance, sharing responsibility among different stakeholders:
    • an independent multi-disciplinary ethics advisory board
    • internal company employees


Between the lines

Assumptions, data collection processes, data limitations, proxy data curation, and the like need to be well documented to assist external audits, in addition to internal checks and code reviews. Notably, external AI ethics audits are relatively new and will play a crucial role in regulating AI products, as audits have long done in the financial industry. The amalgamation of the two approaches, introduced in the paper as Ethics as a Service, is one of the many steps needed to develop human-centric, trustworthy AI systems.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

Ā© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.