
Employee Perceptions of the Effective Adoption of AI Principles

March 23, 2022

🔬 Research summary by Stephanie Kelley, a PhD Candidate in Management Analytics at the Smith School of Business, Queen’s University, where she studies the ethics of artificial intelligence, data, and analytics.

[Original paper by Stephanie Kelley]


Overview: The proliferation of ethical issues related to the use of artificial intelligence (AI) has led many organizations to turn to self-regulatory initiatives, most commonly AI principles. This study investigates what organizations can do to effectively adopt these AI principles, in hopes of reducing the ethical issues related to the use of AI technologies in the future.


Introduction

Reports of organizations perpetuating bias and discrimination, jeopardizing customer privacy, and committing other ethical failures in their use of AI continue to dominate headlines. Organizations like Apple, Goldman Sachs, IBM, Microsoft, and Amazon have all been accused of unethical uses of AI in the past few years. Many of these organizations have put self-regulatory initiatives, including AI principles, in place as a possible method of reducing these ethical issues – but is having AI principles enough to prevent ethical harm?

The paper explores this question by asking: “according to the perceptions of employees who work with AI, what components might relate to the effective adoption of AIPs?”

To answer this question, 49 in-depth semi-structured interviews were conducted with individuals employed in financial services organizations who work with AI. The interview transcripts were coded following a general inductive approach, drawing on a positivist epistemology (which assumes the interviewees are direct holders of accurate information). 

The findings of the analysis suggest that there are eleven components that are important for the effective adoption of AI principles: communication, management support, training, ethics officer, reporting mechanism, enforcement, measurement, accompanying technical processes, sufficient technical infrastructure, organizational structure, and interdisciplinary approach.

Key Insights

Before proceeding to a summary of the main findings, a definition of AI principles (AIPs) is needed: an AIP is a formal document, developed or selected by an organization, that states normative declarations about how artificial intelligence ought to be used by its managers and employees. Given this definition, the task was then to determine which components relate to their effective adoption.

Components for effective adoption: Lessons learned from business codes

While AI principles are a new form of ethical documentation, business codes have been around for several decades and have consequently been studied significantly more than their nascent AI-focused counterparts. These past studies suggest that having a business code by itself does not lead to a reduction in unethical behavior, but that several components impact the adoption of business codes and improve their effectiveness. Building on this body of research, the study used semi-structured questions to ask AI experts whether the same components might also impact the effective adoption of AIPs.

Indeed, employee perceptions of the effective adoption of AIPs suggest that seven components common to the effective adoption of business codes also matter for AIPs. These components are discussed in turn:

Communication

Communication, the act of making employees aware of an AIP, is an important first step in its effective adoption, as simply having an AIP is not enough. Specifically, participants suggested six aspects of communication that could support effective adoption:

  1. reach,
  2. distribution channel,
  3. sign-off process,
  4. reinforcement,
  5. communication quality, and
  6. external communication.

Management Support

Management support, or employees knowing that local management, senior management, or both support the company’s AIP, aids in effective adoption. Support includes actions such as modeling appropriate ethical behavior, talking about the code, knowing and understanding the code, and generally taking the code seriously.

Training

Offering AIP training, whereby employees attend a training session or class to educate them on the AIP, was found to impact the effective adoption of AIPs. While the idea of mandatory training received pushback from participants, the mere existence of AIP training signaled that the AIPs were important, ultimately impacting effective adoption. Participants also highlighted the importance of internal training over external training.

Ethics Office(r)

Having either an ethics officer or office (a specific department or group that deals with ethics and conduct issues) or an ethics committee (a group of people in the organization that employees can turn to with their AIP concerns) was found to impact the effective adoption of AIPs.

Reporting Mechanism

The existence of a reporting mechanism, whereby an employee can report ethical concerns via a telephone line, app, email address, or other means, was found to impact effective adoption. Participants also noted the importance of a standardized procedure: a clear routine for reporting the ethical issue or complaint.

Enforcement

Enforcement mechanisms, such as audits, penalties for breaching the code, communicating violations, and incentive policies, all aid in the effective adoption of AIPs.

Measurement

Using some method to evaluate achievements and/or failures related to the AIP may signal that an organization is serious about the document. While only a couple of organizations today measure adherence to their AIPs, many noted interest in future measurement, suggesting its potential importance for effective adoption.

Components for effective adoption: Unique learnings for AIPs

In addition to the components that affect the effective adoption of both business codes and AIPs, open-ended questioning of participants identified four components that uniquely impact AIP adoption:

Accompanying technical processes

The existence of an accompanying technical process that provides detailed technical guidance on how to implement the AIP aids in its effective adoption. These processes are variously referred to as checklists, frameworks, assessments, and guidelines.

Sufficient technical infrastructure

Having sufficient technical infrastructure, specifically a complete inventory of AI projects and data and system compatibility, is important for the effective adoption of AIPs. Participants suggested that a complete AI inventory helped to “cascade” the AIPs to relevant individuals and ensure all models are “going through the checklist.”

Organizational structure

Participants suggested that centralized AI teams would aid in the effective adoption of AIPs, as such a structure would help with gathering a complete AI inventory and with distributing and reinforcing the AIPs.

Interdisciplinary approach

Almost every participant noted that the effective adoption of AIPs is a highly complex problem that has not yet been solved, but suggested that one way to deal with it is to take an interdisciplinary approach. This includes creating interdisciplinary teams, combining AI ethics with data ethics, hiring the right people, engaging with third-party experts, and engaging with regulators.

Between the lines

The findings of the study suggest that there are eleven components that are important for the effective adoption of AI principles: communication, management support, training, ethics officer, reporting mechanism, enforcement, measurement, accompanying technical processes, sufficient technical infrastructure, organizational structure, and interdisciplinary approach. 

This means that simply having AI principles will not be enough for organizations to prevent unethical AI outcomes; instead, they should also consider the eleven components uncovered in this study to aid in the effective adoption of their AI principles. The findings also suggest that organizations and researchers should treat AIPs as entities separate from existing business ethics codes.

There are, of course, limitations to the study: participants were all employed in one industry, public organizations and NGOs were not studied, snowball sampling could have created selection bias, and the qualitative study relies on self-reported data, which could be affected by social desirability bias. Future studies should explore additional industries, public organizations, and NGOs, and should gather direct behavioral evidence. Additional research to clarify the quantitative importance of each component is warranted to help organizations prioritize their AIP efforts and ultimately reduce unethical AI outcomes.

