Montreal AI Ethics Institute

Democratizing AI ethics literacy


The Chief AI Ethics Officer: A Champion or a PR Stunt?

May 16, 2021

✍️ Column by Masa Sweidan (@masasweidan), our Business Development Manager.


We have reached a point where the far-reaching impacts of AI’s ability to identify, prioritize and predict can be felt in virtually every industry. Over the last few years, both researchers and practitioners have established that the power relations embedded in these systems can deepen existing biases, affect access to reliable information and shape free speech. Many organizations have attempted to stay relevant and keep pace with developments in AI Ethics by introducing the role of Chief AI Ethics Officer (CAIEO), a position that also goes by titles such as AI Ethicist, Ethical AI Lead and Trust and Safety Policy Advisor.

Although the creation of this role seems to be a step in the right direction, many are questioning whether the presence of a CAIEO will truly boost the adoption of AI Ethics within organizations and effectively shift the focus from creating guiding principles to implementing effective practices. Before examining the challenges and responsibilities of the profession, it would be helpful to frame this discussion with some context.

The role of an Ethicist has been around for some time, especially in healthcare. Also known as Clinical Ethicists or Bioethicists, these professionals are typically employed by large academic medical centers to help patients, families and medical teams work through health-related dilemmas. They often weigh questions of autonomy, beneficence, non-maleficence and justice in an attempt to reach the “right” choice or decision.

Within the context of AI, it has proven to be quite difficult to define the responsibilities that fall under this role, because ethical issues around AI are uniquely complex and nuanced, meaning that social, legal and financial consequences must be considered. It is important to highlight the distinction here between the field and the profession, since “the AI Ethicist is only one piece to the puzzle of AI Ethics.” As Olivia Gambelin explains, part of the confusion stems from the fact that the position itself is named so closely after the field, leading to the assumption that the individual with the title is the only one with enough power to create change in the area of AI Ethics.

This contributes to what I consider the root of most concerns about the CAIEO role. We should be extremely wary of treating AI Ethics as a separate mandate that a single officer executes on their own, rather than a responsibility shared by the entire organization. If other employees are not involved in ensuring that ethical AI standards are met, then all the responsibility falls on the shoulders of one person. Rather than being a siloed effort limited to risk management practices, the role should serve to consolidate ethics-related activities across the organization. However, this is much easier said than done.

Due to the interdisciplinary and philosophical nature of the role, Natasha Crampton explains that “it is impossible to reduce all the complex sociotechnical considerations into an exhaustive set of pre-defined rules.” To overcome this challenge, companies like Microsoft and Salesforce are developing processes, tools, training and other resources to ensure that their AI solutions reflect the original principles that were adopted.

The progress being made is certainly exciting, but many still question the incentives behind creating this role and wonder if it is just an add-on feeding into the bottom line. Arguments against the initial adoption of this role often include terms such as “ethics-washing” or “PR stunt” to convey the idea that this is an attempt for companies to avoid regulation. Ben Wagner further elaborates by stating that “the role of ‘ethics’ devolves to pre-empting and preventing legislation,” so financial and political goals can be masked by the language of ethics. 

This may seem extreme, but it touches on an important point that deserves its own analysis: at many global companies, there is a disconnect between scale and governance. Skeptics may view ‘ethics’ as the new ‘self-regulation’ for private companies unwilling to accept real regulatory solutions. As the dangers become increasingly clear, the answer should be better governance, not self-governance.

Moreover, several individuals are calling for a more general Ethics Officer, who would be responsible for the ethics of practices across the organization and could seek training on the ethical aspects of AI if they lack the necessary knowledge. The problem I see with this suggestion is that it strips the role of its specificity to AI-related challenges, which organizations must be able to identify if they hope to overcome them.

In this scenario, it is easy to anchor the CAIEO role in the general field of Business Ethics. This well-researched area takes many forms, ranging from the study of professional practices to the academic discipline, and tends to touch on various aspects of a firm’s relationship with its consumers, employees and society. However, the added context of AI creates new issues that can impact millions of people, meaning that “AI Ethicists are no longer dealing with person-to-person ethical issues, but rather machine to person.” Therefore, a specific position needs to be carved out within an organization to examine the increasingly complex implications of this technology.

After looking at the ambiguity surrounding this role, there is no doubt that the efficacy of the position will ultimately boil down to the individual who is hired. The job demands analytical thinking, clear communication and relationship management, but most importantly, it requires trust. After all, these champions will be responsible for leading the effort to weave ethical AI practices into the operational processes and DNA of the organization.

I foresee that ethical approaches to AI design and development, such as external participation and transparent decision-making procedures, will continue to improve. However, one rule must remain the same: AI Ethics cannot substitute for fundamental human rights. Moving forward, the organizations that employ a key driver at the executive level, such as a CAIEO, to build wider competence and adherence across the firm will be the leaders in this space.



  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.