
Brave: what it means to be an AI Ethicist

September 13, 2021

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Olivia Gambelin]


Overview: The position of AI Ethicist is a recent arrival to the corporate scene, and one of its key novelties is the importance of bravery. Whether the role is taken seriously or treated as a PR stunt, the ability to decipher right from wrong must be accompanied by the courage to act on it.


Introduction

The position of AI Ethicist is a recent arrival to the corporate scene. Tasked with the ethical evaluation of AI systems, the role can at times feel lonely. Being potentially the sole objector to the deployment of an AI product that could earn your company a healthy profit is a scary thought, no matter how sure you are of your position. Hence, the AI Ethicist’s role requires bravery. Yet the AI Ethicist is not the only agent operating in the Ethical AI space.

Key Insights

AI Ethics is not just for the AI Ethicist

An important distinction is that an AI Ethicist is not the only person who engages in AI Ethics. With AI stretching into multiple walks of life and business practices, a sole AI Ethicist could not capture all the perspectives that need to be considered. Hence, technologists, data scientists, lawyers, and the public all form part of the field’s multidisciplinary nature, with different backgrounds suited to identifying different types of ethical risk: a lawyer spotting a tricky definition used to describe an AI system, say, or a member of the public explaining how the system would affect their life.

Autonomous vehicles illustrate this clearly. While an Ethicist can comment on the traditional Trolley Problem, data engineers must also understand how to incorporate that thinking into hard code (a hypothetical sketch of what this might look like follows below). Consultation with the broader public, especially the older population, can then help clarify the requirements these vehicles are meant to fulfil. All in all, just because the AI Ethicist’s job title is semantically closest to AI Ethics does not mean they are the sole actor in the space.
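As a loose illustration (my own, not from the paper), the sketch below shows one way such thinking might end up as hard code: a planner that treats a pedestrian-risk ceiling as a hard constraint applied before any optimisation, rather than as one weight among many. Every name, threshold, and number here is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """A candidate action the vehicle's planner is weighing up."""
    name: str
    pedestrian_risk: float  # estimated probability of harming a pedestrian
    passenger_risk: float   # estimated probability of harming an occupant
    comfort_score: float    # higher = smoother, more convenient ride

# Hypothetical hard ceiling, agreed between engineers and ethicists.
MAX_PEDESTRIAN_RISK = 0.001

def select_maneuver(candidates: list[Maneuver]) -> Maneuver:
    """Filter out maneuvers that breach the ethical constraint, then
    optimise among the rest. The constraint is lexically prior: no
    comfort gain can buy back pedestrian risk above the threshold."""
    permissible = [m for m in candidates
                   if m.pedestrian_risk <= MAX_PEDESTRIAN_RISK]
    if not permissible:
        # No option clears the bar: fall back to minimising pedestrian risk.
        return min(candidates, key=lambda m: m.pedestrian_risk)
    return max(permissible, key=lambda m: m.comfort_score - m.passenger_risk)

options = [
    Maneuver("swerve",   pedestrian_risk=0.0004, passenger_risk=0.02,  comfort_score=0.3),
    Maneuver("brake",    pedestrian_risk=0.0009, passenger_risk=0.01,  comfort_score=0.6),
    Maneuver("continue", pedestrian_risk=0.0100, passenger_risk=0.001, comfort_score=0.9),
]
print(select_maneuver(options).name)  # -> "brake"
```

The design point is the ordering: the ethical rule prunes the option set before optimisation, so no amount of comfort can trade against it. Where that threshold sits is exactly the kind of question the Ethicist and the engineers must settle together.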

The role of an AI Ethicist

Nevertheless, an AI Ethicist still has a role to fill within the field. The job includes potentially being the only member of a team to veto an AI product that could earn your company a healthy profit. Whilst other team members could be “silenced by a profit margin”, an AI Ethicist is expected to draw on moral principles to help decipher what is right and wrong within an AI context before applying their deduction to concrete examples. That conclusion then needs to be presented empathetically so as not to provoke defensive responses.

It is also the AI Ethicist’s responsibility to maintain objectivity in ethically charged situations throughout this process. As a result, the Ethicist may become the default arbiter of responsibility when consulted on where potential ethical faults in an AI product lie. To do this effectively, proficiency in the design, development, and deployment of the AI system at hand is paramount. This does not mean the Ethicist must be fluent in every ethical system in existence, but rather that they must be fluent in their industrial context.

Part of understanding the context lies in recognising both the logical and the illogical inputs to a decision. There is no point in appealing purely to logic when trying to explain an illogical decision, which makes awareness a vital tool for an AI Ethicist. One example could be IBM releasing its facial recognition technology despite the bias problems that resulted. Here, it does not help to ask ‘why did they release a harmful product?’; it is more useful to examine the other factors in the decision: there could have been a lack of information about the potential for bias, or internal company pressure to release the product. It is not the AI Ethicist’s job to excuse any form of industry behaviour, but to be sensitive to non-logical factors.

All of this requires bravery.

Why bravery is needed

An AI Ethicist must be prepared to walk into a room where they are the only one who disagrees with an AI proposal. This also means that the AI Ethicist becomes the focal point of responsibility when discussing ethical decisions, and may be used as a scapegoat should the product not be launched. Cases may arise where a moratorium results, placing the blame more on society ‘not being ready’ than on the AI Ethicist being difficult.

However, the policies that result from a moratorium are not guaranteed to be watertight. Some procedures may command only the bare minimum for a compliant AI product yet still leave room for an AI Ethicist to give a red light. A company might, for example, keep the raw data for an AI system private from external parties in one national context (as mandated by law) but not do so in another. So, while the company is technically compliant, an AI Ethicist may still need to step in to protect its reputation from damage. Doing so requires bravery.

Between the lines

With the AI Ethicist position becoming more and more prominent, certain qualities are required to prevent it from becoming a marketing stunt. The paper claims that bravery is one of them, and I wholeheartedly agree. One thing I believe can help, as mentioned in my last research summary, is involving more than one AI Ethicist. A company with AI Ethicists distributed throughout its teams will pick up and discuss ethical problems far more quickly. Nevertheless, every one of these positions, no matter how many there are, will require bravery.
