Montreal AI Ethics Institute


Brave: what it means to be an AI Ethicist

September 13, 2021

šŸ”¬ Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Olivia Gambelin]


Overview: The position of AI Ethicist is a recent arrival to the corporate scene, and one of its key novelties is the importance of bravery. Whether the role is taken seriously or treated as a PR stunt, the ability to decipher right from wrong must be accompanied by the ability to be brave.


Introduction

The position of AI Ethicist is a recent arrival to the corporate scene. Tasked with the ethical evaluation of AI systems, the role can at times feel lonely: being potentially the sole objector to the deployment of an AI product that could earn your company a healthy profit, however sure you are of your position, is a daunting prospect. Hence, it is important to note that the AI Ethicist’s role requires bravery. Yet the AI Ethicist is not the only agent operating in the Ethical AI space.

Key Insights

AI Ethics is not just for the AI Ethicist

An important distinction is that an AI Ethicist is not the only one who engages in AI Ethics. With AI stretching into multiple walks of life and business practices, a sole AI Ethicist could not capture all the perspectives that need to be considered. Hence, technologists, data scientists, lawyers, and the public all form part of the field’s multidisciplinary nature. Different backgrounds are better suited to identifying different types of ethical risk: a lawyer might spot a tricky definition used to describe an AI system, while a member of the public might explain how the system would affect their life.

Autonomous vehicles illustrate this clearly. While an Ethicist can comment on the traditional Trolley Problem, data engineers must also understand how to incorporate that thinking into code. Beyond that, consultation with the broader public can clarify the requirements these vehicles are meant to fulfil, especially for the older population. All in all, just because the AI Ethicist’s job title is semantically closest to AI Ethics does not mean they are the sole actor in the space.

The role of an AI Ethicist

Nevertheless, an AI Ethicist still has a role to fill within the field. The job can include being the only member of a team to veto an AI product that could earn the company a healthy profit. While other team members could be ā€œsilenced by a profit marginā€, an AI Ethicist is expected to draw on moral principles to help decipher what is right and wrong within an AI context before applying that deduction to concrete examples. The result then needs to be presented empathetically so as not to provoke defensive responses.

It is also the AI Ethicist’s responsibility to maintain objectivity in ethically charged situations. As a result, the Ethicist may become the default arbiter of responsibility when consulted on where the potential ethical faults in an AI product lie. To do this effectively, proficiency in the design, development and deployment of the AI system at hand is paramount. This does not mean the Ethicist must be fluent in every ethical system in existence, but rather that they must be fluent in their industrial context.

Part of understanding the context lies in recognising both the logical and illogical inputs to a decision. There is no point in appealing solely to logic when trying to explain an illogical decision, which makes awareness a vital tool for an AI Ethicist. One example could be how IBM released its facial recognition technology despite the bias problems that resulted. Here, it does not help to ask ā€˜why did they release a harmful product?’; it is better to examine the other factors in the decision. There could have been a lack of information about the potential for bias, or internal company pressure to release the product. It is not the AI Ethicist’s job to excuse any form of industry behaviour, but to be sensitive to non-logical factors.

All of this requires bravery.

Why bravery is needed

An AI Ethicist must be prepared to walk into a room as the only person who disagrees with an AI proposal. This also means the AI Ethicist becomes the focal point of responsibility when ethical decisions are discussed, and may be used as a scapegoat should the product not be launched. Cases may arise where a moratorium results, placing the blame on society ā€˜not being ready’ rather than on an AI Ethicist being difficult.

However, the policies that result from a moratorium are not guaranteed to be water-tight. Some procedures may command only the bare minimum for a compliant AI product, yet still leave room for an AI Ethicist to give a red light. A company might keep the raw data for an AI system private from external parties in one national context (as mandated by law) but not do so elsewhere. So, while technically compliant, an AI Ethicist may still need to step in to guard against damage to the company’s reputation. To do so requires bravery.

Between the lines

With the AI Ethicist position becoming more and more prominent, certain qualities are required to prevent it from becoming a marketing stunt. The paper claims that bravery is one of them, and I wholeheartedly agree. One thing I believe can help, as mentioned in my last research summary, is having more than one AI Ethicist involved: AI Ethicists distributed throughout the company would allow ethical problems to be picked up and discussed far more quickly. Nevertheless, every one of these positions, no matter how many there are, will require bravery.

