
Why AI Ethics Is a Critical Theory

March 2, 2022

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Rosalie Waelen]


Overview: How can we solve the problems associated with the principled approach to AI Ethics? One way is to treat AI Ethics as a critical theory. This begins with exploring how AI principles could all bear a common thread in the form of power, emancipation and empowerment.


Introduction

The principled approach has taken centre stage in AI Ethics, especially in the governmental arena. It aims to deal with the ethical problems that arise from AI as an object (such as facial recognition technology) and as a subject (like questions over moral agency). However, with complaints that the principles are too intangible, viewing AI Ethics as a critical theory can help translate these aims into more achievable goals. To show how, I'll present Waelen's main argument, then her account of what a critical theory is, and finally how this relates to AI Ethics.

Key Insights

The main argument

From my point of view, the main argument proposed is that AI Ethics is a critical theory, as it is fundamentally concerned with the emancipation and empowerment of humanity. Consequently, we should analyse AI Ethics through the critical theory lens to overcome some of the shortcomings of the principled approach to AI.

The principled approach has been the main plan of action for AI, with justice and fairness two of its staple themes. However, Waelen notes that these principles have often been considered too far-reaching. Framing these maxims in terms of power through the critical theory lens makes their desired goals more tangible. It also helps unite the different approaches to AI Ethics around one common goal, namely emancipating humanity.

To assess all of this, we should first look at what constitutes a critical theory.

What constitutes a critical theory?

A critical theory aims to ‘diagnose and change society’ (p. 3). Instead of working towards a predefined singular utopian future, critical theorists argue for ‘immanent transcendence’ (p. 3), where different futures are worked towards based on how the world currently is. In this endeavour, a critical theory is always looking to overcome constraints and restrictions on humanity. One way to do so is to examine the concept of power.

The concept of power

Different views of power are put under the microscope. For example, some believe power is a capacity (the dispositional view), whereas others believe power is the exercise of a capacity (the episodic view). Beyond these, the systemic view observes how social elements can hold power over people, while the constitutive view refers to power's effect on individuals themselves. Despite the final two views referring more to structures than to agents, Waelen deems all four compatible and necessary for examining AI Ethics as a critical theory.

To make this concrete, Waelen demonstrates the common thread of emancipation and empowerment in AI Ethics by showing the relation between power and AI principles. The connections are as follows:

  • Transparency and power – the individual is empowered to control their data through data processes being made transparent.
  • Justice, Fairness and Solidarity – these principles ensure that the power relations established between and amongst people are fair and do not divide society further.
  • Non-maleficence and Beneficence – the aim is to ensure AI does not get in the way of emancipating humanity and that harmful power relations do not develop.
  • Responsibility and Accountability – they inspire setting up checks and balances to avoid unjust power relations being established.
  • Privacy – paramount here is the empowerment of individuals to control access to their own information.
  • Freedom and autonomy – the goal is to make sure that the AI technology emancipates and empowers instead of restricting and limiting humanity.
  • Trust – serves to guarantee that humanity doesn’t have to worry about AI exerting unnecessary power over us.

Between the lines

I greatly appreciated how Waelen noted that there are other, non-Western ways to view the concept of power. One she mentions is my personal favourite, Ubuntu, which I believe ties nicely into critical theory's focus on humanity. The emphasis on emancipation and empowerment can help us see how what gives life to AI is human data and what gives it purpose is human need. Hence, if viewing AI Ethics as a critical theory helps to emphasise this, I am all for it.

