Montreal AI Ethics Institute


Why AI Ethics Is a Critical Theory

March 2, 2022

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Rosalie Waelen]


Overview: How can we solve the problems associated with the principled approach to AI Ethics? One way is to treat AI Ethics as a critical theory. This begins with exploring how AI principles share a common thread: power, emancipation and empowerment.


Introduction

The principled approach has taken centre stage in AI Ethics, especially in the governmental arena. It aims to deal with the ethical problems that arise from AI as an object (such as facial recognition technology) and as a subject (like questions over moral agency). However, given complaints that the principles are too intangible, viewing AI Ethics as a critical theory can help translate these aims into more achievable goals. To show how, I’ll present Waelen’s main argument, then her account of what a critical theory is, and finally how this relates to AI Ethics.

Key Insights

The main argument

From my point of view, the main argument is that AI Ethics is a critical theory, as it is fundamentally concerned with the emancipation and empowerment of humanity. Consequently, we should analyse AI Ethics through the critical theory lens to overcome some of the shortcomings of the principled approach.

The principled approach has been the main plan of action for AI, with principles such as justice and fairness among its staples. However, Waelen notes that these principles have often been considered too far-reaching and intangible. Viewing these maxims in terms of power, through the critical theory lens, makes their desired goals more tangible. It also helps unite the different approaches to AI around one common goal, namely emancipating humanity.

To verify all of this, we should observe what constitutes a critical theory.

What constitutes a critical theory?

A critical theory aims to ‘diagnose and change society’ (p. 3). Instead of working towards a predefined singular utopian future, critical theorists argue for ‘immanent transcendence’ (p. 3), where different futures are worked towards based on how the world currently is. In this endeavour, a critical theory is always looking to overcome constraints and restrictions on humanity. One way to do so is to examine the concept of power.

The concept of power

Different views of power are put under the microscope. For example, some believe power is a capacity (the dispositional view), whereas others believe power is the exercise of a capacity (the episodic view). Furthering this, the systemic view observes how social elements can hold power over people, while the constitutive view notes power’s effect on shaping individuals. Despite the final two views referring more to structures than to agents, Waelen deems all four compatible and necessary for examining AI Ethics as a critical theory.

To make this thinking concrete, Waelen demonstrates the common thread of emancipation and empowerment in AI Ethics by showing the relation between power and AI principles. The connections are as follows:

  • Transparency and power – the individual is empowered to control their data through data processes being made transparent.
  • Justice, Fairness and Solidarity – these principles ensure that the power relations established between and amongst people are fair and do not divide society further.
  • Non-maleficence and Beneficence – the aim is to ensure AI does not get in the way of emancipating humanity and that harmful power relations do not develop.
  • Responsibility and Accountability – they inspire setting up checks and balances to avoid unjust power relations being established.
  • Privacy – paramount here is the empowerment of individuals to control access to their own information.
  • Freedom and autonomy – the goal is to make sure that the AI technology emancipates and empowers instead of restricting and limiting humanity.
  • Trust – serves to guarantee that humanity doesn’t have to worry about AI exercising unwarranted power over us.

Between the lines

I greatly appreciated how Waelen noted that there are other, non-Western ways to view the concept of power. One of those mentioned is my personal favourite, Ubuntu, which I believe ties nicely into the aim of AI Ethics as a critical theory: focussing on humanity. The emphasis on emancipation and empowerment can help us see how what gives life to AI is human data, and what gives it purpose is human need. Hence, if viewing AI Ethics as a critical theory helps to emphasise this, I am all for it.

