Montreal AI Ethics Institute

Democratizing AI ethics literacy

Why AI ethics is a critical theory

March 14, 2022

🔬 Research summary by Rosalie Waelen, a Ph.D. candidate in the Ethics of AI at the University of Twente in the Netherlands. Her research is part of the EU Horizon 2020 ITN project PROTECT under the Marie Skłodowska-Curie grant agreement No 813497.

[Original paper by Rosalie Waelen]


Overview: Analyzing AI ethics principles in terms of power reveals that they share a central goal: to protect human emancipation and empowerment in the face of a powerful emerging technology, AI. Because of this implicit concern with power, AI ethics can be called a critical theory.


Introduction

The fear of new technologies overpowering humankind is not just a theme in science fiction. The article ‘Why AI Ethics Is a Critical Theory’ shows that concern for AI’s power, and for what that power means for our own emancipation and empowerment, is fundamental to the emerging field of AI ethics. The central moral questions and the most commonly cited ethical guidelines in AI ethics can all be defined in terms of power. Doing so reveals that AI ethics has the characteristics of a critical theory: like any critical theory, it has an emancipatory goal and seeks not only to diagnose but also to change society.

Analyzing the issues in AI ethics in terms of power can move the field forward because it offers a common language in which to discuss and compare different issues. Furthermore, understanding ethical issues in AI in terms of power improves our ability to connect the ethical implications of specific AI applications to larger societal and political problems.

What constitutes a critical theory?

The term critical theory usually refers to the work of a group of philosophers known as ‘the Frankfurt School’. But when understood more broadly, a critical theory is any theory or approach that seeks to diagnose and change society in order to promote emancipation and empowerment. Hence, the concept of power plays a central role in critical theory.

Power is a contested concept. When taking a pluralistic approach to power, we could say that there are at least four important aspects to power. 1) Power is a disposition, something that you can gain or lose (by being empowered or disempowered, respectively). 2) We can exercise power over others, which creates a power relation. 3) Who has power and who is subjected to it is often determined by the structural, systemic power relations in a society. 4) Power can be constitutive, it can shape our behavior and self-development.

Analyzing AI ethics in terms of power

The pluralistic understanding of power offers a frame for analyzing ethical issues in AI. Take some of the most commonly raised ethical issues in AI: transparency, privacy, and fairness. Transparency is considered so important because it grants users the ability to know and control what happens with their information. In other words, it empowers users. Privacy is valued for the same reason: privacy, too, should give a person the power to control who has access to their information and to other aspects of the self. Fairness, on the other hand, is not so much about empowering users as about protecting them against systemic power asymmetries. Fairness can thus be understood as promoting emancipation.

Analyzing these and other issues in AI ethics in terms of power shows that the concern for emancipation and empowerment is at the core of the field. Given this central concern, combined with the fact that AI ethics is not just meant to criticize but also to change the role of technology in our lives and societies, it is concluded that AI ethics resembles a critical theory.

Benefits of a critical approach to AI ethics

The thesis that AI ethics is a critical theory not only offers a new perspective on the field but could also help to overcome some of the shortcomings of the currently dominant principled approach (that is, the development of ethical guidelines like ‘transparency’ and ‘privacy’ for the development and use of AI).

AI ethics principles and guidelines have been accused of being too abstract, offering little action guidance, and being insufficiently attuned to the social and political context of ethical issues. They have also been described as lacking a common aim. But defining the different ethical guidelines and moral questions in AI ethics in terms of power shows that the field does have a central aim: promoting human emancipation and empowerment. Moreover, discussing the ethical issues in AI in terms of power makes it easier to compare different issues. Translating ethical issues in AI into the language of power, which is also used by non-ethicists, makes the issues less abstract and can improve interdisciplinary collaboration. Finally, as power is a central theme in sociology and political science, this translation can deepen our understanding of the societal implications of AI and of the ways in which ethical implications tie into larger societal and political matters.

Between the lines

In recent years, a large number of AI ethics guidelines have been developed by academics, governments, and businesses. Although it is exciting to see the tremendous attention that AI ethics is getting, it is not clear how to move forward from here. Understanding AI ethics as a critical theory and discussing the ethical implications of AI in terms of power can bring unity to the numerous AI ethics initiatives and improve the interdisciplinary collaboration that is needed to put ethical principles into practice. Further research can support this effort by applying the proposed power analysis in different contexts and by investigating in what ways the tradition of critical theory can support AI ethics.


  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.