Montreal AI Ethics Institute

Democratizing AI ethics literacy


Why AI ethics is a critical theory

March 14, 2022

🔬 Research summary by Rosalie Waelen, a Ph.D. candidate in the Ethics of AI at the University of Twente in the Netherlands. Her research is part of the EU Horizon 2020 ITN project PROTECT under the Marie Skłodowska-Curie grant agreement No 813497.

[Original paper by Rosalie Waelen]


Overview: Analyzing AI ethics principles in terms of power reveals that they share a central goal: to protect human emancipation and empowerment in the face of a powerful emerging technology, AI. Because of this implicit concern with power, we can call AI ethics a critical theory.


Introduction

The fear of new technologies overpowering humankind is not just a theme we find in science fiction. The article ‘Why AI Ethics Is a Critical Theory’ shows that a concern for AI’s power, and for what that power means for our own emancipation and empowerment, is fundamental to the emerging field of AI ethics. The central moral questions and the most commonly cited ethical guidelines in AI ethics can all be defined in terms of power. Doing so reveals that AI ethics has the characteristics of a critical theory. Like any critical theory, AI ethics has an emancipatory goal and seeks not only to diagnose but also to change society.

Analyzing the issues in AI ethics in terms of power can help move the field forward, because it offers a common language for discussing and comparing different issues. Furthermore, understanding ethical issues in AI in terms of power improves our ability to connect the ethical implications of specific AI applications to larger societal and political problems.

What constitutes a critical theory?

The term critical theory usually refers to the work of a group of philosophers known as ‘the Frankfurt School’. But when understood more broadly, a critical theory is any theory or approach that seeks to diagnose and change society in order to promote emancipation and empowerment. Hence, the concept of power plays a central role in critical theory.

Power is a contested concept. Taking a pluralistic approach, we can say that there are at least four important aspects to power. 1) Power is a disposition, something that you can gain or lose (by being empowered or disempowered, respectively). 2) We can exercise power over others, which creates a power relation. 3) Who has power and who is subjected to it is often determined by the structural, systemic power relations in a society. 4) Power can be constitutive: it can shape our behavior and self-development.

Analyzing AI ethics in terms of power

The pluralistic understanding of power offers a frame for analyzing ethical issues in AI. Take some of the most commonly raised issues: transparency, privacy, and fairness. Transparency is considered so important because it grants users the ability to know and control what happens with their information. In other words, it empowers users. Privacy is valued for the same reason: it too should give a person the power to control who has access to their information and to other aspects of the self. Fairness, on the other hand, is not so much about empowering users as about protecting them against systemic power asymmetries. Fairness can therefore be understood as promoting emancipation.

Analyzing these and other issues in AI ethics in terms of power shows that the concern for emancipation and empowerment is at the core of the field. Given this central concern, combined with the fact that AI ethics is not just meant to criticize but also to change the role of technology in our lives and societies, it is concluded that AI ethics resembles a critical theory.

Benefits of a critical approach to AI ethics

The thesis that AI ethics is a critical theory not only offers a new perspective on the field, but could also help to overcome some of the shortcomings of the currently dominant principled approach (that is, the development of ethical guidelines such as ‘transparency’ and ‘privacy’ for the development and use of AI).

AI ethics principles and guidelines have been accused of being too abstract, insufficiently action-guiding, and insufficiently attuned to the social and political context of ethical issues. They have also been described as lacking a common aim. But defining the different ethical guidelines and moral questions in AI ethics in terms of power shows that the field does have a central aim: promoting human emancipation and empowerment. Moreover, discussing the ethical issues in AI in terms of power makes it easier to compare different issues. Translating ethical issues in AI into the language of power, which is also used by non-ethicists, makes the issues less abstract and can improve interdisciplinary collaboration. Finally, since power is a central theme in sociology and political science, this translation can improve our understanding of the societal implications of AI, as well as of the ways in which ethical implications tie into larger societal and political matters.

Between the lines

In recent years, a large number of AI ethics guidelines have been developed by academics, governments, and businesses. Although it is exciting to see the tremendous attention that AI ethics is getting, it is not clear how to move forward from here. Understanding AI ethics as a critical theory and discussing the ethical implications of AI in terms of power can bring unity to the numerous AI ethics initiatives and improve the interdisciplinary collaboration that is needed to put ethical principles into practice. Further research can support this effort by applying the proposed power analysis in different contexts and by investigating in what ways the tradition of critical theory can support AI ethics.



© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.