
Why AI ethics is a critical theory

March 14, 2022

šŸ”¬ Research summary by Rosalie Waelen, a Ph.D. candidate in the Ethics of AI at the University of Twente in the Netherlands. Her research is part of the EU Horizon 2020 ITN project PROTECT under the Marie Skłodowska-Curie grant agreement No. 813497.

[Original paper by Rosalie Waelen]


Overview: Analyzing AI ethics principles in terms of power reveals that they share a central goal: to protect human emancipation and empowerment in the face of the powerful emerging technology that is AI. Because of this implicit concern with power, we can call AI ethics a critical theory.


Introduction

The fear of new technologies overpowering mankind is not just a theme we find in science fiction. The article 'Why AI Ethics Is a Critical Theory' shows that concern for AI's power, and for what it means for our own emancipation and empowerment, is fundamental to the emerging field of AI ethics. The central moral questions and the most commonly cited ethical guidelines in AI ethics can all be defined in terms of power. Doing so reveals that AI ethics has the characteristics of a critical theory. Like any critical theory, AI ethics has an emancipatory goal and seeks not only to diagnose but also to change society.

Analyzing the issues in AI ethics in terms of power can move the field forward because it offers a common language for discussing and comparing different issues. Furthermore, understanding ethical issues in AI in terms of power improves our ability to connect the ethical implications of specific AI applications to larger societal and political problems.

What constitutes a critical theory?

The term critical theory usually refers to the work of a group of philosophers known as 'the Frankfurt School'. But when understood more broadly, a critical theory is any theory or approach that seeks to diagnose and change society in order to promote emancipation and empowerment. Hence, the concept of power plays a central role in critical theory.

Power is a contested concept. Taking a pluralistic approach, we could say that there are at least four important aspects of power. 1) Power is a disposition, something that you can gain or lose (by being empowered or disempowered, respectively). 2) We can exercise power over others, which creates a power relation. 3) Who has power and who is subjected to it is often determined by the structural, systemic power relations in a society. 4) Power can be constitutive: it can shape our behavior and self-development.

Analyzing AI ethics in terms of power

The pluralistic understanding of power offers a frame for analyzing ethical issues in AI. Take some of the most commonly raised issues: transparency, privacy, and fairness. Transparency is considered so important because it grants users the ability to know and control what happens with their information; in other words, it empowers users. Privacy is valued for the same reason: it too should give a person the power to control who has access to their information and to other aspects of the self. Fairness, on the other hand, is not so much about empowering users as about protecting them against systemic power asymmetries, so fairness can be understood as promoting emancipation.

Analyzing these and other issues in AI ethics in terms of power shows that the concern for emancipation and empowerment is at the core of the field. Given this central concern, and the fact that AI ethics is meant not just to criticize but also to change the role of technology in our lives and societies, the paper concludes that AI ethics resembles a critical theory.

Benefits of a critical approach to AI ethics

The thesis that AI ethics is a critical theory not only offers a new perspective on the field; it could also help overcome some of the shortcomings of the currently dominant principled approach (that is, the development of ethical guidelines such as 'transparency' and 'privacy' for the development and use of AI).

AI ethics principles and guidelines have been accused of being too abstract, offering little action guidance, and being insufficiently attuned to the social and political context of ethical issues. They have also been described as lacking a common aim. But defining the different ethical guidelines and moral questions in AI ethics in terms of power shows that the field does have a central aim: promoting human emancipation and empowerment. Moreover, discussing the ethical issues in AI in terms of power makes it easier to compare different issues. Translating ethical issues in AI into the language of power, which is also used by non-ethicists, makes the issues less abstract and can improve interdisciplinary collaboration. Finally, since power is a central theme in sociology and political science, this framing can improve our understanding of the societal implications of AI and of the ways in which ethical implications tie into larger societal and political matters.

Between the lines

In recent years, a large number of AI ethics guidelines have been developed by academics, governments, and businesses. Although it is exciting to see the tremendous attention that AI ethics is getting, it is not clear how to move forward from here. Understanding AI ethics as a critical theory and discussing the ethical implications of AI in terms of power can bring unity to the numerous AI ethics initiatives and improve the interdisciplinary collaboration that is needed to put ethical principles into practice. Further research can support this effort by applying the proposed power analysis in different contexts and by investigating in what ways the tradition of critical theory can support AI ethics.
