
A Virtue-Based Framework to Support Putting AI Ethics into Practice

November 7, 2022

🔬 Research Summary by Thilo Hagendorff, an AI ethicist at the University of Tuebingen (Germany).

[Original paper by Thilo Hagendorff]


Overview: A virtue-based approach specific to the AI field is a missing component in putting AI ethics into practice, as virtue ethics has largely been disregarded in AI ethics research thus far. To close this research gap, a new paper describes how a specific set of “AI virtues” can advance the practical turn of AI ethics.


Introduction

Hitherto, all the major AI ethics initiatives have chosen a principled approach: they aim to affect AI research and development by stipulating lists of rules and standards. But, as a growing body of work in AI metaethics shows, this approach has many shortcomings. The principled approach has no reinforcement mechanisms; it is not sensitive to different contexts and situations; it fails to address the technical complexity of AI; and it uses terms and concepts that are too abstract to be put into practice. To remedy the last two shortcomings, AI ethics recently underwent a practical turn, stressing its intent to put principles into practice. Yet the typologies and guidelines on how to put AI ethics into practice still adhere to the principled approach. A hitherto largely underrepresented alternative, namely virtue ethics, seems to be a promising addition to AI ethics’ “principlism.”

Key Insights

Basic AI virtues

By using meta-studies of AI ethics guidelines, one can correlate principles with virtues by asking: which virtues translate into behavior that is likely to produce outcomes corresponding to the requirements of a given principle? Meta-studies show a consensus in AI ethics on a relatively fixed set of recurring principles that are requirements for ethical AI, among them fairness, privacy, safety, accountability, and explainability. From these principles, four basic AI virtues can be distilled: justice, honesty, responsibility, and care. Surprisingly, these four virtues suffice to cover all the established principles.

Justice as a virtue underpins motivations to develop fair machine learning algorithms and to use AI technologies only in those societal contexts where it is fair to apply them. Honesty fosters efforts to establish transparency regarding organizational structures; it also promotes the willingness to provide explainability or other forms of technological transparency. Responsibility leads to a heightened sense of accountability for AI technologies. Responsibility gaps can lead to unethical behavior, and they are especially prevalent in the context of AI; responsibility as a virtue acts as a counterweight to them. Care means developing a sense of others’ needs and the will to address them. It builds the bedrock for motivating professionals to prevent AI applications from causing direct or indirect harm.

Second-order AI virtues

However, basic AI virtues alone do not suffice to render AI ethics actionable. Even when practitioners possess them, ethical decision-making in practice faces many limitations. Factors of “bounded ethicality” can compromise the expression of all four basic virtues. These factors include situational forces (financial incentives, stress, etc.), peer pressure (unethical group norms), authorities (unethical managerial decisions), implicit biases, value-action gaps, and moral disengagement. The question, then, is how to deal with these effects. A practicable AI ethics framework must also be sound from a moral psychology perspective. To this end, the paper proposes augmenting the four basic AI virtues with two second-order AI virtues: prudence and fortitude. These second-order AI virtues are meant to offer the best possible defense against the effects of bounded ethicality. Prudence means practical wisdom: a high degree of self-understanding, encompassing the ability to identify the effects of bounded ethicality in one’s own behavior. It is thus the counterweight to the many hidden psychological forces that can impair ethical decision-making.

Fortitude, in turn, is the will to stick to moral ideals and moral responsibilities, potentially against all odds. In the face of powerful situational forces or peer influences, this can become very difficult; fortitude is supposed to counteract these forces. Ultimately, the second-order AI virtues enable practitioners to live up to the basic AI virtues.

Cultivating virtues

AI virtues can be cultivated in a specific organizational and cultural context, promoted and implemented through measures for ethics training. This comprises several steps. First of all, and most obviously, it means sharing knowledge of the six AI virtues with practitioners. It also means publicly committing to particular virtues.

Moreover, it means fostering practitioners’ self-efficacy so that they feel they can have a tangible impact on ethically relevant issues. It further means setting up audits and discussion groups where practitioners can reflect on professional choices or ethical issues and receive critical feedback. Systemic measures to implement AI virtues comprise positive leader influence, an ethical climate and working culture in organizations, the reduction of stress and pressure, and organizational openness to external critique. An increase in the proportion of women is needed as well, since a great number of studies show that women are more sensitive to, and less tolerant of, unethical activities than men. With such measures for ethics training, the framework of the four basic AI virtues (justice, honesty, responsibility, and care) and the two second-order AI virtues (prudence and fortitude) can be put into practice in organizations researching and developing AI systems.

Between the lines

The paper’s goal is to outline how virtues can support putting AI ethics into practice. Virtue ethics focuses on an individual’s character development, and character dispositions provide the basis for professional decision-making. On the one hand, the paper considers insights from moral psychology on the many pitfalls that undermine the motivation of moral behavior. On the other hand, it draws on virtue ethics rather than deontological ethics to promote not only four basic AI virtues but also two second-order AI virtues that help circumvent “bounded ethicality” and one’s vulnerability to unconscious biases. The basic AI virtues comprise justice, honesty, responsibility, and care; each motivates a kind of professional decision-making that builds the bedrock for fulfilling all the AI-specific ethics principles discussed in the literature. In addition, the second-order AI virtues, prudence and fortitude, can be used to overcome specific effects of bounded ethicality that can stand in the way of the basic AI virtues: biases, value-action gaps, moral disengagement, situational forces, unethical peer influences, and the like. In sum, the paper can be read as an initial proposal to reorient AI ethics, consider insights from moral psychology, and make effectiveness a top priority.

