🔬 Research Summary by Thilo Hagendorff, an AI ethicist at the University of Tuebingen (Germany).
[Original paper by Thilo Hagendorff]
Overview: A virtue-based approach specific to the AI field is a missing component in putting AI ethics into practice, as virtue ethics has often been disregarded in AI ethics research thus far. To close this research gap, a new paper describes how a specific set of “AI virtues” can serve as a further step in the practical turn of AI ethics.
Introduction
Hitherto, all the major AI ethics initiatives have chosen a principled approach: they aim to affect AI research and development by stipulating lists of rules and standards. But, as a growing body of work in AI metaethics shows, this approach has many shortcomings. It has no enforcement mechanisms; it is not sensitive to different contexts and situations; it fails to address the technical complexity of AI; and it uses terms and concepts that are too abstract to be put into practice. To address the last two shortcomings, AI ethics has recently undergone a practical turn, stressing the will to put principles into practice. Yet the resulting typologies and guidelines on how to do so remain entirely within the principled approach. A hitherto largely underrepresented approach, namely virtue ethics, seems to be a promising addition to AI ethics’ “principlism.”
Key Insights
Basic AI virtues
By using meta-studies on AI ethics guidelines, one can correlate principles with virtues by asking: Which virtues A, B, C translate into behavior that is likely to result in outcomes corresponding to the requirements of principles X, Y, Z? Meta-studies show that there seems to be a consensus in AI ethics on a relatively fixed set of recurring principles that are requirements for ethical AI, among them fairness, privacy, safety, accountability, explainability, and several more. From these principles, four basic AI virtues can be distilled: justice, honesty, responsibility, and care. Surprisingly, these four virtues suffice to cover all established principles.
Justice as a virtue underpins motivations to develop fair machine learning algorithms and to use AI technologies only in societal contexts where applying them is fair. Honesty fosters efforts to establish transparency regarding organizational structures; it also promotes the willingness to provide explainability or other forms of technological transparency. Responsibility leads to a heightened sense of accountability for AI technologies. Responsibility gaps, which are especially prevalent in the context of AI, can lead to unethical behavior; responsibility as a virtue is a counterweight to that. Care means developing a sense of others’ needs and the will to address them. It builds the bedrock for motivating professionals to prevent AI applications from causing direct or indirect harm.
Second-order AI virtues
However, basic AI virtues alone do not suffice to render AI ethics actionable. Even when practitioners possess them, ethical decision-making in practice faces many limitations. Factors of “bounded ethicality” can compromise the exercise of all four basic virtues. These factors include situational forces (financial incentives, stress, etc.), peer pressure (unethical group norms), authorities (unethical managerial decisions), implicit biases, value-action gaps, moral disengagement, and more. With this in mind, the question of how to deal with these effects remains. A practicable AI ethics framework must also be informed by moral psychology. To that end, the paper proposes to augment the four basic AI virtues with two second-order AI virtues, namely prudence and fortitude. These second-order AI virtues are supposed to offer the best possible defense against the effects of bounded ethicality. Prudence means practical wisdom: a high degree of self-understanding. It encompasses the ability to identify the effects of bounded ethicality in one’s own behavior. Hence, it is the counterweight to the many hidden psychological forces that can impair ethical decision-making.
Fortitude, in turn, is the will to stick to moral ideals and moral responsibilities, potentially against all odds. In the face of powerful situational forces or peer influences, this can become very difficult; fortitude is supposed to counteract these forces. Ultimately, the second-order AI virtues enable practitioners to live up to the basic AI virtues.
Cultivating virtues
AI virtues can be cultivated in a specific organizational and cultural context, promoted and implemented via measures for ethics training. This comprises several steps. First, and most obviously, it means sharing knowledge of the six AI virtues with practitioners. It also means publicly committing to uphold particular virtues.
Moreover, it means fostering practitioners’ self-efficacy so that they feel they can have a tangible impact on ethically relevant issues. Furthermore, it means setting up audits and discussion groups where practitioners can reflect on professional choices or ethical issues and receive critical feedback. Systemic measures to implement AI virtues comprise positive leader influences, an ethical climate and working culture in organizations, the reduction of stress and pressure, and organizational openness to external critique. An increase in the proportion of women is needed as well, since a great number of studies show that women are more sensitive to, and less tolerant of, unethical activities than men. With such measures for ethics training, the framework of four basic AI virtues (justice, honesty, responsibility, and care) and two second-order AI virtues (prudence and fortitude) can be put into practice in organizations researching and developing AI systems.
Between the lines
The goal of the paper is to outline how virtues can support putting AI ethics into practice. Virtue ethics focuses on an individual’s character development; character dispositions provide the basis for professional decision-making. On the one hand, the paper draws on insights from moral psychology about the many pitfalls that beset the motivation of moral behavior. On the other hand, it uses virtue ethics rather than deontological ethics to promote and foster not only four basic AI virtues but also two second-order AI virtues that can help circumvent “bounded ethicality” and one’s vulnerability to unconscious biases. The basic AI virtues comprise justice, honesty, responsibility, and care; each motivates a kind of professional decision-making that builds the bedrock for fulfilling the AI-specific ethics principles discussed in the literature. In addition, the second-order AI virtues, prudence and fortitude, can be used to overcome the specific effects of bounded ethicality that stand in the way of the basic AI virtues: biases, value-action gaps, moral disengagement, situational forces, unethical peer influences, and the like. In sum, the paper can be read as an initial proposal to reorient AI ethics, incorporate insights from moral psychology, and make effectiveness a top priority.