Montreal AI Ethics Institute

Ethics in the Software Development Process: from Codes of Conduct to Ethical Deliberation

May 26, 2022

🔬 Research Summary by Jan Gogoll, a postdoctoral researcher at the Bavarian Institute of Digital Transformation and an Affiliate Researcher at the Technical University of Munich with an interest in ethics, especially the ethics of AI and digitization in general. He is an experimental economist by training (PhD) and a philosopher at heart (B.A.).

[Original paper by Jan Gogoll, Niina Zuber, Severin Kacianka, Timo Greger, Alexander Pretschner & Julian Nida-Rümelin]


Overview: This article disentangles ethical considerations that can be performed at the level of the software engineer from those that belong in the wider domain of business ethics. The handling of ethical problems that fall within the engineer's responsibility has traditionally been addressed by publishing Codes of Ethics and Conduct (CoCs). We argue that these codes are barely able to provide normative orientation for ethical decision making (EDM) in software development.


Introduction

Software developers (SDs), designers, and decision-makers are increasingly expected to consider ethical values and conduct normative evaluations when building digital products.

While it seems inappropriate and short-sighted to shift responsibility entirely onto developers, software companies still feel the need to address these issues and promote ethically informed development for two main reasons. First, companies face backlash from unethical software, both legally and reputationally. Second, companies and their employees have an intrinsic motivation to create better, ethically sound software because it is the right thing to do.

This article clarifies the domain: not every ethical challenge a software company faces should be dealt with at the level of the software developer (or the development team). In fact, many possible ethical issues, for instance, the question of whether a specific software tool should be developed at all, fall into the wider domain of business ethics.

Codes of Ethics/Conduct are the common approach to assisting software engineers in EDM. We provide five reasons why CoCs are insufficient to successfully guide software engineers. Finally, we argue that an approach built on ethical deliberation may be a way to enable SDs to build “ethically sound” software.

Key Insights

The Responsibility of Ethical Decision Making in Software Companies

It is important to define the domain, the scope, and the limits of the ethical considerations that SDs can perform. Many issues that seem to be the result of software (and its development and use) are actually the result of certain business models and the underlying political, legal, and cultural conditions. Yet after a business decision, including its ethical considerations, has been made at the management level, development teams still have some leeway in deciding exactly how to develop the product. Once the development phase is reached, the decision to build a product has already been made, the business model has been chosen, and specific demands have been outlined. Any remaining ethical questions must be dealt with by SDs.

Of course, companies and corporate cultures differ, which in turn influences the degree of management’s involvement and the extent to which it fosters ethical decision making at the development level. Still, the developer usually has the greatest influence in translating ethical considerations into the product when it comes to implementing the predefined parameters in software. SDs are usually not specifically educated in ethics and have not had intensive training or experience in this domain. A prominent method to address the mismatch between this lack of ethical training and the impact a product might have has been the publication of CoCs.

Codes of Ethics/Conduct in Software Development

Codes of Ethics/Conduct are intended to provide guidance to engineers who face ethically relevant issues and to give them an overview of desirable values and principles. Well over 100 different examples exist (governmental, private sector, NGOs). CoCs converge on some core values but differ in the emphasis they put on those values as well as on the respective subvalues. They range from very abstract core values (such as justice or human dignity) to detailed definitions of technical approaches (data differentiation…). Governmental CoCs, for example, support general and broad moral imperatives such as “AI software systems need to be human-centric,” whereas corporations tend to favor compliance issues when addressing privacy.

Shortcomings of Codes of Ethics/Conduct 

The majority of CoCs agree on core values such as privacy, transparency, and accountability. Yet, CoCs diverge as soon as this level of abstraction must be supplemented with application-specific details. We identify five shortcomings:

1. The Problem of Underdetermination

The values stated in CoCs take on the role of general statements, which on their own cannot provide practical guidance. They are often underdetermined insofar as they cannot give clear instructions on what ought to be done in any specific case. As a result, CoCs lack practical applicability because they do not offer normative orientation for specific ethical challenges. This is especially true when values collide (e.g., privacy vs. transparency).

2. Cherry‑picking Ethics

Many different actions can be justified with recourse to various values from the same CoC (e.g., individual privacy vs. societal welfare). The CoC then becomes a one-stop shop offering an array of ethical values to choose from depending on which principle or value is (arbitrarily) deemed relevant in a certain situation.

3. Risk of Indifference

Because CoCs are often underdetermined, any one particular CoC can be used to justify different and even contradictory actions. Many CoCs thus risk fostering ethical indifference. Additionally, most CoCs state obvious and uncontroversial values and goals. Their generic nature leaves readers with the impression that gut feeling and practical constraints should have the final say when it comes to trade-offs.

4. Ex‑post Orientation

Since CoCs provide values that are underdetermined, they have little influence on the development process: values are not process-oriented and do not logically include the means by which they can be achieved. As a result, values are often considered only after the fact and retrofitted to actions already taken, rather than guiding action in the first place.

5. The Desire for Gut Feelings

The underdetermination of values, due to their universal character, makes it impossible to deduce all possible specific applications of a given value. Therefore, SDs may make a rather arbitrary and impromptu choice of the values they comply with: picking whatever value is around or, as economists would say, in the engineer’s relevant set, which often justifies actions they want to believe are right (motivated reasoning).

Between the lines

There is a gap in the literature on how to motivate software engineers to consider values while designing, developing, or maintaining digital artifacts. Theoretically, the discussion has already established what the problems are; practical solutions, however, are rare and difficult to implement. CoCs are an easy but insufficient approach. Proactive and discursive ethics are important, but we need to ensure their continued use by focusing on organizational management structures. More work needs to be done here: finding methods and theories that empower all participants to commit themselves to ethical deliberation before, during, and after development.

