Montreal AI Ethics Institute

Democratizing AI ethics literacy


Ethics in the Software Development Process: from Codes of Conduct to Ethical Deliberation

May 26, 2022

🔬 Research Summary by Jan Gogoll, a postdoctoral researcher at the Bavarian Institute of Digital Transformation and an Affiliate Researcher at the Technical University of Munich with an interest in ethics, especially the ethics of AI and digitization more generally. He is an experimental economist by training (PhD) and a philosopher at heart (B.A.).

[Original paper by Jan Gogoll, Niina Zuber, Severin Kacianka, Timo Greger, Alexander Pretschner & Julian Nida-Rümelin]


Overview: This article disentangles ethical considerations that can be performed at the level of the software engineer from those that belong to the wider domain of business ethics. Ethical problems that fall within the responsibility of the engineer have traditionally been addressed through the publication of Codes of Ethics and Conduct (CoCs). We argue that these Codes are barely able to provide normative orientation for ethical decision making (EDM) in software development.


Introduction

Software developers (SDs), designers, and decision-makers are increasingly expected to consider ethical values and conduct normative evaluations when building digital products.

While it seems inappropriate and short-sighted to shift responsibility entirely to developers, software companies still feel the need to address these issues and promote ethically informed development for two main reasons. First, companies face backlash from unethical software, both legally and reputationally. Second, companies and their employees have an intrinsic motivation to create better, ethically sound software because it is the right thing to do.

This article clarifies the domain: not every ethical challenge a software company faces should be dealt with at the software developer level (or the development team level). In fact, many possible ethical issues, for instance the question of whether a specific software tool should be developed at all, fall into the wider domain of business ethics.

The common approach to assisting software engineers in EDM has been Codes of Ethics/Conduct. We provide five reasons why CoCs are insufficient to successfully guide software engineers. Finally, we argue that an approach built on ethical deliberation may be a way to enable SDs to build “ethically sound” software.

Key Insights

The Responsibility of Ethical Decision Making in Software Companies

It is important to define the domain, scope, and limits of the ethical considerations that can be performed by SDs. Many issues that seem to be the result of software (and its development and use) are actually the result of certain business models and the underlying political, legal, and cultural conditions. Yet after a business decision involving ethical considerations has been made at the management level, development teams still have some leeway in deciding exactly how to develop the product. By the development phase, the decision to build a product has already been made, the business model has been chosen, and specific demands have been outlined. Any remaining ethical questions must be dealt with by SDs.

Of course, companies and corporate cultures differ, which in turn influences the degree of management’s involvement and the extent to which it fosters ethical decision making at the development level. Yet when it comes to implementing predefined parameters in software, the developer usually has the greatest influence on how ethical considerations are translated into the product. SDs are usually not specifically educated in ethics and have not had intensive training or experience in this domain. A prominent method to address the mismatch between this lack of ethical training and the impact a product might have has been the publication of CoCs.

Codes of Ethics/Conduct in Software Development

Codes of Ethics/Conduct are intended to provide guidance to engineers who face ethically relevant issues and to give them an overview of desirable values and principles. Well over 100 different examples exist (governmental, private sector, NGOs). CoCs converge on some core values, but differ in the emphasis they put on those values as well as on the respective subvalues. CoCs range from very abstract core values (such as justice or human dignity) to detailed definitions of technical approaches (data differentiation…). Governmental CoCs, for example, support general and broad moral imperatives such as “AI software systems need to be human-centric,” whereas corporations tend to favor compliance issues when addressing privacy.

Shortcomings of Codes of Ethics/Conduct 

The majority of CoCs agree on core values such as privacy, transparency, and accountability. Yet, CoCs diverge as soon as this level of abstraction must be supplemented with application-specific details. We identify five shortcomings:

1. The Problem of Underdetermination

The values stated in CoCs take on the role of general statements, which on their own cannot provide practical guidance. They are often underdetermined insofar as they cannot give clear instructions on what ought to be done in any specific case. As a result, CoCs lack practical applicability, because they do not offer normative orientation for specific ethical challenges. This is especially true when values collide (e.g., privacy vs. transparency).

2. Cherry‑picking Ethics

Many different actions can be justified with recourse to various values from the same CoC (e.g., individual privacy vs. societal welfare). The CoC then becomes a one-stop shop offering an array of ethical values to choose from depending on which principle or value is (arbitrarily) deemed relevant in a certain situation.

3. Risk of Indifference

Because CoCs are often underdetermined, any one particular CoC can be used to justify different and even contradictory actions. Many CoCs thus risk fostering ethical indifference. Additionally, most CoCs state obvious and uncontroversial values and goals; their generic nature leaves readers with the impression that gut feeling and practical constraints should have the final verdict when it comes to trade-offs.

4. Ex‑post Orientation

Because the values CoCs provide are underdetermined, they have little influence on the development process: values are not process-oriented and do not logically entail the means by which they can be achieved. As a result, values are often considered only after the fact and adapted to actions already taken, rather than aligning action in the first place.

5. The Desire for Gut Feelings

The underdetermination of values, owing to their universal character, makes it impossible to deduce every specific application of a given value. SDs may therefore make rather arbitrary and impromptu choices about which values to comply with: picking whatever value is at hand (or, as economists would say, in the engineer’s relevant set), which often justifies actions they want to believe are right (motivated reasoning).

Between the lines

There is a gap in the literature on how to motivate software engineers to consider values while designing, developing, or maintaining digital artifacts. Theoretically, the discussion has already established what the problems are; practical solutions, however, are rare and difficult to implement. CoCs are an easy but insufficient approach. Proactive and discursive ethics are important, but we need to ensure their continued use by focusing on organizational management structures. More work needs to be done here: finding methods and theories that empower all participants to commit themselves to ethical deliberation before, during, and after development.


  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.