Research summary by Connor Wright, our Partnerships Manager.
[Original paper by Dave Lauer]
Overview: When an AI system fails, the fix is often sought in the broken part rather than in the system that allowed the error to occur. The paper advocates for a more systemic examination of the AI process, an argument that makes more sense the more you think about it.
Introduction
Would Aristotle have bought into AI ethics? Or does AI ethics sit as an entity separate from all that has gone before it? Given AI ethics’ rise in popularity, it has often been held in its own regard, with special mentions of AI principles at big corporations like Facebook and Google. Nevertheless, the answer to the question “can AI ethics exist in a vacuum?” is a resounding no. An examination of an “unethical AI” problem needs to be systemic and aware of the incentives involved in the process, rather than just a hunt for the “broken part”. Thus, let’s first look at why AI ethics does not exist in a vacuum, with a comparison to medical ethics along the way.
Key Insights
AI ethics does not exist in a vacuum
The key notion I found in this piece was how AI ethics cannot come about without an ethical environment to surround it. As seen in medical ethics, the AI ethics space comes into contact with a whole host of issues also touched upon by other fields. Take, for example, the issues of autonomy and moral responsibility, which occupy AI ethics today just as they have occupied philosophy for the past 500+ years. Hence, without an all-encompassing ethical approach, the subfield of AI ethics quickly becomes isolated and ineffective.
In this sense, given AI ethics’ ties to an overall ethical environment, we need to examine the system as a whole when something goes wrong with an AI system. Here, systems thinking is introduced: the relationships between the parts of a process/product are what matter, not just the individual parts themselves. In other words, if an AI system fails, don’t just examine its features; examine its ecosystem.
The broken part fallacy
Tying into this last point, the “broken part fallacy” is introduced. Concerning how humans examine problems, the fallacy lies in seeing that a system/product has malfunctioned and searching for the single broken part whose repair will resolve the issue. Such an approach treats the problem as something isolated, which won’t necessarily fix it if it’s systemic. Looking for a broken part treats a systemic problem as simpler than it is, given the complex, interactional nature of an ecosystem.
Hence, finding a malfunction in an AI system will not automatically fix its problem of being unethical. Instead, a thorough look at how that unethical behaviour got past the checks and balances is required, especially surrounding the product’s deployment into social and cultural contexts.
The importance of social and cultural sensitivity
When examining the systemic nature of an AI’s deployment, what is discovered to require change is often more abstract than a simple “broken part”. Listening to those closest to the problem, rather than imposing top-down legislation, is an excellent first step. It offers those who designed the AI product a closer look at the situation, cultivating a more trusting relationship.
The question of incentives
The next question is whether businesses can enact this kind of approach and whether they are incentivised to do so. The incentives created by law and policy are a good starting point: we can examine whether there is a legislative push behind specific actions that can be deemed “ethical”.
Such examinations can then expose the type of ownership within a business. To illustrate, Facebook operates on an Absentee Ownership model, whereby the “locus of control and locus of responsibility are different”. In Facebook’s case, the company controls what is allowed on its platform but does not have legal responsibility for the content that’s eventually put on there. Given this, an AI ethics programme coming out of Facebook would not prosper without the company sharing in the locus of responsibility. Instead, ethical frameworks need to be part of the company’s ethos and not just something to be checked off a list. AI ethics can then be a branch of central ethical practices and frameworks instead of holding its own fort.
Between the lines
I very much share the view that AI ethics is not born in a vacuum. I liken it to conversations about bias in AI systems: if the humans programming the AI product have their own biases, we can only expect some of these to turn up in the AI system. The aim is then to mitigate the harm produced when these biases take hold. Applied to our present context, I would not be surprised if a company with a flawed ethical approach created an “unethical AI”. Without self-reflection on the AI process itself, the reason why an AI produces the “unethical” behaviour that it does will remain an even darker black box. Hence, before looking for the broken part, we should ask ourselves how it got there.