Research summary by Connor Wright, our Partnerships Manager.
[Original paper by Olivia Gambelin]
Overview: The position of AI Ethicist is a recent arrival to the corporate scene, and one of its defining demands is bravery. Whether the role is taken seriously or treated as a PR stunt, the ability to decipher right from wrong must be matched by the courage to act on it.
Introduction
The position of AI Ethicist is a recent arrival to the corporate scene. Tasked with the ethical evaluation of AI systems, the person in this role may at times feel lonely. Being potentially the only objector to the deployment of an AI product that could earn your company a healthy profit is a scary thought, no matter how sure you are. Hence, it is important to note that the AI Ethicist's role requires bravery. Yet, the AI Ethicist is not the only agent operating in the Ethical AI space.
Key Insights
AI Ethics is not just for the AI Ethicist
An important distinction is that an AI Ethicist is not the only one who engages in AI Ethics. With AI stretching into multiple walks of life and business practices, a sole AI Ethicist could not capture all the perspectives that need to be considered. Hence, technologists, data scientists, lawyers, and the public form part of the field's multidisciplinary nature. Different backgrounds are better suited to identifying different types of ethical risk, be it a lawyer spotting a tricky definition used to describe an AI system, or a member of the public explaining how the system would affect their life.
An example involving autonomous vehicles illustrates this more clearly. While an Ethicist can comment on the traditional Trolley Problem, data engineers must also understand how to translate that thinking into code. Furthermore, consulting the broader public helps clarify the requirements these vehicles are meant to fulfil, especially for the older population. All in all, just because the AI Ethicist's job title sits closest semantically to AI Ethics doesn't mean they are the sole actor in the space.
The role of an AI Ethicist
Nevertheless, an AI Ethicist still has a role to fill within the field. The job includes potentially being the only member of a team to veto an AI product that could earn the company a healthy profit. Whilst other team members could be "silenced by a profit margin", an AI Ethicist is expected to draw on moral principles to help decipher what is right and wrong within an AI context before applying that reasoning to concrete examples. The conclusion then needs to be presented empathetically so as not to provoke defensive responses.
It is also the AI Ethicist's responsibility to maintain objectivity in ethically charged situations throughout this process. As a result, the Ethicist may become the default authority for assigning responsibility when consulted on where an AI product's potential ethical faults lie. To do this effectively, proficiency in the design, development and deployment of the AI system at hand is paramount. This does not mean the Ethicist must be fluent in every ethical system in existence, but rather that they must be fluent in their industrial context.
Part of understanding the context lies in recognising both the logical and the illogical inputs behind a decision. There is no point in simply appealing to logic when trying to explain an illogical decision, which makes awareness a vital tool for an AI Ethicist. One example could be how IBM released its facial recognition technology despite the bias problems that resulted. Here, it doesn't help to ask "why did they release a harmful product?"; rather, we should examine the other factors in the decision. There could have been a lack of information about the potential for bias, or internal company pressure to release the product. It is not the AI Ethicist's job to excuse any form of industry behaviour, but to be sensitive to non-logical factors.
All of this requires bravery.
Why bravery is needed
An AI Ethicist must be prepared to walk into a room as the only person who disagrees with an AI proposal. This also means that the AI Ethicist becomes the focal point of responsibility when discussing ethical decisions and may be used as a scapegoat should the product not be launched. Cases may arise where a moratorium results, placing the blame on society "not being ready" rather than on the AI Ethicist being difficult.
However, the policies that result from a moratorium aren't guaranteed to be water-tight. Some procedures may demand only the bare minimum for a compliant AI product, yet still leave room for an AI Ethicist to give a red light. It could be that a company keeps the raw data for an AI system private from external parties in one national context (as mandated by law) but does not do so elsewhere. So, while the company is technically compliant, the AI Ethicist may still need to step in and warn against damaging its reputation. Doing so requires bravery.
Between the lines
With the AI Ethicist position becoming more and more prominent, certain qualities are required to prevent it from becoming a marketing stunt. The paper claims that bravery is one of them, and I wholeheartedly agree. One thing I believe can help, as mentioned in my last research summary, is having more than one AI Ethicist involved. AI Ethicists spread throughout the company would allow ethical problems to be picked up and discussed far more quickly. Nevertheless, every one of these positions, no matter how many there are, will require bravery.