🔬 Research summary by Connor Wright, our Partnerships Manager.
[Original paper by Lee Rainie, Janna Anderson and Emily A. Vogels]
Overview: How would you answer the following question: “By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good?” An overwhelming majority (68%) say no, and there are more than just ethical reasons why this is the case.
How would you answer the following question: “By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good?” A resounding 68% of the experts consulted for this research paper answered no. Positive outlooks are few and far between in the research presented, despite some clear examples of progress. So, let’s look into why that is the case.
Ethics is both vague and subjective
One prevalent theme throughout this piece is the frustratingly vague and subjective nature of ethics. There is no consensus over what ethical AI looks like, nor is there any agreement over what counts as a moral outcome. In this sense, it could rightly be said that our ethical frameworks are only ‘half-written books’, missing the crucial pages and chapters needed to guide us. As a result, ethics turns out to be an iterative rather than a dogmatic process, requiring us to accept that we cannot know all the potential outcomes and answers of a situation in advance. Unfortunately, this does not bode well for attempts to encode ethical systems into AI.
What I mean by this is that real-life situations are often too context-dependent to programme into an ethical AI framework, and actual ethical dilemmas rarely possess a single correct answer. For example, views of what is ethical differ worldwide: a country such as China values social stability more highly than, say, many Western countries do. Thus, when AI is applied (such as in warfare), it is unlikely that both sides of a conflict would employ the same ethical framework. Finding a common ethical thread could help fuse a potentially fractured approach to AI regulation, and I believe that thread lies in identifying the human in the AI process.
Identifying the human in the process
Here, the paper rightly points out that the claim that technological solutions are better than human solutions, because they’re based on ‘cold computing’ rather than ‘emotive human responses’, is false. Instead, perhaps when we talk about AI ethics, we should be referring to human ethics mediated through AI. By this, I mean that there are no inherently good or evil mathematical functions; it is the human presence that determines the ethical propensity of an AI application. The obligation to be moral lies in the hands of corporations and system designers rather than in what the AI itself does.
As a result, the role humans play in ‘feeding and nurturing’ their AI must be acknowledged. Supplying the system with adequate data to train on and proper privacy protections are two ways in which this role can be carried out meaningfully. Without such measures in place, AI has the potential to become the medium through which our lack of understanding of human bias, and bias itself, is expressed. One environment in which this has become all too apparent is AI innovation.
Ethics doesn’t drive AI innovation
Effective AI has been prioritised over ethical AI. Looking at facial recognition systems such as Amazon’s Rekognition and IBM’s offerings, it becomes clear that companies are prioritising the wrong ‘E’ word: effectiveness rather than ethics. Techno-power has thus become the main driver behind the pursuit of AI instead of ethical considerations. As a consequence, the few at the helm of AI innovation have spread the techno-solutionist mindset throughout the practice, allowing AI to become the latest means of masquerading and hiding the business interests and biases of the institutions and people involved. In this sense, AI has become the digital representation of the collective corporate mindset, meaning that, as some experts in the paper observed, so long as AI is owned, those who have access to it will benefit and those who do not will suffer the consequences.
Given this, it is now worth stepping back to see the wood for the trees and observe what AI is at its core.
Taking AI as it really is
One of the lures of AI is how it almost creates its own separate reality, filled with the promise of what could be in a world apart from our current one. However, this distracts from what AI is in essence. AI applications in sectors such as law enforcement do what they’re told to do: they possess neither a moral compass nor social awareness. In this sense, AI lacks contextual understanding as it sets out to achieve its goal. To illustrate, the paper describes how an AI tasked with keeping you dry would have no qualms about stealing an umbrella from an old lady in the street when it starts to rain. Recognising AI as a tool, or even going as far as calling it an extension of previous statistical techniques and innovations, could help cut away the confusing mist surrounding the technology. Viewing it as a tool can then help shape its future applications, including the incentives to action it brings with it.
The problem of incentives
One potential way to correct the corporate prioritisation of efficiency mentioned above could be to examine what incentivises businesses to act this way. The experts involved in the paper observe how, in its current state, the corporate world gains no benefit from coordinating on ethical AI, with businesses tending to prioritise efficiency, scale and automation over augmentation, inclusion and local context. If these incentives can be realigned, there is certainly a bright side to AI.
AI has shown promise in education and health, allowing the prioritisation of accessible and necessary digital skills in education programmes, as well as improving the accuracy of certain diagnoses. The paper also observes how, the more we develop AI, the more we appreciate the unique traits and special qualities of humans that are so hard to code. Qualities such as compassion, contextual understanding and decision-making are common across the human world, meaning that AI could also prove the medium through which we bridge the conversation between countries. While these positives are few in the paper, they are worth keeping in mind nonetheless.
Between the lines
From my perspective, the kind of humans we want to be should be reflected in how we go about designing our AI systems. In this sense, designers should avoid cheap and subversive techniques that sidestep complicated issues like justice, and should prioritise the social good and social infrastructure over innovation and the interests of governments. For me, this comes through acknowledging the human in the process, both as the protagonist in the AI process and as the eventual recipient of both its positives and its negatives.