Mini summary (scroll down for full summary): The adoption of AI-enabled solutions in healthcare has accelerated with the ongoing pandemic. While many concerns have been raised, many quite aptly, these concerns need to be grounded in firm moral principles and foundations before such solutions are dismissed as failing to meet our high standards of care. Another argument put forth is that such technologies could replace human carers, to the detriment of the quality of care patients would otherwise receive.
However, this overlooks the fact that, owing to the high burdens already placed on the healthcare sector, care is often quite low-touch and distanced, so the difference may be smaller than assumed. In fact, AI-enabled solutions might even improve healthcare outcomes by automating routine and repetitive tasks, reducing the burnout experienced by healthcare professionals and letting them concentrate on the aspects of care that machines cannot yet replicate.
The paper carefully evaluates the tradeoffs between using technology and achieving some of the aims of a good life as characterized by the capabilities approach, offering several examples along the way. Especially at a time when there is a rush to pick a solution and deploy it in the healthcare industry to combat the surge in care demand caused by COVID-19, the paper offers guidelines rooted in theory, with practical applications, for making a well-informed choice.
Most concerns about using technology in healthcare center on replacing human labor; technologies that aid humans in delivering care receive much less attention. The ongoing pandemic has brought this into the spotlight, and this paper sets the stage for the ethical issues to watch for when considering AI-enabled technologies in the healthcare domain, and for how to have a discussion grounded in concrete moral principles.
One argument put forth against the use of AI solutions is that they can’t “care” deeply enough about patients, and that is a valid concern: machines lack empathy and the other abilities required for an emotional exchange with humans. But much of the care work in hospitals is routine, and professionalism asks for maintaining a certain emotional distance in the care relationship. Moreover, in places where the ratio of patients to carers is high, carers are unable to provide personalized attention and care anyway. In that respect, human-provided care is already “shallow,” and the author cites research showing that care that is too deep actually hurts the carer when patients recover and move out of their care, or die. Thus, if this is the argument, then we need to examine our current care practices more deeply.
The author also posits that if this is indeed the state of care today, then it is morally less degrading to be distanced by a machine than by a human. In fact, the use of AI to automate routine tasks in the rendering of medical care will actually allow human carers to focus more on the emotional and human aspects of care.
Good healthcare, supposedly the kind provided by humans, lacks firm grounding in the typical literature on the ethics of healthcare and technology. That literature reads more like a list of things not to do than positive guidance on what this kind of good healthcare looks like. Thus, the author takes the view that it must, at the very least, respect, promote, and preserve the dignity of the patient.
Yet this doesn’t provide concrete enough guidance, so we can expand on it: dignity means a) treating the patient as a human, b) treating them as part of a culture and community, and c) treating them as a unique human. To add even more concreteness, the author borrows from work done in economics on the capabilities approach, which states that having the following ten capabilities in their entirety is necessary for a human to experience dignity in their living: life; bodily health; bodily integrity; being able to use the senses, imagination, and thought; emotions; practical reasoning; affiliation; other species; play; and control over one’s environment. Applied to healthcare, this list gives us a good guideline for what might constitute the kind of healthcare we deem should be provided by humans, with or without the use of technology.
Now, the above list might seem too onerous for healthcare professionals, but we need to keep in mind that achieving a good life as characterized by the capabilities approach depends on factors beyond healthcare professionals alone, so the demands above need to be distributed accordingly. The threshold for meeting them should be high, but not so high that they become unachievable.
Principles alone can only give us some guidance on how to act in difficult situations or ethical dilemmas; they don’t determine the outcome, because they are only one element in the decision-making process. We also have to rely on the context of the situation and its moral surroundings. The proposed criteria are to be used in moral deliberation, which should address whether a criterion applies to the situation, whether it is satisfied, and whether it is sufficiently met (in reference to the threshold).
With AI-enabled technology, privacy is usually cited as a major concern, but the rendering of care is decidedly a non-private affair. Imagine a scenario where the connection facilitated by technology meets the social and emotional needs of a terminal patient; if the use of technology allows for a better and longer life, then there can be an argument for sacrificing privacy to meet the needs of the patient. Ultimately, a balance needs to be struck between privacy requirements and other healthcare requirements, and privacy should not be blindly touted as the most important requirement.
Framing the concept of the good life in terms of restoring, maintaining, and enhancing human capabilities, one mustn’t view eudaimonia as happiness but rather as the achievement of the capabilities listed, because happiness in this context would fall outside the domain of ethics. Additionally, the author proposes the Care Experience Machine thought experiment: a machine that can meet all the care needs of a patient. Would it be morally wrong to plug a patient into such a machine? While it might intuitively seem wrong, we struggle to come up with concrete objections. As long as the patient feels cared for and, from an objective standpoint, has their care needs met, it becomes hard to contest how such virtual care differs from real care provided by humans.
If virtual reality technology enables real capabilities, such as freedom of movement and interaction with peers outside one’s care confinement, then the virtual good life enhances the real good life, a distinction that becomes increasingly blurred as technology progresses.
Another moral argument in determining whether to use technology-assisted healthcare is whether it is too paternalistic to decide what is best for the patient. In some cases, where the patient is unable to make decisions that restore, maintain, and enhance their capabilities, such paternalism might be required, but it must always be balanced against other ethical concerns and weighed against the capabilities it enables for the patient.
When we talk about felt care and how to evaluate whether the care rendered is good, we should look not only at the outcomes with which the patient exits the healthcare context but also at the realization of some of the capabilities during the healthcare process. To that end, felt care must also take into account the reciprocity of feeling, which is not explicitly defined in the capabilities approach but nonetheless forms an important part of experiencing healthcare positively from the patient’s perspective.
In conclusion, it is important to evaluate technology-assisted healthcare in depth, based on moral principles and philosophy, yet resting on concrete arguments rather than high-level abstractions, which provide little practical guidance for evaluating different solutions and choosing among them in different contexts. An a priori dismissal of technology in the healthcare domain, even when based on very real concerns like the privacy risks of AI solutions that require a lot of personal data, warrants further examination before arriving at a conclusion.
Original piece by Mark Coeckelbergh: https://link.springer.com/article/10.1007/s10677-009-9186-2