Healthcare automation

The Care in Healthcare May Not Be So Easily Automated

Since last January, residents in London have had a new way of getting medical assistance through their phones: Britain’s National Health Service (NHS) has been offering a non-emergency help line through the chat-bot app Babylon, which analyzes callers’ inquiries in order to assess the urgency of a case. Each assessment takes around a minute and a half, the Financial Times reports, compared to more than ten minutes with a human operator—who may not necessarily be clinically trained.

Healthcare is an expensive industry. Whether in public or private hands, it’s about spending resources on those who are—temporarily or permanently—unable to contribute to productivity. Couple this with the pressure of an aging population globally (22 percent of people worldwide will be sixty or older by 2050, according to the Pew Research Center, up from eight percent in 1950) and promises of cost reduction from automation become very alluring to decision-makers. “We’re so enthralled by technology that we tend to get [the product] first and then work out whether it’s value for money or effective or safe,” says Jonathon Tomlinson, a physician at the UK-based Centre for Health and the Public Interest. He says that in chronically underfunded public health systems, like Britain’s, politicians “are more likely, paradoxically, to take a gamble on something that looks exciting and completely different, like automated robot nurses, or something like that.”

The practice where Tomlinson works, in East London, has already been experimenting with automated interfaces in the form of eConsult WebGP, a tool that helps patients assess symptoms and connect with their physicians. Tomlinson’s satisfaction with the experience, however, is mixed. “It takes a huge amount of human effort to make this system work, which exceeds [what] is saved in terms of people who were able to solve their problems [by themselves] instead of booking an appointment…. ‘We got this fantastic robot that does the work of two men, but unfortunately it takes three to work it.’ The WebGP thing is like that.”

Like Tomlinson, Tobias Gantner is a doctor, and though he heads a think-tank boldly named “Healthcare Futurists,” he maintains a healthy skepticism in discussing care technologies. “We always need an answer to the ‘why’ question—why are we doing this? Is this to make life easier for doctors, to make healthcare cheaper for insurance companies? [Or is it] to make life easier for a patient?” A flawed experience like WebGP’s, in Gantner’s eyes, is simply the result of a faulty implementation of technology. In contrast, he points out the benefits a well-oiled system could bring to a doctor’s office; for example, a voice-recognition system could transcribe a conversation with a physician, allowing the patient to go through the salient points again at home.

In Tomlinson’s vision, automation’s main purpose should not be to supplement and eventually replace a doctor’s or a nurse’s job. Rather, it should attempt to liberate medical professionals from menial tasks, like typing or form-filling. Indeed, as early as the late ’80s, a survey from Southwest Texas State University recognized that the tasks most suited to robotization would be back-office ones. “If we carry this [process] out,” Gantner says, “we will free doctors’ and nurses’ time to physically interact with people, to actually do what only human beings can do…. I firmly believe, being a doctor myself, that a doctor or a nurse is a remedy: we don’t always need to pop pills as a first choice—we need human interaction. Some people come to the doctor because they just want to talk, and we need to serve this purpose.”


But the industry’s focus is not just on the traditional hubs of medicine, like hospitals and care homes. Perhaps the biggest interest is in new ways to bring care into the patient’s home, with everything that implies in terms of bed occupancy, human resources, and self-sufficiency. “There’s a lot of focus on that in the US,” says Matthew Holt, co-chairman of Health 2.0, a global conference network for the med-tech industry. He explains that under Obamacare, if a patient is readmitted to a hospital less than thirty days after being discharged, the hospital is subject to a fine. “They used not to care at all, they just got paid again—but now there’s a lot more saying, ‘What can we do to keep that patient healthy?’ I think that [trend] will continue…especially for people with chronic illnesses.”

Regarding this new generation of tele-health, Gantner recalls how in the early days, “you needed to install machines in people’s rooms, so cameras, surveillance systems, [but] obviously this [was] stigmatizing.” Now, instead, he sees a whole new effort to make the process non-intrusive and integrated with everyday life. He gives the example of energy companies, which had “no claim in healthcare whatsoever” and are now “coming to the scene and saying, ‘Look, we sell electricity, we have smart-meters installed in your house, and we know, by the digital signature of your device, what you’re using.’…. Their idea is to apply algorithms, and if you have a deviation by, say, five percent or ten percent, then something has probably happened, and we give you a call—and if you don’t answer it then we give somebody else a call, maybe your son or your daughter, to check whether anything has happened to you.”
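Purely as an illustration of the kind of check Gantner describes (a personal consumption baseline, a deviation threshold of five or ten percent, and an escalating call chain), a minimal sketch in Python might look like the following; the function names, readings, and contact list are hypothetical and stand in for whatever a real smart-meter system would use.

# Illustrative sketch only: compares the latest daily energy reading with a
# personal baseline and flags any deviation beyond a configurable threshold.
from statistics import mean

def usage_deviates(latest_kwh, baseline_kwh, threshold=0.10):
    """Return True if the latest reading deviates from the baseline mean
    by more than `threshold` (0.10 = ten percent)."""
    baseline = mean(baseline_kwh)
    return abs(latest_kwh - baseline) / baseline > threshold

def notify(contacts):
    """Placeholder escalation: a real system would ring the resident first,
    then relatives, until somebody answers."""
    for person in contacts:
        print(f"Calling {person}...")

# Example: a normal week of readings, then a sharp drop in usage.
baseline = [8.2, 7.9, 8.4, 8.1, 8.0]   # kWh per day
if usage_deviates(latest_kwh=4.1, baseline_kwh=baseline, threshold=0.10):
    notify(["resident", "daughter", "son"])

In practice, the “digital signature” matching Gantner mentions would be far more involved, but the thresholding and escalation logic is the core of the idea.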

Intriguing as the concept might be, it is also somewhat disquieting, which is why, Gantner himself adds, talking about it publicly is crucial. Holt, of Health 2.0, says there are a number of reasons why people might want to opt out of “connected care.” “There’s a lot of concern in the US about data being used to discriminate against you for access to health insurance. That went away under Obamacare, but it might come back under proposed legislation from the Republicans…. The most problematic aspect is: Is somebody going to tell me what to do, and interfere with my life in ways that I don’t like, because they think I’ll add to healthcare costs?”

What’s more, considering that the elderly are among the core target users of assistive technologies, it becomes a moral imperative to obtain their informed consent and to listen to their sentiments about proposed care solutions. “People who self-track are usually very fit young men who just want to be reminded of how healthy they are,” says Tomlinson, of the practice in East London. “You look at how much health policy is driven by healthy young men, and how much…by women and elderly, unhealthy people, and very poor people. It’s the polar-opposite demographics. They start with different assumptions.”

Tomlinson says that just as some patients might actually prefer to put a screen between themselves and their caregiver—“I have lots of patients who will refuse home visits because [they don’t want to be seen] in such a bad [condition]”—others might find the experience unbearable: “To have a robot who perhaps can spot the fact that you’ve been incontinent, that you haven’t taken your medication, that you haven’t been to the toilet to empty your bowels for three days…. It’s like a medical gaze [that’s been] massively digitally augmented—there’s no hiding from it.”

Even in healthcare systems that are bursting at the seams, like Britain’s, the discourse should not be reduced to a dilemma: robotize the workforce or face the collapse of the system. “I don’t think you can start from a ‘better than nothing’ argument,” says Tomlinson. “People often do choose nothing…because they don’t like what’s on offer—they don’t want chemotherapy, for example. People opt to have alternative therapies instead, or simply not go to the doctor—and people might choose nothing over a robot.”