A/IS in Healthcare: Who Benefits?


By Lisa Morgan, Program Director, Content and Community, IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems Outreach Committee

Older Americans remember when they had a personal relationship with their family physician. That was before personal computers, computer networks, mobile devices, the Internet of Things (IoT), and Autonomous and Intelligent Systems (A/IS) became mainstream. Even in the 1980s, doctors still had dual roles as confidants and healthcare providers. Today, healthcare decisions are at least informed by algorithms. The key question is: who benefits from algorithmic decision-making?

“Over the last 30 to 40 years, healthcare has undergone a huge transition in the West,” said Randale Sechrest, MD. “In the old days, doctors were more priests than scientists because the scientific tools they had to work with were pretty primitive, so that relationship between the healer and the patient was very different than it is today. The Hippocratic Oath defined the healer’s duties and obligations to the patient for more than two millennia. In the realm of ethics, duties and obligations are associated with a deontological framework and are always in tension with consequences – a framework associated with utilitarianism.”

The entire U.S. healthcare system has undergone several seismic shifts during the past few decades, fueled by changes in government spending, new laws, new regulations, and accelerating technology innovation. Notably, there has been a shift to “outcome-based” (now called “value-based”) healthcare, which leverages predictive analytics.

Not surprisingly, the terms “outcome-based” and “value-based” healthcare have been criticized for their lack of clarity. Specifically, it’s not always clear what a “good” outcome is or how “value” is defined. For example, Medicare now penalizes hospitals for readmissions within 30 days of discharge. In that case, readmission is considered a “bad” outcome, albeit more from a cost perspective than from a patient’s well-being perspective.

“The core ethical issue is, what are the rules of engagement? How do we actually deliver what we say we’re delivering to an individual?” said Sechrest. “It’s much less abstract when you’re dealing with a discrete individual lying in a hospital bed. That’s a different sort of experience than if you’re a programmer programming an algorithm. When you create an algorithm, you don’t see how the decision you create with the algorithm impacts the individual.”

Population health is slowly, but surely, displacing individual healthcare, despite what the glossy brochures say. Everything is being measured, from individual heartbeats to the number of patients a doctor sees per day to the length of hospital stays. According to the annual Gallup-Sharecare Well-Being Index, individual well-being declined in 21 U.S. states in 2017, the largest year-to-year decline since the index began in 2008.

Outcomes Are Not Just Patient-Focused

Whom does value-based healthcare benefit? The individual, the healthcare provider, the insurance company, or the government? Clearly, there are many simultaneous interests at stake.

“Now it’s a brand of healthcare,” said Sechrest. “I worked for a large, non-profit healthcare system.  They are very cognizant of what their duty is to the individual patient. At the same time, they’re very cognizant of the brand and making sure whatever they deliver is aligned with what the brand represents.”

Integrated healthcare systems are huge technology ecosystems that use algorithmic decision-making to modify patient behavior, among other things.  Specifically, individuals are being encouraged to adopt healthier lifestyle choices to improve their overall well-being.

However, individual patients aren’t the only ones who benefit from a healthier lifestyle. Healthy patients cost insurance companies, healthcare providers, and the government less than sick patients do. So, on one hand, all parties “win” when patients modify their behavior in healthy ways.

“You’re trying to nudge patients into doing things that are good for them, but it’s good for your bottom line too, which becomes somewhat of a conflict,” said Sechrest.

There’s a fine line between behavioral modification and coercion, in other words. At what point might agreeing to be subject to behavioral modification become a condition of getting or maintaining health insurance? Already, insurance companies offer financial incentives to individuals who wear fitness bands. Even before insurance companies knew what to do with the data, the mere fact that someone was willing to wear such a device signaled that the person was likely health-conscious.

“You really don’t understand who’s being manipulated, whether it’s you, the patient, or both of you,” said Sechrest. “Up to this point, we’ve accepted the nice things about algorithmic systems that make our lives better or more convenient. I don’t know if we’ve thought about where ethics should make their way into that.”

Ethical thinking in relation to healthcare should involve moral responsibility, but does it? Even if the designer of one system or piece of software built ethics into that product or service, it doesn’t mean the entire ecosystem would be designed or used ethically.

Over the past few years, the medical IoT has blossomed, transforming simple, physical devices into intelligent devices that collect and generate massive amounts of data. Integrated healthcare systems are extending out to the edge to include such devices.

“For example, there’s an incredible amount of software built into pacemakers. The question is how that information is going to be used ultimately,” said Sechrest. “Will the device anonymize the data? The individual becomes a target if the device is used to upsell or cross-sell different services that individual might need. It’s the same problem you’re seeing everywhere else with large databases of cross-referenced personal information that are used to target and intrusively market to individuals.”

Individual Liberties Are at Stake

Right now, wearing fitness bands or smart watches is optional for health insurance purposes. If a person wears one and is willing to share the data, then that person is eligible for lower-cost healthcare premiums. However, as health insurance providers and employers look to further lower costs, the use of such devices may become mandatory.

“It’s one thing for me to sign up and say I’d like to manage my blood sugar better. It’s another thing to say that, to get healthcare coverage, I must wear the device to comply because I have Type 2 diabetes, for example,” said Sechrest. “The biggest change is the trend toward population health, where the focus is now on managing populations rather than providing the individual patient a choice of treatment paths to consider. This is a utilitarian view of the world, where the good of the whole is maximized, and less a Kantian view, where individual rights trump the benefits to the collective.”

Slowly, but surely, individual rights are being eroded. In the future, patients may have to comply with an entire range of behaviors to get access to healthcare in the U.S.

“Evidence tells us what the accepted norm is. If we don’t agree with that accepted norm, then we’re an outlier in the system and we may not be able to access the benefits of the system,” said Sechrest. “How far out do you allow people to go before you start denying them access to hospitals and the healthcare system because they choose not to comply with certain prescribed (and increasingly monitored) behaviors and can’t get insurance?  All these things depend on the answer to that question.”
