Just yesterday, I read a number of articles that discussed various elements of future health care. If you have the time, I would suggest looking at the selected links below before reading my comments, so that you have some context for my viewpoints. The links are as follows:

I find articles like this absolutely fascinating. They demonstrate how perspectives and practice in medicine are changing almost on a day-to-day basis.


The first link talks about the future of hospitals. Within the next 20 to 50 years, most of our medical care will be delivered outside of the hospital. Of course, much care is delivered outside the hospital now, but I am talking about far more serious conditions. Preventative measures supported by constant monitoring will help avert readmissions for complicated diseases like congestive heart failure and emphysema. At-home treatment will be far more effective, based on new testing and treatment systems that could be fully installed in a person’s home. With a single drop of blood (if that much), a person could have 1,000 different parameters of health and disease tested on a daily basis. When a computer-based monitoring system detects a problem, the patient could self-administer care using new medication delivery patches that are as easy to apply as a Band-Aid. The treatment will be on par with that of any hospital. So, why go to one?
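To make the monitoring idea concrete, here is a minimal sketch of what the alerting logic might look like. Everything in it is hypothetical: the parameter names, reference ranges, and daily panel are illustrative stand-ins, not clinical values.

```python
# Minimal sketch of a home monitoring check: compare today's readings
# against per-parameter reference ranges and flag anything out of bounds.
# Parameter names and ranges here are illustrative, not clinical values.

REFERENCE_RANGES = {
    "glucose_mg_dl": (70, 140),
    "creatinine_mg_dl": (0.6, 1.3),
    "bnp_pg_ml": (0, 100),          # elevated BNP can signal heart failure
}

def flag_abnormal(readings: dict[str, float]) -> list[str]:
    """Return the parameters whose values fall outside their reference range."""
    alerts = []
    for name, value in readings.items():
        low, high = REFERENCE_RANGES.get(name, (float("-inf"), float("inf")))
        if not low <= value <= high:
            alerts.append(f"{name}={value} outside [{low}, {high}]")
    return alerts

# Example: today's drop-of-blood panel (invented numbers)
todays_panel = {"glucose_mg_dl": 182, "creatinine_mg_dl": 1.1, "bnp_pg_ml": 240}
for alert in flag_abnormal(todays_panel):
    print("ALERT:", alert)   # in practice: notify the patient or a care center
```

A real system would of course track trends over time rather than single readings, but the principle (continuous data in, automatic alerts out) is the same.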

By the way, I believe that as time goes on, community-based emergency rooms and other healthcare provider services will take on a far greater role in the management of patients. Monitoring centers will operate as part of such outpatient services, calling patients and inviting them in when an abnormality arises in their monitored status. Such non-hospital institutions will also become involved with continuing care and chronic care, but will focus on the acute services that patients require. All of the information used and gathered by these community centers will be shared with the primary care doctor via future versions of EMRs (perhaps even through an Apple HealthKit link). The endpoint is to drastically reduce referrals and admissions to the hospital.

Project Honeybee is a major new initiative that was discussed at the mHealth Summit 2014, which took place last week. This is one of a few medical technology conferences that I really wish I could attend. The concept behind Project Honeybee is to continuously collect physiological data. To quote:

“By developing the ability to continuously record physiological parameters, we will pinpoint the transition from health to disease and intervene more effectively for improved health outcomes at a reduced cost. Importantly, our validation process is disease, device, and outcome-agnostic, particularly one that can handle the large variety of devices for clinical settings.”

Just because I can’t resist, let me throw in a quick summary of the mHealth conference itself:

“The mHealth Summit, the largest event of its kind, convenes a diverse international delegation to explore the limits of mobile and connected health, including every aspect and every audience. Technology, business, research and policy. Mobile, wireless, digital, wearable, telehealth, gaming, connected health and consumer engagement. The 6th annual Summit will put new emphasis on innovation and evidence.”

Projects like Honeybee imply that huge amounts of data will be collected. I previously discussed the issue of “big data” and argued that data should be collected, anonymized, and then open-sourced, so that data analysts from anywhere in the world would have the opportunity to convert the data into usable knowledge, to the benefit of all (socially, medically and financially).
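As a small illustration of the “anonymize, then open source” step, here is one piece of what de-identification might involve: dropping direct identifiers and replacing the patient ID with a salted one-way pseudonym. The field names are hypothetical, and real de-identification involves far more than this (dates, rare values and free text can all re-identify people).

```python
# Sketch of one step in preparing collected data for open release:
# drop direct identifiers and replace the patient ID with a salted,
# one-way pseudonym. Real de-identification needs far more than this.
import hashlib

SALT = b"replace-with-a-secret-random-salt"   # kept private, never released
DIRECT_IDENTIFIERS = {"name", "address", "phone", "email"}

def pseudonymize(record: dict) -> dict:
    """Strip direct identifiers and swap the patient ID for a stable hash."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    digest = hashlib.sha256(SALT + record["patient_id"].encode()).hexdigest()
    clean["patient_id"] = digest[:16]   # stable pseudonym, still allows linkage
    return clean

record = {"patient_id": "MRN-001", "name": "Jane Doe", "bnp_pg_ml": 240}
print(pseudonymize(record))   # name is gone, ID is an opaque token
```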

Most pundits that I follow agree that the next decade will be focused on the development and availability of all forms of AI resources that will effectively allow us to throw a batch of data at the “wall” to see what sticks. There are some really smart people who can develop algorithms for intelligently and automatically analyzing data. These individuals and their algorithms will become the magic sauce that allows decision-makers to make proper evidence-based decisions.

Despite all of this, data is being collected so quickly that even the store-first, analyze-second model may become untenable, simply due to the quantities of data involved.

I just saw an article about Cisco, a (perhaps the) world leader in networking infrastructure. They are already talking about hooking their routers up to intelligent data analysis tools. The idea is that as the data flows through the network, before it even reaches its endpoint (such as the data center), a Cisco router could already have delivered it to the data analysis system. So, at any given point in time, the user could simply load up the data analysis tool and see results that are already waiting.
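I have no insight into Cisco’s actual implementation, but the general “analyze in flight” idea can be sketched with running statistics that update as each record streams past, so that answers are already computed by the time anyone opens the analysis tool. This uses Welford’s online algorithm; the sample values are invented:

```python
# Sketch of "analyze in flight": instead of storing everything and
# querying it later, keep running statistics that update as each record
# streams past, so results already exist when anyone looks.
# (Welford's online algorithm: numerically stable mean and variance.)

class RunningStats:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self) -> float:
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

stats = RunningStats()
for value in [98.6, 99.1, 101.3, 98.9]:   # e.g. temperatures flowing past
    stats.update(value)                    # constant memory, per-record work
print(stats.n, round(stats.mean, 2), round(stats.variance, 3))
```

The design point is that each record is touched once, in constant memory, which is exactly what in-network analysis would require.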

The AI tools developed over this and the following decade will be adept at ingesting totally unstructured information and looking for key patterns. When medical services are collecting petabytes (if not more) of data on a daily basis, there won’t even be time for a human to sit and consider what to study. Instead, AI tools backed by huge data center resources, like those of Google and Microsoft, will effectively ask the questions based on the continuing analysis of the data. I repeat: the AI tools (not humans) will be asking the questions that are related to improving our health.

Imagine seeing an output from your data center that reads as follows:

“On initial review of the data from the last 30 days, there is an apparent correlation between X and Y. A second review of the data was automatically initiated with a focus on X versus Y. The data analysis was extended back multiple months. There is clear confirmation of the correlation between X and Y. It is suggested that a follow-up study be done to determine whether the relationship is causal. The description of the study is included below.”
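A heavily simplified sketch of how such a report could be generated: scan every pair of monitored parameters for a strong correlation and emit a finding. The parameter names and values below are invented for illustration, and a real system would need to correct for the many spurious correlations such scans produce.

```python
# Sketch of an automated correlation scan over monitored parameters.
# Pure-Python Pearson correlation; all names and values are invented.
from itertools import combinations
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

data = {   # several days of (made-up) values per parameter
    "sodium_intake": [2.1, 2.4, 3.0, 3.3, 3.8, 4.1],
    "blood_pressure": [118, 121, 127, 131, 138, 142],
    "step_count": [9000, 4000, 11000, 6000, 8000, 5000],
}

for a, b in combinations(data, 2):
    r = pearson(data[a], data[b])
    if abs(r) > 0.9:   # flag only strong correlations for follow-up
        print(f"Apparent correlation between {a} and {b} (r={r:.2f}); "
              f"suggest a follow-up study to test whether it is causal.")
```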

I can imagine that some researchers will feel very intimidated by such a system. While they will have the option to manually review the data and ask their own questions, based on their own theories, the likelihood of discovering something that the computer missed will be very small. In time, it will be possible to simply ask the AI system whether there is, as theorized by a human, a correlation between A and B. But if the computer did not spot this on its own, the answer will almost always be no.

When doing a clinical study, the initial input is data collected in any number of ways. It might be a survey that patients fill out. It might be the results of tests ordered by the physician. In time, all of this data will be readily available online. I would argue that the researcher will be hard-pressed to come up with a study design whose data has not already been collected, via automated 24/7 monitoring (and even surveys that the AI system develops on its own).

What happens when a new medical test or device comes out? What happens when we need to evaluate the efficacy of a medication that has just been developed by a pharmaceutical company?

I suspect that it will take two to four decades before we can truly simulate a drug’s effect on our bodies, entirely on the computer. You do not have to simulate the whole of human physiology. What you need to do is identify the surface markers/receptors on various cells. Then, using the computer, you can see whether the new medication fits into the 3-D form of any of these markers/receptors. In time, the full gamut of receptors will be deduced, very possibly from direct DNA analysis. At some point, pharmaceutical companies will be able to submit the molecular structure (in 3-D form) of a new medication to the computer and see exactly where it will “stick”. Based on the receptors that this new medication acts on (or does NOT act on), researchers will know how the medication will affect the patient.
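As a toy illustration of the “see where it sticks” idea, imagine each receptor pocket and each candidate drug reduced to a small numeric signature. Real docking software scores actual 3-D atomic geometry; this sketch, with invented receptor names and numbers, only shows the screening loop:

```python
# Toy sketch of virtual screening: each receptor pocket and each drug
# is reduced to a numeric signature (a stand-in for real 3-D shape and
# chemistry), and a drug "sticks" where the signatures nearly match.
from math import dist

RECEPTOR_POCKETS = {           # hypothetical receptor signatures
    "beta_adrenergic": (0.8, 0.2, 0.5),
    "ace": (0.1, 0.9, 0.4),
    "histamine_h1": (0.5, 0.5, 0.9),
}

def screen(drug_signature, threshold=0.25):
    """Return receptors whose pocket signature is close to the drug's."""
    return [name for name, pocket in RECEPTOR_POCKETS.items()
            if dist(drug_signature, pocket) < threshold]

new_drug = (0.75, 0.25, 0.55)          # hypothetical candidate molecule
print(screen(new_drug))                # -> ['beta_adrenergic']
```

The real version of this loop would run a physics-based docking score over every known receptor structure, but the output would look the same: a list of targets the molecule is predicted to act on.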

At the point in time that this degree of virtual testing is possible, I strongly suspect that it will change the way in which all clinical studies are done. It is already a well-known and disappointing fact that even multi-year studies costing hundreds of millions of dollars cannot identify all of the positive and negative effects of a medication. It is only when the medication is released into the wild that the one-in-10,000 side effect is really detected. But with a virtual study environment that includes every human cell receptor there is, such primitive human-based studies will likely no longer be needed. Virtual studies will likely identify better and safer medications than any human-based study. At the point that the computer simulation proves the safety of a drug, the medication will be released for general use, and constant monitoring will allow for the detection of side effects (which will be very unlikely).
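To put a number on the “one in 10,000” point: the probability that a trial of n patients sees at least one case of a side effect with per-patient probability p is 1 - (1 - p)^n. The trial sizes below are illustrative, but they show why rare effects tend to surface only after release:

```python
# Quick arithmetic behind the "one in 10,000" claim: the chance that a
# trial of n patients sees at least one case of a side effect with
# per-patient probability p is 1 - (1 - p)**n.
from math import log, ceil

p = 1 / 10_000
for n in (3_000, 10_000, 30_000):
    print(f"n={n:>6}: P(at least one case) = {1 - (1 - p)**n:.1%}")
# A typical 3,000-patient trial has only about a 26% chance of seeing it.

# Patients needed for a 95% chance of seeing at least one case:
print("n for 95% detection:", ceil(log(0.05) / log(1 - p)))   # ~29,956
```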

For those who would challenge such future innovations, all I can say is “Resistance is futile”.

Thanks for listening.