I was recently very flattered to be called in for a discussion with a health care IT department that I hold in the highest regard. The lead member of the team truly cares about the work and the people affected by the IT products. To be included in such a circle is humbling.

Understandably, this team is incredibly busy, not only maintaining all of the systems they have designed, but also constantly pushing forward to add critical features to their software. I should point out that these are not “puff” add-ons. The features they are working on will make a huge difference in the safety and productivity of the health care work being done, and I applaud their devotion to this goal.

At one point, they asked if I would review a data set that they simply had not yet had time to fully evaluate. I was more than happy to do so. The data set had no personal information in it, so I was able to take a copy and work on it on my own. What I discovered, using some very basic tools that I have on my computer, was that the data tables were incomplete.
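
For those curious what “very basic tools” can mean in practice, here is a minimal sketch of the kind of completeness check involved, assuming the tables were exported as CSV files; the file name and the column names are hypothetical and are not taken from the actual data set.

```python
# Minimal completeness check on an exported table (hypothetical file and columns).
import pandas as pd

# Load the exported table; "procedures.csv" stands in for the real file name.
table = pd.read_csv("procedures.csv")

# Count missing values in each column.
missing_per_column = table.isna().sum()
print("Columns with missing values:")
print(missing_per_column[missing_per_column > 0])

# Flag rows missing any of the fields we expect to always be filled in.
required = ["record_id", "procedure_code", "date"]  # hypothetical column names
incomplete = table[table[required].isna().any(axis=1)]
print(f"{len(incomplete)} of {len(table)} rows are missing at least one required field")
```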

When I informed the IT team of this, they immediately contacted the original source of the data tables in order to get the most recent version. The hope was that the missing entries were related to the age of the data, rather than to some inherent problem with the way the data was recorded. As of this writing, more than a month after my meeting, the issue has not yet been resolved. This is by no means due to a lack of effort by the IT team. Unfortunately, they have come up against a bureaucracy that makes a snail look like it’s moving as fast as The Flash (from the TV show, not the shadowy one from “Batman vs Superman”).

The tool that I primarily used to investigate the data is literally a 13-year-old piece of software. My father, may he rest in peace, taught me that buying once, but buying well, is always the best approach. If you are not someone who compulsively must have the latest and greatest product on the market [Apple users should stop reading at this point], then you can spend more at the time of purchase but rely on an item that will last for years. The same is as true of a winter coat as it is of hardware and software.

My present computer, which is my life’s blood, is a six-year-old machine. I spent a great deal on it, especially on the graphics cards that run my six screens [of course, I still have to mention this at every chance I get]. It continues to run quickly and handles even the most extreme computer work I throw at it. As long as its CPU continues to beat, I will stand guard over its fans.

Why is this point so important? This is not the first time I have raised the issue of sticking with older technology that works, rather than running to the newest update. Just as an aside, I personally believe that most people outside of the IT world are quickly becoming exhausted by the pace of change in the world.

I was just watching a couple of hours of Microsoft’s most recent major conference, Build 2016. This is a conference where Microsoft shows off many of its latest advances, and it takes your breath away. Once upon a time, the latest advance was the change in system design between Windows 3.1 and Windows 95. Don’t be surprised to hear that you could still do a great deal of very important work with even Windows 3.1. But these days, the jump in capability from one version of a technology to the next is far greater than it used to be.

There are actually many developers who are upset by the pace of progress at companies like Microsoft, but for different reasons than the general public. All you need to do is mention the name “Silverlight” and a number of developers will become somewhat enraged [not so much so as to become the Hulk, but still pretty annoyed]. At one point, not that long ago, Silverlight was being touted as the solution to many of the issues involved in running software from the Internet on local machines. It was Microsoft’s equivalent of Adobe Flash.

This is actually a very personally important story, because when I first began developing my own EMR, I seriously considered building it in Adobe Flash. To those who are acquainted with Adobe’s technologies, this was not only a reasonable thing to do, but would have made it possible to deliver a high-quality user experience entirely via the Internet. To this day, a major part of my EMR still needs to be locally installed on a machine. This is something I hope the present TEREM IT team will eventually resolve [by converting all of the code to be web-based].

As it turns out, I made the right decision by not relying on either Silverlight or Adobe, as both technologies have effectively fallen by the wayside. People have invested years to decades in these technologies, and one can understand how upsetting it is to hear that they are effectively being phased out. For myself and my software, a dependency on these systems could have led to a major crisis which would have demanded a tremendous number of person-hours to correct. So whether it was my grand vision or dumb luck, I made the right decision, in retrospect.

Nevertheless, this pace of progress in development philosophies and tools makes programmers nervous. They genuinely worry whether their code will still be considered appropriate, useful, or even upgradable within the next few years. If the people who are in the midst of all of this progress are nervous about tomorrow, what does that say about all of the non-IT people who make use of the products generated by these programmers?

In a recent blog post, I commented on the fact that I will have to refresh my calculus in order to understand the recent advances in machine learning. Machine learning and its variants are being spoken of in almost every circle of business, research, medicine and more. One definitely gets the sense that without a machine learning system in place, a company, no matter what its focus, will eventually fail. You can almost imagine people saying, “How did you ever expect to compete in the marketplace without a machine learning solution?”

If analyzing a couple of tables using 13-year-old technology is still of great value, then what should one say about some of the latest and greatest machine learning tools? Do we “need” them, or do they simply offer an alternative to the way things are done now? Is the old way inherently rusted and useless? Is the new way going to be replaced in two to three years, so why not just wait for THAT version? I, for one, feel somewhat lost when trying to answer these questions. But as to machine learning, as I have already noted in the past, I have officially dedicated a year of some of my daily free time to becoming qualified in this cutting-edge field.

It reminds me of an old “joke” from the shtetl. When a person living today remembers the old way of life, he says that things were better in the past. The question is asked, “But you had no heat, no running water, and your lives were constantly threatened by starvation and the police – how can you say that those were good times?” The answer is that they were so busy just surviving, they didn’t have time to realize how miserable they were.

This is part of the reason why I am taking the time to relearn my linear algebra, probability and differential calculus – so that I can be so busy as to not spend time philosophizing about whether I should be doing it in the first place. I have already been told by experts in the field that one can spend years studying machine learning and still not have mastered it completely. So my free time will not be boring for a while. And of course, as I advance, I hope that I can help IT departments like the one I mentioned above with even better tools.
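
As a small illustration of where those three subjects meet, here is a sketch of a single gradient-descent step for logistic regression, the sort of exercise that appears early in most machine learning courses; the numbers are invented and the example is purely didactic.

```python
# One gradient-descent step for logistic regression: probability (the sigmoid model),
# linear algebra (matrix-vector products), and calculus (the gradient of the loss)
# all appear within a few lines. The data below is invented purely for illustration.
import numpy as np

X = np.array([[1.0, 2.0], [2.0, 0.5], [3.0, 1.5]])  # 3 samples, 2 features
y = np.array([0.0, 0.0, 1.0])                        # labels (0 or 1)
w = np.zeros(2)                                      # model weights to be learned
learning_rate = 0.1

# Predicted probability that each sample's label is 1 (the sigmoid of X @ w).
p = 1.0 / (1.0 + np.exp(-X @ w))

# Gradient of the average cross-entropy loss with respect to the weights.
gradient = X.T @ (p - y) / len(y)

# Take one small step downhill; repeating this many times is "training".
w -= learning_rate * gradient
print("updated weights:", w)
```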

I am presently looking at a newsletter that I regularly get in my email, amongst the tens to maybe hundreds of such newsletters that I get on a monthly basis. And no, I don’t manage to read them all. But I try to get the gist of what they are saying and sometimes focus on one specific article that seems of personal importance. The header on this particular newsletter states its purpose as “bridging the data preparation gap”, and it specifically refers to health care. The topics covered in this particular article include increasing analyst productivity, improving organizational data usage and empowering exploratory analytics. This article is not meant to be a homework assignment for someone studying data analysis. It is meant to be an enabling tool to help health IT people to, in turn, help the healthcare providers who work with the patients.

I can guarantee that the vast majority of doctors would not understand the significance of the topics I listed above. They wouldn’t appreciate what an analyst does, they wouldn’t know what exploratory analytics is supposed to do, and they would completely fall apart when presented with the concept of improving organizational data usage. The doctors would not so quietly revolt, stating that all of these fancy words are totally outside their field and as such will not improve patient care. In fact, trying to understand these concepts and actually apply them would probably confuse the healthcare staff and have a detrimental effect on their practices.

Non-IT people find a great deal of modern technology to be incomprehensible. Many doctors are comfortable with an Excel spreadsheet, but only to a limit. Start showing these doctors advanced features or even just advanced statistical tools, and they walk away. Now imagine trying to get the same doctors on board for advanced machine learning projects. I will say it quite simply – if the interfaces for these machine learning projects are not simplified to the point of Word or Excel, doctors will simply not be able to directly benefit from them.

I will take this point further. If machine learning systems end up having anything more than a “show results” button, doctors will become lost. They will not understand the various types of models that can be used in machine learning. Rather, the doctors will insist on simplifying the system to the point that all of the data is collected automatically and the results are presented in a way that almost anyone could understand. If the machine learning algorithm pops up a message that says “this patient is at very high risk for a deterioration in their kidney function”, then the doctors will do what they know how to do.

The doctors will, of course, check the patient’s kidney function, make sure that the patient is receiving sufficient fluids and is properly expelling urine, and confirm that the patient’s blood electrolytes [like sodium and potassium] are all within the normal range. If the doctors discover that there is a problem, then they will have to decide how to manage acute renal failure. Now it could be that, at this point, the same machine learning algorithm will pop up a message telling the doctor that the ideal way to manage this patient is to perform immediate hemodialysis. And this kind of pop-up technology will be tolerated by the physicians.
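
To make the “show results” idea concrete, here is a tiny sketch of how such an alert might be surfaced to a clinician; the risk score, the threshold and the message wording are all invented for illustration, and a real system would of course be far more involved.

```python
# Sketch of the "show results" idea: a model produces a risk score in the background,
# and the clinician only sees a plain-language message when a threshold is crossed.
# The score, threshold and wording below are invented purely for illustration.
from typing import Optional

def kidney_risk_alert(risk_score: float, threshold: float = 0.8) -> Optional[str]:
    """Translate a model's risk score (0 to 1) into a message a clinician would see."""
    if risk_score >= threshold:
        return ("This patient is at very high risk for a deterioration "
                "in their kidney function.")
    return None  # below threshold: no alert, nothing extra for the doctor to read

# Example: suppose a hypothetical model scored this admission at 0.87.
message = kidney_risk_alert(0.87)
if message:
    print(message)
```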

There will be doctors who might be somewhat skeptical of the machine making such a decision. But in practice, very few doctors will be capable of challenging its conclusion. The doctors will not be up to date on every paper in the literature regarding acute renal failure. And they will definitely not be up to date on the patient’s entire medical history and how he has responded to varying types of dialysis in the past. A doctor could acquaint him or herself with all of this information, but it would take time and could very well be subject to personal bias.

As such, the tendency will be for the doctors to simply follow the orders of the computer. While one can still argue that there is a human component to the management of this patient, in practice the doctors simply do not have the knowledge or the skills to analyze all of the data in order to make as informed a decision as the computer. At this point, I leave it to others to argue who really is the doctor in these cases.

There is a line in the sand that few doctors cross. As far as medical curricula go, young doctors are happy enough to have memorized a sufficient amount of anatomy to pass their exams. For doctors to become a true contributing force within the world of data analytics, they will need to spend a significant amount of time studying this entire field. Considering that the effect of data analysis is ubiquitous, no doctor can honestly claim that he or she does not really need to know this “stuff”.

The end result will likely be a whole new industry of machine learning specialists and experts who are brought in to set up what may even be autonomous systems that spit out results whose origins are a black box to everyone else.

And when the patient asks the doctor if he or she truly believes that the best thing to do is to, for example, have a specific type of surgery, the doctor will have somewhat of an ethical dilemma. On one hand, the doctor will be able to quote the results of the machine learning analysis and reassure the patient that the decision to operate is based on the most up-to-date data and technology. On the other hand, since the doctor really does not understand how the data is processed and how the decisions are made by the computer, the doctor’s personal feelings about the best way forward are moot.

How will this affect the entire doctor-patient relationship? It’s a great question for which I do not have a simple answer. I do believe that medical school curricula need a major overhaul and that graduating doctors should have a mastery of the non-medical technologies that are nevertheless critical to the success of their practices. It may even be necessary to extend medical school by years in order to incorporate all of this new knowledge.

From the moment a student begins medical school, they should understand that the ground below them is constantly moving. They should be reading both the standard fare and material about cutting-edge technologies from day one of their studies. Medical school is hard enough as it is, and I am speaking about elevating the level of difficulty significantly. Admittedly, there are now residencies that allow people to specialize in the field of informatics, which is an alternate way of saying data appreciation and analysis. But as I said above, every doctor will have to develop a significant level of comfort with these data technologies. Otherwise, the doctors will be reduced to parrots that simply read the screen of the EMR computer system.

I personally consider it a good thing that doctors be required to constantly update their knowledge, even after having practiced for decades. There are a lot worse things than having to refresh your calculus. And when a group discussion about a patient ends up centering on hard data and on conclusions drawn from different machine learning models, then patients will truly be benefiting from top-quality care.

Until then, I really don’t know how this entire matter will play out. As I’ve said before, even at the age of 54, and even considering that I no longer directly manage patients, I am doing my best to update my skills and learn this daunting new field of data analysis. I hope that I will be given the opportunity to apply my new knowledge to real-life datasets. But there is no guarantee, and all of my efforts might be wasted. But I am going to try. Why? Because I have the free time.

Thanks for listening

If you have any questions, or are looking for advice on your ideas in medical related technology, I can be contacted via one of the following:

My web site: http://mtc.expert           
Emails: nk@mtc.expert   
Twitter: @nahkov