I have often half-joked about my preference for computer science over medicine. When dealing with computers, even under very difficult circumstances [such as when patients are waiting for care until a bug is fixed], there is always that last resort of flipping the switch off and on on the physical computer box. Once a bug is discovered, it can often be dealt with by a simple and minimal change in code. Many times, it involves nothing more than adding an IF statement that blocks a certain condition from occurring. It all comes down to zero or one, yes or no, true or false – something binary. Until quantum computing is the norm, the heart of hearts of every computer will be a chip that understands nothing more than two states. This binary simplicity is incredibly attractive. It is as if the world can be broken down into just one decision. Admittedly, a series of decisions, but each decision is yes or no. And that makes things much easier to understand and much easier to decide.
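
To make that concrete, here is a minimal, purely hypothetical sketch (the Order type, its fields, and the guard condition are all invented for illustration) of the kind of one-line fix described above: a single IF that blocks the bad condition from ever occurring.

```python
from dataclasses import dataclass

@dataclass
class Order:
    quantity: int
    unit_price: float

def order_total(order: Order):
    # The entire "fix" is often just this one binary check, added after
    # the bug is found, to block the problematic condition outright.
    if order.quantity <= 0:
        return None  # reject the bad input instead of failing downstream
    return order.quantity * order.unit_price

print(order_total(Order(quantity=3, unit_price=10.0)))   # 30.0
print(order_total(Order(quantity=-1, unit_price=10.0)))  # None
```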

In my new favorite show, Mr. Robot, a character is forced to make a decision that would bring down a huge conglomerate that practically owns everything. The conglomerate is run by what are often depicted as typical heartless capitalist managers, who see only numbers on a spreadsheet. Even the number of people who will potentially die as a result of these managers’ decisions is just a cell in a spreadsheet. The contents of that cell may be multiplied by some factor of insurance payout to the families. But ultimately, there is one cell in the spreadsheet that summarizes the lives and hopes and dreams and futures of even large groups of people.

It would seem that the choice to bring down such a conglomerate is simple. They should be stopped. The managing directorship should be completely replaced with people who do not treat other people as mere entries in Excel. The problem is that if one of the show’s key characters goes to court to expose the huge conglomerate, another company will go bankrupt. Hundreds of people will lose their jobs, and perhaps their futures. Their children may not be able to go to a top university because mom and/or dad cannot find a new job in the present economy. And let’s be clear: the damage done to these innocent workers may be so severe that it leads to emotional trauma for the children, divorce for the parents, and even suicide.

Even the innocent workers in the conglomerate are at risk. While the conglomerate’s decision to act in a way that literally cost the lives of people in the community was made by its most senior management, punishing this elite will also do tremendous damage to the whole organization. Once again, innocent people (from the accountants to the cleaning staff) will lose their jobs and may end up with lives that are broken, or even lost.

This is far from a simple binary decision. There is no way to know, even with today’s most advanced predictive models, what the outcomes will be for all of the various innocents involved. If such predictive models existed, you could potentially do the math and determine that in one scenario 100 people suffer, whereas in the other 10,000 people suffer. You could even imagine a decision to financially help out the hundred-person group, so that they fall to earth, but with a golden parachute. The involved parties might not be able to afford to do so for 10,000 people, but for a hundred, it’s an option. So with sufficient data and programming power, you could reduce this down to a couple of cells in a spreadsheet and a simple question: which is greater? You would be back to a binary decision.
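
Spelled out as code, the entire dilemma really does collapse into a comparison of two numbers. The figures below are invented placeholders rather than real estimates; the point is only how small the "decision" becomes once the suffering has been quantified.

```python
# Invented, purely illustrative figures -- not real estimates.
scenario_expose_harmed = 100      # expose the conglomerate; the smaller company collapses
scenario_silent_harmed = 10_000   # stay silent; the conglomerate's victims keep accumulating
golden_parachute_cost = scenario_expose_harmed * 250_000  # assumed payout per affected person

# With "sufficient data and programming power," the moral question is
# reduced to a single binary comparison: which cell is greater?
if scenario_expose_harmed < scenario_silent_harmed:
    decision = "expose the conglomerate (and fund the golden parachutes)"
else:
    decision = "stay silent"

print(decision, f"(parachute budget: {golden_parachute_cost:,})")
```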

In medicine, there is still a huge lack of the critical data needed to decide on the management and ultimate welfare of many patients. If you discover cancer in a patient, which is better – radiotherapy, surgery, immunotherapy, and/or some totally experimental technique? A father of three children who is the only source of income for his family, and who does not have life insurance, will probably try anything and everything to survive. What if you know, as a doctor, that the cost of his medical care will end up wiping out any benefit from his survival? The father might end up alive, but the family will be beyond bankrupt. In this scenario, what is the better outcome for the family?

Let’s say the father does have life insurance. You could very quickly do a calculation showing that letting the father die saves all of the money that would have been spent on healthcare, with the added bonus of the life insurance payout. You could even argue that the wife is young enough to stand a good chance of remarrying, so that the children will not be denied a father figure for the rest of their lives. As painful as it is to reduce human suffering to such numerical calculations, this goes on all the time.
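
For what it is worth, the cold arithmetic described above amounts to nothing more than a subtraction and a comparison. All of the numbers below are invented for illustration, which is precisely the problem: the calculation is trivial, and the decision is not.

```python
# Invented illustrative figures only.
cost_of_experimental_treatment = 900_000
life_insurance_payout = 500_000

# The family's financial position under each choice, reduced to one number each.
pursue_treatment = -cost_of_experimental_treatment  # pay for care; no insurance payout
forgo_treatment = life_insurance_payout             # no care costs; insurance is paid out

print(f"Pursue treatment: {pursue_treatment:+,}")
print(f"Forgo treatment:  {forgo_treatment:+,}")
# The spreadsheet has an obvious "answer" -- which is exactly why it is
# such a poor guide to the actual decision.
```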

Someone ultimately ends up asking: what is the moral thing to do? As a physician, do you encourage the father to go ahead with a high-risk, experimental, and expensive treatment, or do you suggest that he try standard therapy even though you know that the chance of survival is very low?

The simplest thing to do is to punt. You present all of the options to the patient and his family, with as many statistics as you can find [all of which fly over the heads of both the father and the mother], and then allow them to decide. You are in the clear. You step back into the role of being nothing more than a service provider. You exclude any personal opinions you might have and simply follow the decision of the family, just as someone who attends to your car follows your decision to fill the tank. Whatever decision is made, you record it on the chart and walk out of the room with a tremendous sense of relief that you have nothing to answer for in terms of the moral abyss that the family has been left in.

There are a number of definitions of a successful artificial intelligence. In the movie [from many years ago] called “Short Circuit”, a robot effectively proved its self-awareness by understanding humor. Humor is an absolutely fascinating characteristic, and it takes a tremendous amount of brainpower and humanity to appreciate it. Will an appreciation of humor be the test of computers’ ability to emulate humanity?

The day will come when computers will assist patients in making medical decisions. The computers will present all of the data, with lots of pretty graphs and details about outcomes. If you do X, you will live 1.2 years longer than if you do Y, but you will spend most of that time in a hospital. The user may then look back at the computer screen and ask, “What would you do?” Doctors get asked this often enough. Sometimes the doctors answer, and sometimes they walk away from the question. But what if the computer tries to answer? What if the computer struggles to find the “best” answer based on its own understanding of the value of human life, morality, responsibility to loved ones, faith in the world-to-come (where we will all ultimately meet again) and much more?

What if the computer responds by saying, “I don’t have a good answer. I can only decide for myself, but I cannot decide for you”? I personally think that this would be the ultimate Turing test. When a computer transcends its fundamentally binary nature and is able to express the sense that there are things beyond its ability to answer for anyone other than itself, I would argue that this is the ultimate expression of human nature. If the computer goes on to tell the patient that some things simply don’t have a right answer, but that it will be there to guide the patient as much as possible every step of the way, then I would say that, for all of the problematic implications, we would have to accept that we have created life.

Once such a computer has been created, it truly is pointless to remind it that it is ultimately just a bunch of ones and zeros. It would be equivalent to reminding a human being that his or her whole essence comes from a strand of microscopic DNA (which, even these days, can be readily altered to create “new” forms of life). The existence of such a humanoid computer will prove that it’s not what you’re made of – it’s what you do with it.

Thanks for listening