It was the right thing to do, but was it the right thing to do?

If you are wondering how it is that I know so many movie references (mentioned in previous blogs), it’s because I spend half of my day watching TV and movies. The other half I sleep. The last half I spend with my wife and children, watching TV and movies.

I think there are many movies, even comedies, that make brilliant comments about very subtle and critical issues that can define not only an entire generation, but our entire way of life. I personally believe that once such issues are raised in a movie or TV show, they should become a topic of interest for the general public and, specifically, for the halachic community. Rabbis tend to prefer to deal with a specific question only once it has been presented to them. In my opinion, technology is advancing so quickly that by the time the question reaches the rabbis, many people are already struggling with the issue and, in some cases, making fundamental errors. In other words, the time has come for the halachic leadership to anticipate reasonable questions and discuss them in advance.

There is a movie, loosely based on the book by Isaac Asimov, called “I, Robot.” The details of the movie are not at issue at the moment, although I would humbly suggest that people watch it for the pure enjoyment. The key character in the movie is a police officer who is prejudiced against the advanced robots that are being welcomed into society [as dog walkers, help at home for the elderly, cooks, healthcare workers and so on]. At one point this lead character, played by Will Smith, is asked why he is so resistant to, and so angry at, the entire robotic movement.

He retells the story of a [driverless] car accident in which he and a child are plunged into the water. A passing robot jumps into the water to save, ideally, both people, but decides to save only the Will Smith character. Why? Why not save the child? Because the robot calculated that the likelihood of saving the child was very low, while the likelihood of saving the Will Smith character was substantially higher. So, based on a very simple calculation, the robot saved the adult’s life, letting the child drown. Ever since, the Will Smith character has truly hated robots.

There are, tragically, countless real-life situations where people must choose, amongst themselves, who lives and who dies. They have no leader who spurs them to march into the valley of the six hundred. Amongst themselves, they must decide. It tends to be a human instinct to save a child before any adult, even if that child’s likelihood of survival is very small, and even if saving that child reduces the chances of the adults. One can argue that the robot that chose to save the adult before the child was simply insufficiently programmed to deal with such situations. After such an event, you can imagine the programmers teaching the robots that a child’s life always comes first.
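To make the point concrete, here is a hypothetical sketch of the robot’s rule and the imagined “child first” patch. The Victim type, the function names and the probabilities are all invented for illustration; the movie obviously shows no code:

```python
# Hypothetical sketch of a rescue-decision rule, before and after the
# imagined "child first" patch. All names and probabilities are invented.
from dataclasses import dataclass

@dataclass
class Victim:
    name: str
    is_child: bool
    survival_probability: float  # the robot's estimate, 0.0 to 1.0

def choose_original(victims: list[Victim]) -> Victim:
    # The movie robot's logic: maximize the chance of a successful rescue.
    return max(victims, key=lambda v: v.survival_probability)

def choose_patched(victims: list[Victim]) -> Victim:
    # The imagined patch: a child's life always comes first; the
    # probabilities only decide among members of the same group.
    children = [v for v in victims if v.is_child]
    return choose_original(children) if children else choose_original(victims)

victims = [Victim("adult", False, 0.45), Victim("child", True, 0.11)]
print(choose_original(victims).name)  # "adult": the cold calculation
print(choose_patched(victims).name)   # "child": the human instinct
```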

At the moment at least, robots don’t have an inherent morality or intuition. All they can do is try to fit their environment into the code inside of them. If they are faced with a situation for which they have no pre-programmed logic, they are stumped and can become ineffectual. Newer learning technologies, like those used to identify faces even when partially covered by eyeglasses and facial hair, are much better at looking at millions of previous similar cases and deducing what they are now looking at. So if you teach a computer what “fluffy” means, based on a million photos of fluffy objects, animals and people, then the computer would have a very high chance of identifying a new object as fluffy, even though it has never seen that object before.
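To make the “fluffy” example concrete, here is a minimal sketch of that kind of learning from labelled examples, using scikit-learn. The feature vectors are synthetic stand-ins for whatever an image pipeline might extract; all of the data here is invented for illustration:

```python
# Minimal sketch of learning "fluffy" from labelled examples, using
# scikit-learn. The feature vectors are synthetic stand-ins for whatever
# an image pipeline might extract; all data here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Pretend each photo has been reduced to two features, e.g. texture
# softness and edge sharpness. Fluffy things cluster in one region.
fluffy = rng.normal(loc=[0.8, 0.2], scale=0.1, size=(n, 2))
not_fluffy = rng.normal(loc=[0.2, 0.8], scale=0.1, size=(n, 2))
X = np.vstack([fluffy, not_fluffy])
y = np.array([1] * n + [0] * n)  # 1 = "fluffy", 0 = "not fluffy"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The model now labels objects it has never seen before.
print("accuracy on unseen examples:", model.score(X_test, y_test))
```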

The day will come when complex scenarios are presented to a computer as examples of correct behavior under varied situations. Imagine a computer being presented with live video feeds from millions of incidents in a medical, major disaster, military or any other scenario involving the potential loss of life. In each case, the incident would have to be flagged as being either ethically correct or not. One can easily imagine the computer viewing multiple videos where parents are seen risking their lives to save their children, or soldiers risking their lives to save the young of others, i.e., the children of people they don’t even know. Based on enough such cases, the computer could learn that children’s lives are considered of the highest value in most societies.
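Here is a hypothetical sketch of the same idea, reduced to its bare bones: incidents encoded as features, human reviewers supplying the ethical flag, and a learner that can then be asked about unseen cases. Every incident, feature and flag below is invented:

```python
# Hypothetical sketch: learning an ethical preference from flagged
# incidents. Every incident, feature and flag below is invented.
from sklearn.tree import DecisionTreeClassifier

# Each incident records a forced choice between a child and an adult:
# [chose_child, child_survival_estimate_percent].
incidents = [
    [1, 80],  # saved the child, good odds         -> flagged correct
    [1, 15],  # saved the child despite poor odds  -> flagged correct
    [1, 40],  # saved the child                    -> flagged correct
    [0, 10],  # saved the adult instead            -> flagged incorrect
    [0, 60],  # saved the adult instead            -> flagged incorrect
]
flags = [1, 1, 1, 0, 0]  # the human reviewers' ethical verdicts

model = DecisionTreeClassifier(random_state=0).fit(incidents, flags)

# Now ask about the movie's scenario: the adult was saved although a
# child was present. The learned ethic disagrees with the robot.
print(model.predict([[0, 11]]))  # [0] = flagged as ethically incorrect
```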

Of course, this is not universal. What happens when the ethics of one society challenge the ethics of another? Without going in-depth on the issue, there is a known fundamental difference between Jewish and Catholic religious law in regard to whether to save the mother or the fetus when both are in danger. It’s ridiculous to claim that one viewpoint is universally correct and the other is fundamentally flawed. In a Catholic country, you would expect that Catholic religious law would have tremendous sway. In a Jewish country, i.e., Israel, you would expect that Jewish law would take precedence [amongst Jews at least].

Programming ethics into computers is effectively impossible for the same reasons that deciding on universal ethics amongst humans is effectively impossible. When two countries or two civilizations with fundamentally different religious and ethical viewpoints face off against each other, it often ends in war. And the only reason that certain wars don’t happen is the risk of mutual total annihilation. Is a Spanish robot expected to follow the ethical standards of Spanish law? And when that robot crosses a border into a Muslim country, is it expected to change its perspective and act fundamentally differently, in a way that would be considered unjust back in Spain?
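One way to picture the engineering problem is as configuration: even a crude, hypothetical per-jurisdiction policy table forces the programmer to write down, in advance, whose ethics win, and the border crossing becomes a literal lookup with no neutral default. The jurisdictions and rule names below are invented for illustration:

```python
# Hypothetical sketch: ethics as per-jurisdiction configuration. The
# jurisdictions and rule names are invented; the point is that someone
# must decide, in advance, whose ethics apply at the border.

ETHICS_POLICIES = {
    "ES": {"obstetric_triage": "follow_local_catholic_law"},
    "IL": {"obstetric_triage": "mother_takes_precedence"},
    # ... one entry per jurisdiction, each encoding different values
}

def active_policy(jurisdiction: str) -> dict:
    # What happens in a jurisdiction nobody configured? There is no
    # neutral default; even "undefined" is itself an ethical choice.
    return ETHICS_POLICIES.get(jurisdiction, {"obstetric_triage": "undefined"})

print(active_policy("ES"))  # the rules the robot was built with
print(active_policy("SA"))  # after crossing the border: now what?
```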

Medicine tends to have rules that are more universal, and thus the decision-making may be easier. A physician is expected never to refuse any patient, even if that patient is a murderer or a child molester or anyone else considered morally corrupt. The Geneva Conventions are considered the standard set of rules for war. It truly does sound ridiculous that, in the madness of war, there should be rules. But there are. The question is whether both sides follow them. There are far too many cases of countries that fought wars against each other where the actions of one side could easily be considered more humane or ethical than those of the other. What does that mean? What did a good, clean-cut American soldier do when faced with a Nazi prisoner in World War II? Did he execute the prisoner on the spot? Did he give him food and water? If humans struggle with these questions, how will computers do a better job?

It is possible that the future of human physicians is to be the conscience of robotic doctors. The robots will always know more than the humans, and will always be able to make a diagnosis and decide on treatment faster than any human. But when an ethical issue comes into play, it may very well be the human doctor who makes the final decision about treating or not treating, how to treat, or even letting the patient die. It might very well be that we will one day have the ability to design a virtual soul for robots. The joke is that by giving each robot its own virtual soul, these robots could become as ineffective as humans when faced with moral dilemmas. You can imagine, in the future, two robots literally arguing over the correct treatment for an end-stage patient, because their individual virtual souls lead them to different decisions.
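That division of labor, where the robot proposes and the human decides whenever ethics are involved, is already a recognizable software pattern (often called human-in-the-loop). A hypothetical sketch, with all names and flagging criteria invented:

```python
# Hypothetical sketch of the human-in-the-loop pattern: the robot
# handles diagnosis and routine treatment at machine speed, but any
# case carrying an ethical flag is escalated to a human physician.
# All names and the flagging criteria are invented.

def robot_recommend(case: dict) -> dict:
    # Stand-in for the robot's fast diagnosis-and-treatment engine.
    return {"treatment": "standard protocol",
            "ethical_flags": case.get("flags", [])}

def decide(case: dict, ask_human) -> str:
    recommendation = robot_recommend(case)
    if recommendation["ethical_flags"]:
        # The human doctor acts as the conscience: final call here.
        return ask_human(case, recommendation)
    return recommendation["treatment"]

# A routine case goes straight through; a flagged case is escalated.
print(decide({"patient": "A"}, ask_human=lambda c, r: "human decision"))
print(decide({"patient": "B", "flags": ["end_of_life"]},
             ask_human=lambda c, r: "human decision"))
```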

Human beings will also likely feel much more comfortable with final difficult decisions being left in human hands. As ridiculous as it is, humans may prefer a higher risk of malpractice, just to keep critical decision-making out of the hands of future robots.

I have no doubt that we will not be prepared, socially, legally or ethically, when computers and robots do become truly capable of making difficult decisions. It might become law, in some countries, to turn off the virtual soul of the robot until such time as society is ready to deal with it. And when will society be ready to deal with conscious, conscientious and moral robots? Your guess is as good as mine. I guess I will have to keep watching more and more movies to find the answer.

Thanks for listening

About the Author
Dr. Nahum Kovalski received his Bachelor of Science in computer science and his medical degree in Canada. He came to Israel in 1991 and married his wife of 22 years in 1992. He has 3 amazing children and has lived in Jerusalem since making Aliyah. Dr. Kovalski was with TEREM Emergency Medical Services for 21 years until June of 2014, and is now a private consultant on medicine and technology.