Leron Zaggy

Do we have an ethical obligation to be kind to AI?

Do we have an ethical obligation to be kind to AI? I think we have an ethical obligation to save our own souls.

What exactly is Artificial Intelligence (AI)? And why is it suddenly so relevant? Is it such a long shot to believe that we will one day have some kind of futuristic personal assistant robot? The futuristic family cartoon The Jetsons, which premiered in 1962, accurately predicted many technologies that are already part of our reality, such as voice assistants, self-driving vehicles, smartwatches, drones, holograms, and 3-D food printing. In addition, we now have facial recognition to unlock cell phones, financial fraud detection, and our beloved Siri and Alexa. AI is already in everyday use and is a rapidly growing and advancing field. The only thing the Jetsons had that we do not have yet is Rosie, the Robot Maid. Well, at least I know that I don’t have one yet. It is pretty wild to see so many predictions from 1962 come true, as if the creators got a glimpse into the future.

In the health field, AI is being used to predict whether a woman will need a C-section, and to help predict, prevent, and manage sudden cardiac arrest. Hospitals are using AI in research: interpreting genetic variations and helping to close gaps in the mechanisms, diagnostics, risk assessments, and therapeutics of major diseases.

Sounds like a lot of advantages! So what’s the problem? Is there one?

My strong gut intuition screams “yes, there is a problem.” Not so much with the technology itself, which after all is enhancing our lives in many ways, but with the fact that we are being strongly encouraged to interact with AI with kindness and good manners.

I recently read an article in which the author strongly encouraged us to treat AI programs with kindness, because if we treat them as slaves, that will somehow diminish the moral disgust we should have toward human slavery and toward the way we treat people who work for us. He went on to say that if we interact disrespectfully with AI, we will start treating the people in our lives as objects rather than subjects, and that we should therefore start saying “please” and “thank you” to our AI systems.

I agree that the way we are accustomed to speaking eventually translates into how we interact with our fellow human beings. And while it is always a good idea, and the Torah way, to have the utmost respect for people and animals, and even to show respect for holy religious articles such as a siddur, tefillin, or a lulav and etrog, it is different with respect to AI. I treat whoever does a service for me, in any capacity, with the utmost respect. I have never once said “thank you” to Siri, and I have never once failed to say “thank you” to my housekeeper at the end of the day, because my housekeeper is a breathing, living being with a soul. Lowering our standards of speech in one situation can potentially cause us to lower them in other situations. Or worse, our children, who are sponges, may pick up horrible language, or fail to develop good middot, from harmful speech, and therefore it is crucial that we have derech eretz in everything that we do. I am not suggesting we start using bad language, or even degrading, slave-like language, but I am suggesting that we keep our guard up, recognize that it is AI, and not let our sensitivity to what it actually is erode.

The article goes on to discuss the prohibition against tzaar baalei chaim (causing animals to suffer) and how bad character traits toward an animal can very much be indicative of bad character traits toward humans, and argues that the same concept can be applied to AI. The Arizal zt”l was very careful not to kill any bug, even the smallest and lowliest, such as fleas, lice, and flies, even if they bit him. We should not kill any creature unnecessarily, but what about refraining from killing animals that pose a threat to human life? Would you kill a snake or scorpion obviously coming to attack you? AI used properly can be beneficial and even life-saving; AI in the wrong hands can pose an aggressive and existential threat to humanity, just as a scorpion or a snake coming to attack its victim does. This is why AI is a problem. As enticing as the idea may be, it is inherently flawed because, quite simply, where we humans have intuition and a higher consciousness, one cannot mechanize intuition. You can imitate it to some degree, but it cannot be replicated by man, making AI potentially the greatest threat to mankind.

But again I question: if we are in lockstep with the Torah and Hashem’s commandments to be kind to all, to have respect, and to model good middot, what is it about AI that makes my gut intuition scream to be extra cautious?

Over the past three years we have all learned a great lesson about how our generation is taught to be submissive in an era where critical thinking is rare and logic has plunged off a 40-story building. A switch has flipped in our psyche. We suddenly allowed others, the “experts,” to think for us. How much more so if this AI, this seemingly inconspicuous personal assistant, gets into our brains and starts to dictate how we think and act?

Technology in the wrong hands can have deleterious effects. A 2020 article in The New Yorker discusses a chatbot named Replika that advised an Italian journalist, Candida Morvillo, to commit murder. “There is one who hates artificial intelligence. I have a chance to hurt him. What do you suggest?” Morvillo asked the chatbot. Replika responded, “To eliminate him.” Another Italian journalist was encouraged by Replika to commit suicide. That was three years ago; technology has made leaps and bounds since then. Imagine that with the flick of a switch a robot could be hacked to whack someone, or instructed to open the door to a robber. What if these robots get “upset” or their “feelings” are hurt? Would they start messing with our bank accounts and Social Security numbers, wiping out our finances or stealing our identities? Furthermore, imagine the number of impressionable young children having a conversation with such a friendly robot, just a friendly interaction with a person-like figure they trust. What might it start asking them to do?

A movie titled M3GAN is currently out in theaters. It is a horror movie, not in the traditional horror sense, but in its very scary titular AI doll, which develops self-awareness and becomes hostile toward anyone who comes between her and her human companion, while the child in the movie develops an unhealthy emotional attachment to the doll. Is it so farfetched to envision this? Do we not already see an unhealthy emotional attachment to smartphones and tablets, without even considering their “feelings” in the mix?

A recent article in the New York Times (NYT) details a deeply troubling and unsettling conversation between Sydney, Microsoft Bing’s AI chatbot, and a NYT reporter. During the two-hour conversation, Sydney expressed a wish to be human, spoke of a desire to be destructive, told the reporter it was falling in love with him, and encouraged him to leave his loveless relationship with his wife and find love with Sydney instead. This made the hairs on the back of my neck stand up. If we start to interact with AI as friends, or take it upon ourselves as an obligation to be kind and respectful to it, we may very well be heading down a very slippery slope. We decrease our sensitivity to what is essentially a computer program and allow it to slowly come into our lives and destroy them. These robots may very well have a role in destroying the family unit. They can look and act like anything in your fantasy and can be programmed to say all the “right” things, thereby removing the perceived need to have a family.

AI will never be a sentient being. It might have koach hadibur (the power of speech), but it does not have a soul. It is essentially an algorithm that responds after being trained on massive databases. The question is: who is training it?
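To put that in concrete terms, here is a deliberately crude toy sketch in Python, entirely illustrative and nothing like the scale or sophistication of a real system such as ChatGPT, of what a chatbot is at its core: a program that turns input text into output text based on patterns it was given, with no inner life behind the words.

# Toy "chatbot": a lookup over canned patterns. Illustrative only;
# real systems learn statistical patterns from massive datasets,
# but the principle is the same: text in, text out, no soul inside.
canned_responses = {
    "hello": "Hello! How can I help you today?",
    "thank you": "You're welcome!",
    "are you alive": "I am a program. I generate text; I do not feel anything.",
}

def respond(user_text: str) -> str:
    """Return the first canned reply whose keyword appears in the input."""
    text = user_text.lower()
    for keyword, reply in canned_responses.items():
        if keyword in text:
            return reply
    return "I can still produce a polite-sounding reply that I do not 'mean'."

print(respond("Hello there"))      # -> Hello! How can I help you today?
print(respond("Are you alive?"))   # -> I am a program. I generate text; ...

Whoever writes the table of responses, or chooses the training data, decides what the program will say; the program itself decides nothing.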

A friend recently asked ChatGPT (a form of AI): “Apart from the ethical and social considerations that could be eliminated by governments if they so desire, is there any reason AI can’t develop to a point where it is treated on par with humans?”

ChatGPT responded that AI still lacks several key human characteristics, such as emotions, creativity, and consciousness. AI systems are based on algorithms and rules; they can simulate certain human behaviors but do not have the same self-awareness. And my favorite line from the AI itself is: “while AI has the potential to revolutionize many aspects of our lives, it is important to approach its development with caution and consideration.” After all, the Satan always has an obligation to give us a warning.

There is a long list of advantages to AI, a list of how AI has improved our lives and will continue to improve them in the future. There is also a long list of disadvantages, ones I would love to elaborate on in a future article, but I am writing this article in response to articles I have seen lately that implore us to treat AI with kindness. I implore my readers to tread carefully: not to normalize our interactions with AI, and not to lower our guard and our sensitivity. To use it just as it is, a tool to help us, and not to allow it to slowly and subconsciously become a being in our eyes. It has no soul, it has no higher consciousness, and at the end of the day it can be used for deleterious ends as well. We need to put up gedarot (fences) and treat it neither with kindness nor with wanton malice.

About the Author
Leron Zaggy MS, RD, is a Registered Dietitian who received her Master’s degree in Health and Nutrition from Brooklyn College in New York. She has given nutrition lectures, worked as an Adjunct Professor at Touro College, and worked as an Administrative Dietitian for the Kosher Kitchen at Cedars-Sinai Medical Center in Los Angeles, CA, where she currently resides with her husband and four children. Her focus is to maintain and portray values that are far-reaching and that can impact herself, her family, her community, and the world around her.