David Sedley
Rabbi, teacher, author, husband, father

The Eliza effect, AI and modern idols: Parshat Re’eh

Joseph Weizenbaum (r) in 2006. (CC BY, Andreas Schepers/ Wikimedia Commons)

Using Waze in New Zealand is a very different experience, mainly because it speaks with a Kiwi accent and is so polite. “Let’s take State Highway 29, Te Poi Road (north) heading towards Matamata.” Don’t let the phrase “State Highway” fool you – the road narrows to single-lane bridges at times, and is winding, hilly, tree-lined, and absolutely gorgeous. New Zealand Waze and I had many conversations as we drove around the country together.

It was great to get to know New Zealand Waze, and I felt almost guilty when I came back and began using my Israeli, Hebrew Waze – almost as though I had to apologize for speaking with another Waze behind her back.

Obviously, I am being tongue-in-cheek here. I know that Waze is a computer-generated voice which has no sentience and no personality. And yet, it is so easy for us to think of Waze and all the other computers we interact with as almost human.

Do you find yourself thanking Siri, or saying “please” when you ask Alexa to perform some chore? We laugh when we read about the 86-year-old grandmother who asked Google very politely, “Please translate these roman numerals mcmxcviii thank you” (Google laughed too, replying in a tweet, “Dearest Ben’s Nan. Hope you’re well. In a world of billions of Searches, yours made us smile. Oh, and it’s 1998. Thank YOU”). Yet we do the same thing ourselves every time we interact with one of our computer tools as if it were human.

Joseph Weizenbaum was one of the first to raise the alarm about how human we imagine computers to be. At a time when computers were just beginning to enter mainstream conversation, he warned of the dangers of imagining a person speaking from the other side of the machine. This tendency came to be known as the “Eliza effect,” after the program he created.

Weizenbaum himself felt like an outsider his entire life. He was born to an upper-class Jewish family in Berlin in 1923. His father, Jechiel, had escaped the shtetl life of Eastern Europe to reinvent himself as an accomplished businessman, with a nice apartment and a place in society. Jechiel married Henrietta, a Viennese woman several years his junior.

Soon the Nazis were rising to power and taking over Berlin. Once, when he was a child, his nanny had to push him under a parked car as communists and Nazis began shooting at each other on the street where they were walking.

When he was 10, Hitler became Chancellor and anti-Jewish laws began coming into effect. Weizenbaum had to transfer from his upper-class private school to a Jewish school. Once again, he felt like an outsider: most of the other Jews were Yiddish-speaking, poor, and from Eastern Europe – the very people his father had left behind in his quest for status in Berlin.

In 1936, Weizenbaum and his family left Berlin for Detroit, departing on his 13th birthday. As a foreigner he was an outsider at school, and he did not receive emotional support from his parents. “I was very, very lonely,” he said in an interview. The real world was a difficult and dangerous place for him, but he found refuge in mathematics. “Of all the things that one could study,” he said, “mathematics seemed by far the easiest. Mathematics is a game. It is entirely abstract.”

Weizenbaum enrolled to study mathematics at Detroit’s Wayne State University in 1941. However, after the US joined World War II, he enlisted and served as a meteorologist in the Army Air Corps. He would probably have been more useful putting his mathematical background to work on a codebreaking team, but the military did not want to entrust a German-born recruit with such a sensitive task. He spent five years moving from one army base to another in the US, fighting Hitler from afar.

After his military service, he returned to Detroit where he married Selma Goode, a Jewish civil rights activist. However, they divorced after only a few years.

He struggled for years after the divorce, doubting his self-worth. But around this time, two things happened that changed his life – he began psychoanalysis and started working with computers.

Computer in County of Orange offices, 1967. (CC BY, Orange County Archives/ Wikimedia Commons)

He helped one of his professors at Wayne build a massive computer that took up an entire lecture hall. This ultimately helped him get a job with General Electric in what would later be called Silicon Valley, building computers for the military and for Bank of America. In 1952, he remarried; his new wife, Ruth Manes, according to their daughter Miriam, “couldn’t have been further from him culturally.”

In 1962, Weizenbaum was invited to be a visiting professor at MIT. He was successful there and, in 1970, was appointed professor of computer science.

While at MIT, in 1966, Weizenbaum created a chatbot, which he named ELIZA. In his 1976 book, “Computer Power and Human Reason: From Judgment to Calculation,” Weizenbaum explained (page 3):

I chose the name ELIZA for the language analysis program because like the Eliza of Pygmalion fame, it could be taught to ‘speak’ increasingly well. Because conversations must be about something… the program was constructed in a two-tier arrangement, the first tier consisting of the language analyzer and the second of a script…. For my first experiment, I gave ELIZA a script designed to permit it to play (I should really say parody) the role of a Rogerian psychotherapist engaged in an initial interview with a patient.

Weizenbaum gave an example of an interaction between a human and ELIZA:

User: Men are all alike.
ELIZA: IN WHAT WAY
User: They’re always bugging us about something or other.
ELIZA: CAN YOU THINK OF A SPECIFIC EXAMPLE
User: Well, my boyfriend made me come here.
ELIZA: YOUR BOYFRIEND MADE YOU COME HERE
User: He says I’m depressed much of the time.
ELIZA: I AM SORRY TO HEAR YOU ARE DEPRESSED

A conversation with Eliza. (Public Domain/ Wikimedia Commons)
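Seen from the inside, the trick was remarkably simple. What follows is a minimal sketch, in Python, of the two-tier design Weizenbaum describes: a “language analyzer” of keyword patterns plus a small “script” of response templates and pronoun swaps. The handful of rules here are my own illustrative inventions, not Weizenbaum’s original script (his DOCTOR script, written in the MAD-SLIP language, was far larger and subtler), yet even this toy can reproduce exchanges like the one above:

import re

# Tier one: a "language analyzer" of keyword patterns.
# Tier two: a "script" of canned response templates.
# These few rules are illustrative only, not Weizenbaum's original script.
RULES = [
    (r".*\bmy (boyfriend|girlfriend|mother|father) (.*)", "YOUR {0} {1}"),
    (r".*\bi am (.*)", "I AM SORRY TO HEAR YOU ARE {0}"),
    (r".*\balways\b.*", "CAN YOU THINK OF A SPECIFIC EXAMPLE"),
    (r".*\balike\b.*", "IN WHAT WAY"),
]

# Swap first-person words for second-person ones, so the program can
# echo the user's own words back at them.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(sentence):
    # Expand "I'm" so the "i am" rule can catch it.
    sentence = re.sub(r"\bi'm\b", "i am", sentence, flags=re.IGNORECASE)
    for pattern, template in RULES:
        match = re.match(pattern, sentence, re.IGNORECASE)
        if match:
            return template.format(*[reflect(g) for g in match.groups()]).upper()
    return "PLEASE GO ON"  # default reply when no keyword matches

for line in ["Men are all alike.",
             "They're always bugging us about something or other.",
             "Well, my boyfriend made me come here.",
             "He says I'm depressed much of the time."]:
    print("User:", line)
    print("ELIZA:", respond(line))

There is no understanding anywhere in those lines: the program matches a keyword and hands the user’s own words back, slightly transformed. That is the entire mechanism behind the illusion that so alarmed Weizenbaum.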

Although Weizenbaum thought of ELIZA as a parody, he quickly found that the program running his psychotherapist script (which he had named DOCTOR) became famous around MIT. Weizenbaum wrote that he was shocked at what happened next.

A few things shocked him and changed his worldview about chatbots. I want to focus on two of them:

First, many psychiatrists believed the computer program could come to replace human therapists. Weizenbaum wrote (page 5):

I had thought it essential, as a prerequisite to the very possibility that one person might help another learn to cope with his emotional problems, that the helper himself participate in the other’s experiences of those problems and, in large part by way of his own empathetic recognition of them, himself come to understand them.

Second, Weizenbaum was surprised at:

How quickly and how deeply people conversing with DOCTOR became emotionally involved with the computer and how unequivocally they anthropomorphized it. Once my secretary, who had watched me work on the program for many months, and therefore surely knew it to be merely a computer program, started conversing with it. After only a few interchanges with it, she asked me to leave the room.

“I believe this anecdote testifies to the success with which the program maintains the illusion of understanding,” he wrote.

In “ELIZA — a computer program for the study of natural language communication between man and machine,” a paper he published in 1966, Weizenbaum wrote:

Some subjects have been very hard to convince that ELIZA (with its present script) is not human.

This belief that there is a human operating the computer system has persisted to this day. It is why the Kit Kat commercial showing humans operating an ATM resonates with us.

It is the reason I speak to Waze and anthropomorphize Google. And now, with the proliferation of large language model (LLM) artificial intelligence programs such as ChatGPT, many people are fooled into thinking the computer is becoming sentient.

Back in 2022, a software engineer at Google named Blake Lemoine warned his bosses that an AI program called LaMDA was sentient. In response, he was placed on leave and then fired.

It is a mistake to attribute humanity to a computer program. Colin Fraser, a data scientist at Meta, wrote a long Medium post arguing that AI must be viewed not as an individual, but as a tool:

It feels vividly as though there’s actually someone on the other side of the chat window, conversing with you. But it’s not a conversation. It’s more like a shared Google Doc. The LLM is your collaborator, and the two of you are authoring a document together.

When people trust AI and forget that it is a tool, hilarity can ensue.

A month ago, a New York federal judge sanctioned lawyers who had used ChatGPT to write a legal brief that cited fabricated judicial opinions.

A school pupil handed in an essay on Shakespeare’s Twelfth Night without thinking to read the introduction, which stated:

I am sorry, but as an AI language model, I am not able to complete this assignment. However, I can provide you with some guidance on how to approach this essay.

To me, this humanizing, unquestioning belief in computer technology parallels the belief in idols described in the Torah.

Parshat Re’eh warns against being seduced to worship false gods (Deuteronomy 13:7-9):

If your maternal or paternal half-brother seduces you secretly, or your daughter or dear wife, or your friend whom you love as yourself, saying, ‘Let us go and serve other gods which neither you nor your ancestors have ever known.’… You should not follow him or listen to him.

For years I was puzzled. How could a rational person ever be convinced to worship an idol – especially given that the Israelites were living under the protection of God and had witnessed many miracles? How does idolatry make any sense?

Earlier (Deuteronomy 4:28), the Torah warned that if the Israelites worshipped idols, they would be exiled from the land to foreign countries:

And there you will worship gods made by humans, of wood or stone, that cannot see or hear or eat or smell.

Why would anyone want to worship something they have built themselves? How could a stone or a piece of wood become an idol?

But now, especially with the greater prominence of AI, I’m beginning to understand. We are able to create machines that appear far more powerful than we are. Technology has changed not only how we work, but also how we think. And ultimately, we start thinking of ourselves as inferior to the technology that we ourselves have created. As Weizenbaum wrote in 1966:

It is said that to explain is to explain away. This maxim is nowhere so well fulfilled as in the area of computer programming, especially in what is called heuristic programming and artificial intelligence. For in those realms machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer.

And as that technology becomes more sophisticated, it becomes more and more godlike. We no longer understand how it works, but we cannot manage without it. And we start to trust it more than we trust our own senses. This is why people occasionally drive their cars into the ocean, explaining afterwards that their GPS told them to do so.

Computers are tools, and so is AI. As long as we remember this, we are fine. But we can easily cross the line into believing that the computer should tell us what to do, how to act, or what to think. That is when it becomes idolatry.

Weizenbaum touched on this years ago, in his 1976 book:

Our society’s growing reliance on computer systems that were initially intended to ‘help’ people make analyses and decisions, but which have long since both surpassed the understanding of their users and become indispensable to them, is a very serious development. It has two important consequences. First, decisions are made with the aid of, and sometimes entirely by, computers whose programs no one any longer knows explicitly or understands. Hence, no one can know the criteria or the rules on which such decisions are based.

Second, the system of rules and criteria that are embodied in such computer systems become immune to change, because in the absence of a detailed understanding of the inner workings of a computer system, any substantial modification of it is very likely to render the whole system inoperative and possibly unrestorable. Such computer systems can therefore only grow. And their growth and the increasing reliance placed on them is then accompanied by an increasing legitimation of their ‘knowledge base.’

And he warned that the danger of this reliance on technology and computers is that we abdicate our responsibility as humans.

The ‘Good German’ in Hitler’s time could sleep more soundly because he ‘didn’t know’ about Dachau. He didn’t know, he told us later, because the highly organized Nazi system kept him from knowing. (Curiously, though, I, as an adolescent in that same Germany, knew about Dachau. I thought I had reason to fear it.) Of course, the real reason the good German didn’t know is that he never felt it to be his responsibility to ask what had happened to his Jewish neighbor whose apartment suddenly became available…

Today, even the most highly placed managers represent themselves as innocent victims of a technology for which they accept no responsibility and which they do not even pretend to understand…

The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it.

Weizenbaum cautions us to take responsibility for our own thoughts and actions. Artificial intelligence must not be allowed to dictate our decisions or our worldviews.

The seduction of idols was that, through their priests, they told people what to do and took away their responsibility. Today’s computers and programs have the same potential to become our idols and to relieve us of our responsibility.

It is up to us to ensure that never happens: to remember that artificial intelligence is a tool, not a god, and that the responsibility for determining truth and the best way to act remains ours. That understanding is the antidote to idolatry. Weizenbaum died in 2008, but his words from 1966 are as relevant as ever:

Once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible. The observer says to himself “I could have written that”. With that thought he moves the program in question from the shelf marked “intelligent” to that reserved for curios, fit to be discussed only with people less enlightened than he.

__________________

The next series on WebYeshiva begins on August 22 and is entitled “A History of Selichot”. You can sign up on WebYeshiva. I’ve also started sharing more of my Torah thoughts on Facebook. Follow my page, Rabbi Sedley.

About the Author
David Sedley lives in Jerusalem with his wife and children. He has been at various times a teacher, translator, author, community rabbi, journalist and video producer. He currently teaches online at WebYeshiva. Born and bred in New Zealand, he is usually a Grinch, except when the All Blacks win. And he also plays a loud razzberry-colored electric guitar.