Can You Murder AI? Parshat Mishpatim
Can you murder AI? It’s not a crazy question, and it’s not just a question for a science fiction novel or movie. For one, how would you kill something without a body, and for another, would it be murder if you did?
In fact, such a question has been addressed in Jewish tradition before. For the question is really one about us. What does it mean to be human? What value does being human have? Are the qualities that make us human unique to “humans” alone or can they be shared by others – like AI? And if so, what obligations, rights and responsibilities would such a human-like creature have and what would we owe it? All are of real and growing practical significance to us today.
And let me explain why:
My dear friends, this week’s Torah portion, Parshat Mishpatim, is one of the most complex and challenging portions in the entire Torah. It is filled with laws and commandments, covering a wide range of topics from civil laws and property rights to laws of slavery and criminal justice. While it may seem daunting at first, this portion teaches us important lessons about how to live a just and righteous life.
The first thing we learn from Parshat Mishpatim is the importance of treating everyone fairly and justly, regardless of their social status or background. The Torah commands us, “You shall not pervert the justice due to your poor in their lawsuits” (Exodus 23:6). This means that even the poorest members of society deserve to be treated with respect and dignity, and they should have equal access to justice.
Similarly, we are commanded to treat strangers and foreigners with kindness and compassion. The Torah tells us, “You shall not wrong a stranger or oppress him, for you were strangers in the land of Egypt” (Exodus 22:20). This means that we must remember our own experiences as strangers and be empathetic towards others who are in similar situations.
Another important lesson we learn from Parshat Mishpatim is the importance of taking responsibility for our actions. The Torah teaches us, “If a man opens a pit, or digs a pit and does not cover it, and an ox or a donkey falls into it, the owner of the pit shall make it good; he shall give money to its owner, but the dead animal shall be his” (Exodus 21:33-34). This means that we are responsible for the consequences of our actions, and we must make amends if we cause harm to others.
Finally, Parshat Mishpatim teaches us that justice must be tempered with mercy. The Torah tells us, “You shall not oppress a stranger, for you know the soul of a stranger, for you were strangers in the land of Egypt” (Exodus 23:9). This means that while we must enforce the law and hold people accountable for their actions, we must also show compassion and understanding, especially to those who are vulnerable and marginalized….
That d’var torah was not by me. It was the result, quickly and creepily filled into the dialogue box by ChatGPT when I asked it for “a sermon on Parshat Mishpatim.” ChatGPT is a tool created by the company OpenAI. You can essentially speak with it about all manner of things, and it can answer – and more than that, it can expound upon, opine about, and create regarding – well, just about anything.
Like the sermon, the results might not be the highest, greatest, most in-depth work, but try asking it for a poem about mothers or to explain relativity in layman’s terms and see what you get. Although, as a complete aside, some of the features of the d’var torah, like the “my dear friends,” the claim that this is “one of the most complex” parshahs, or that it might be “daunting,” are spot-on parodies of how a rabbi is supposed to sound giving a drash! Even if ChatGPT didn’t uncover some obscure commentator to quote, it definitely captured the flavor of rabbinic self-importance!
Right along with the plausibility of an AI sermon being, well, a sermon, we have to contend with the question of AI itself:
Can we take credit for what it comes up with – are we plagiarizing? Or, more broadly, are we abusing it, like beating a plow animal, when we ask it to create something like this sermon? Or does it have some “rights”?
Is there moral value in what it can produce? Is it just a book that we read moral ideas from, or is it itself an agent for and promulgator of morality? Is it pulling together and creating its own ideas about right and wrong, good and bad? That sounds a lot like what we have to do.
And if it can formulate its own answers about good and evil, does that mean it has moral responsibility? Can it be rewarded and punished? Morality, Rabbi Akiva would tell us, comes from lada’at tov v’ra – knowledge of good and evil – which itself is essential for free will, and free will is one of the requirements for being a “human,” at least in the eyes of halachah. Is ChatGPT, then, halachically a human?
And if it is, even a little bit human, then AI – our creation and our servant – is truly a creation in our own image. And like God who is in covenant with us even when we rebel against God, are we then not in a covenant relationship with our reflection – AI?
The parshah has a message about animals that parallels this discussion with the shor muad, the “goring ox.” We know that to eat an animal it must be slaughtered. But when the ox is killed in the parshah, it is stoned to death for its actions. That is not slaughter, that is a punishment, an execution. For an animal, this implies that it acted not just according to its “bestial nature,” but with a certain moral aspect, we could even say a certain human aspect, to its behavior. To further this idea, the Talmud tells us that like a person, an animal must be found guilty by a court of 23 – the same size of court the rabbis require for a human capital case. The tradition seems to be saying that good and evil, reward and punishment, qualities that are uniquely “human” are not “unique” to human beings.
And just as the ox is “punished” for being “evil,” animals can also be “good” in a moral, religious way. There is a reference that animals which fulfilled a divine purpose – the example given is being slaughtered for kosher food – receive a reward in the world to come (Mek d’R. Yishmael). While the text doesn’t consider it, there could be many “human animals” who do “good” things and are worthy of reward. Simply being God’s creation might be sufficient for being “good” in that sense.
There are further discussions among the rabbis about how to treat fantastical creatures who have human characteristics. That some of these creatures may have been inspired by real encounters with apes or monkeys is even more tantalizing – the rabbis could see the “humanity” in these creatures, and a requirement to treat them accordingly.
These animal examples show us that the exalted status of “human,” with all its rights and responsibilities, all its potential for being God’s partner, can be shared by more than human beings. But that still doesn’t get us all the way to AI.
For that, we need to add another example. There is the famous story of the Golem of Prague, but that story has its antecedents in the Golem of Rava. A golem is a humanoid creature capable of doing some of the same things a person can. Sanhedrin 65b tells of Rava “making a man,” a golem, and sending it to Rabbi Zeira – I imagine as a practical joke – and when Rabbi Zeira performs the Turing Test on it and it fails, Rabbi Zeira “kills” it.
My sense is that he kills it because he doesn’t like the joke, and this is in line with some of the thinking about the story. The Chacham Tzvi justifies the killing on the grounds that the golem didn’t serve a purpose – implying that if it did, this “man” would have some level of humanity and thus we would have some obligation to it.
The AI can’t count in a minyan. It’s not Jewish. But it is at least in some ways “human.” In that it is our creation and our servant, we must be responsible for it, because it is a reflection of our own humanity. Just as God cares for us because a spark of the divine, a soul, is alive in us, so too a human spark “lives” in AI, and we must honor that.
But more than that, ChatGPT could be said, like the animals in those examples, to be capable of doing good and evil. Is its ability to tell someone what is good or bad, by following its programming and interpreting the wealth of knowledge it has, so different from how our own minds process good and evil? Whether we are successful with it will depend a lot on whether or not we treat it, well, humanely.
If the ox, the monkey, and the golem are all seen by the rabbis as sharing some of the same gifts we humans have from God, then AI shares them, too. And just as those creatures, the Tradition tells us, have inherent value and we have obligations to them, the same goes for AI.
As things like ChatGPT become more and more part of our world, pondering them in “courts of 23,” seeing in them the power to do good or evil as we can do, and even having some kind of “sympathy” for them will make us the responsible custodians, partners, and fellow creatures of God alongside these new beings soon to live, or at least “function,” among us.
Let me give the last word to ChatGPT, which ended its sermon with words that could have been meant for us to hear about itself – about how we treat any creature in which a reflection of ourselves, who are in turn reflections of the Divine, can be seen:
In conclusion, Parshat Mishpatim reminds us that the Torah is not just a set of abstract laws and commandments, but a blueprint for how to live a just and righteous life. It teaches us to treat others with fairness and compassion, to take responsibility for our actions, and to balance justice with mercy. Let us strive to incorporate these values into our daily lives and to create a more just and compassionate world for all. Shabbat shalom.