Interacting with God by employing AI?
The New York Times, God bless it, by publishing its article “Opening Religion to Artificial Intelligence” (Tani, Eli, N.Y. Times, 1/7/2025), has oddly encouraged its readers to take their belief in religion and religious practice in a brand-new direction.
Just imagine: as the Times article reports, some clergy members are actually employing AI to help write their sermons. Surely there is no issue with clergy using AI to research sermons given in the past, by themselves or others, in order to adapt them contextually for use or reuse today.
But what, exactly, comes next? “AI priests” who lack “intercessor” status under Catholic doctrine, extending absolution, as an actual priest might, so long as the confessant – whether or not seated in a confessional booth – admits his sin and promises not to repeat it? AI rabbis declaring what violates the Sabbath or the laws of kashruth, and what does not? AI imams lecturing about the Prophet Muhammad’s “Night Flight” without any human understanding of the doctrine they espouse?
Has the world become so digitally driven that it sees human interaction as a largely unnecessary element of belief in God or religion – beyond AI’s basic reportage of what has previously been authored about God’s supposed existence? So long as the AI sermonizer doesn’t “hallucinate,” to use the jargon for AI responses that have strayed from fact, can we now receive as gospel – no pun intended – what AI alone tells us about God’s existence and the authoritativeness of religious doctrine? Indeed, maybe we don’t need priests, rabbis or imams at all. God forbid!
Can a purely mechanical instrument that has never encountered a newborn baby, a beautiful sunset, or an uncommonly selfless act of charity opine on the existence of God? If ChatGPT authoritatively (in the eyes of many) tells a young subscriber that she has a perfect right to believe in God but needn’t follow any of the technical precepts of the religion in which she was brought up, would there be any value left in human clergy trying to persuade her that her view is arguably mistaken or shortsighted?
Along what potentially treacherous path are we heading? Will AI allow some elusive digital figure that God didn’t in any real sense create (except in His capacity as “prime mover”) to explain the meaning of God? Will people truly find value, meaning or comfort in a robotic AI entity telling them in purely rote fashion that “everything will be all right,” or that “if you pray to God, He will surely bring healing to your loved one” when that loved one falls deathly ill or suffers a horrible accident?
Of course, for most, probably not. Yes, one might be moved by such an attempt at consolation from his priest, rabbi or imam – because his own belief system (which he has come to accept or adopt) helps him believe it. Yes, it would obviously be better and more persuasive if God Himself were to look a person in the eye and provide that comfort directly; but God doesn’t, generally speaking, engage in such “conversation” with human beings in the strict sense of the term.
So, thinking that perhaps I was being too hard on AI (and that I might unintentionally be hurting its “feelings”), I actually asked ChatGPT: “Should people rely on ChatGPT if it says that God exists?” It responded this way:
“Relying on ChatGPT – or any AI – when it comes to matters like the existence of God can be tricky. While ChatGPT can provide information from various sources, perspectives and philosophies, it does not have personal beliefs, consciousness or access to the divine. It cannot offer definitive answers to spiritual or existential questions as such topics are deeply personal and subjective.”
I guess the question I posed to ChatGPT was a softball; but perhaps I had also been selling AI short. Maybe ChatGPT, my non-human correspondent, got it completely right in human terms – in fact reminding us that existential questions such as the existence of God are, at the end of the day, “personal and subjective.”
Indeed, to paraphrase Dirty Harry: even “a bot’s GOT to know its limitations!”