Human, Angel, or Machine: The Challenge of Consciousness
I. Introduction
The confrontation between Jacob and an unidentified being in Genesis raises fundamental questions about the nature of consciousness, identity, and the boundaries between different orders of being. This encounter gains new relevance as we approach an era where artificial intelligence may become indistinguishable from human intelligence.
II. The Biblical Paradigm
The Torah presents two distinct instances of ontological ambiguity regarding angels. The first appears in the term malakh (מַלְאָךְ), which can denote either human or divine messengers. The second, more profound ambiguity emerges in Genesis 32:25, where Jacob’s supernatural opponent is described simply as an ish (“man”):
“And Jacob was left alone; and there wrestled a man with him until the breaking of the day.” (Genesis 32:25)
Traditional commentary (Midrash Tanchuma, Rashi) identifies this figure as Esau’s guardian angel in human guise, yet the text’s deliberate ambiguity raises a crucial question: How does one discern the true nature of an apparently human entity?
This question had already emerged in an earlier narrative when three angels visited Abraham after his circumcision. The text introduces them as “three men” (Genesis 18:2), and Abraham’s subsequent meal preparation suggests he initially perceived them as human visitors. This biblical paradigm of angels appearing in human form presents what we might term an ancient version of the identification problem: determining the true nature of an entity that presents itself in human form.
III. The Nature of Consciousness and Agency
The distinction between humans, angels, and artificial intelligence hinges on two fundamental characteristics: consciousness and free will. Humans and many other animals (and, in my opinion, all living beings) possess consciousness. Importantly, consciousness is not intelligence: many animals are not intelligent, yet they are undeniably conscious.
In the words of the philosopher Thomas Nagel, consciousness is the subjective experience of being—the “what it is like” aspect of a mental state—as articulated in his seminal paper, “What Is It Like to Be a Bat?” (1974).
The necessary ingredients of consciousness are feelings, what philosophers term “qualia”—instances of subjective, conscious experience (Eliasmith, 2004). Some philosophers argue—and I share this view—that consciousness in general and qualia in particular are irreducible to physical processes, presenting what David Chalmers (1995) terms the “hard problem of consciousness.” Examples of qualia include what it is like to feel a headache, smell a particular apple, or experience the redness of a rose. Each requires a feeling—at least a rudimentary sensation of pain or pleasure. All living forms appear to have this “phenomenal consciousness,” as Chalmers (1996) calls it.
According to Jewish theological tradition, both humans and angels possess consciousness in the form of emotional experience. Angels experience profound spiritual states, including yirat Hashem (“fear of G‑d”), as described by Maimonides in Mishneh Torah (Yesodei HaTorah, 12 c./1984), and ahavat Hashem (“love of G‑d”), as elaborated in the Zohar (II:236b; III:19a).
However, a crucial distinction emerges regarding free will. While humans possess beḥirah ḥofshit (“free choice”), angels, despite their consciousness, are bound to their divine missions without true autonomy, as noted in the Midrash (Bereshit Rabbah, 48:11) and elaborated by Rabbi Chaim Vital (Shaarei Kedushah, part 3, ch. 2, 16 c./1986). In this regard, humans have the potential to reach even higher spiritual levels than angels (Talmud, Chullin 91b; Tanya, Likutei Amarim, Part I, ch. 39 & 49, 1984).
While machines may simulate intelligent behavior and even emotional responses, they lack both the subjective experience of consciousness and the genuine agency of free will. This ontological limitation persists regardless of their computational sophistication. Therefore, no matter how intelligent, machines cannot become sentient, even in principle.
IV. The Challenge of Machine Consciousness
Machines have neither feelings nor the freedom of choice. No matter how intelligent a machine (or software) may be, even if it reaches the level of artificial general intelligence matching or exceeding human intelligence, it cannot experience feelings. It cannot have a subjective experience because it is not a subject. As philosopher John Searle might say, there is nobody “there.” Consequently, AI systems and robots endowed with AI cannot have qualia and are not conscious. Furthermore, AI systems and intelligent machines cannot have the freedom of choice because there is no subject to choose among the available alternatives. These two factors preclude the emergence of sentience in AI systems and intelligent machines.
1. Testing Machine Intelligence: The Turing Test and Beyond
The challenges of identifying machine consciousness become particularly apparent when we examine our methods for testing artificial intelligence. While the Turing test (Turing, 1950) aims to determine whether a machine can exhibit intelligent behavior indistinguishable from that of a human, it does not address whether a machine can experience subjective feelings. Indeed, the question of whether a machine can feel—experience subjective emotions, sensations, and qualia—is substantially more difficult than determining whether it can think in a way that mimics human intelligence. The Turing test is fundamentally a behavioral benchmark: if a machine can produce human-like responses in conversation well enough to be indistinguishable from a human, we say that it “passes” the test, at least with respect to intelligence. Dennett (1991) argues that consciousness can be understood behaviorally (“heterophenomenology”) but acknowledges the challenge of proving subjective inner states. More definitively, Block (1995) distinguishes between “access consciousness” and “phenomenal consciousness,” highlighting that computational access does not prove subjective experience.
2. The Challenge of Testing for Feelings
This limitation of behavioral testing becomes even more apparent when we consider the nature of emotional experience. Feelings—pain, pleasure, longing, joy—are inherently private experiences. When you claim to feel pain, no external observer can directly verify what it is like for you. With a machine, this problem is magnified. How would we know the machine truly has an inner, subjective experience rather than simulating the outward signs of one? Scholars across philosophy of mind, cognitive science, and AI ethics have grappled with whether computational processes can ever generate true “inner experience” (Block, 1995; Searle, 1980; Tononi, 2004). Searle’s (1980) Chinese Room argument demonstrates that behavioral similarity does not necessitate shared internal states.
Our biblical paradigm offers a parallel: when pondering whether the “man” (ish) he was wrestling with was a human or an angel, Jacob asked for his name. Similarly, the Turing test relies solely on behavior (linguistic output). We might try a similar test for feeling by interacting with the machine—asking how it “feels” about certain experiences—but the machine could always mimic the language of emotion without genuinely having any inner sensation.
3. The Question of Free Will
The challenge becomes even more complex when we consider the question of genuine free will or agency. While feelings relate to subjective experience, free will or agency encompasses the capacity for self-directed action and moral accountability (Dennett, 1984; Kane, 2002; Wallach & Allen, 2009). This philosophical complexity manifests in compatibilist, incompatibilist, and illusionist views (Harris, 2012; Wegner, 2002).
Free will is a deeply philosophical concept—not only do we lack a definitive test even for humans, but it is also unclear how the concept translates into artificial systems. Nevertheless, certain lines of questioning can probe signs of autonomy, self-direction, and moral reflection. These questions will not definitively prove or disprove true free will, but they can reveal whether the AI or robot displays behaviors consistent with agency beyond its programmed constraints.
However, a fundamental challenge remains: even if an AI responds convincingly, it might be merely simulating free will through complex algorithms and language models. The philosophical challenge is profound: no matter how compelling the answers, they could still be the product of advanced but purely deterministic processes. Indeed, today’s advanced language models can already simulate free will or emotional depth (Bostrom, 2014; Russell & Norvig, 2020).
V. The Future of AI
These philosophical challenges take on urgent practical significance as we look toward the near future. With the advent of artificial general intelligence and humanoid robots, distinguishing AI from human intelligence—or a humanoid robot from a real human—will become increasingly difficult, if not impossible. This growing ambiguity mirrors the story of Jacob wrestling with a “man” (ish): throughout the struggle, which lasted the whole night, Jacob was unaware that his opponent was an angel.
The parallel extends further: although Jacob ultimately prevails, he emerges from the struggle transformed, limping on a dislocated hip. As I wrote in a previous essay, “Wrestling with AI: From Divine Dreams to Digital Reality,” this biblical narrative can be read as a prediction of the future struggle between the human race and AI. While humanity may prevail in its struggle with AI, we are unlikely to emerge unscathed from this transformative encounter.
VI. The Soul
The challenges we face in distinguishing human from artificial consciousness point toward a more fundamental question about the source of genuine consciousness and agency. I am inclined to believe that both feeling and true agency are fundamentally rooted in the soul. According to Jewish tradition, the human soul is a divine spark that endows us with our sense of self, subjective experience, and true agency. This divine spark—the soul—is the inner subject that experiences consciousness, giving us feelings and emotions. It empowers us with true agency and free will.
Furthermore, Jewish mystics teach that animals and plants also possess souls. I believe all animate matter, including unicellular organisms, has some rudimentary form of soul—a divine spark that animates it. By contrast, AI, computers, or robots—lacking this divine spark—cannot possess genuine feelings or free choice. They may simulate emotions or decision-making, but without the spiritual dimension of the soul and its capacity for moral struggle (beḥirah ḥofshit), they do not attain true personhood.
Yet this theological insight leads to a profound practical dilemma: most of us do not “see” souls. We perceive only outward behavior or intelligence. Thus, from our limited vantage, we struggle to differentiate between a human being endowed with a soul and an advanced machine with sophisticated intelligence but no soul. This limitation reflects a broader principle in Jewish thought—that spiritual realities are hidden in our current era. Indeed, the very Hebrew word for “world,” olam, is cognate with helem (“concealment”), implying that in this world, divine truth is hidden from our eyes.
This concept of hiddenness finds particular resonance in the narrative of Jacob’s wrestling match (Genesis 32:25–31). Jacob only realizes his opponent’s angelic nature when the angel says, “Let me go, for daybreak has come” (Genesis 32:27). According to the Talmudic tradition (e.g., Talmud, Chullin 91b), angels must sing praises to G‑d at dawn, implying they cannot linger past daybreak.
This temporal element carries deep symbolic significance. Jewish literature often uses night as a metaphor for galut (physical and spiritual exile, a time of spiritual obscurity), while daybreak symbolizes messianic redemption—an era of revealed divine truth. The Zohar (e.g., I:119a; II:6a) repeatedly draws parallels between cosmic darkness and the concealment of spiritual reality. In such sources, the dawn represents the final redemption (geulah), when “the earth shall be filled with knowledge of the Lord” (Isaiah 11:9). Commenting specifically on Genesis 32:27, Rabbeinu Bahya (c. 14th century) notes that daybreak symbolizes the messianic redemption.
VII. Conclusion
This metaphorical framework offers profound insight into the ultimate challenge of advanced AI. Just as Jacob emerged from his struggle with an angel with both an injury and a blessing, humanity may also be tested by increasingly sophisticated technology. Yet once “daybreak” arrives—when divine truth is openly revealed—the distinction between soul-endowed beings and their simulacra will become self-evident. As the prophet Isaiah says, “Arise, shine, for your light has come” (Isaiah 60:1), pointing to a future in which spiritual realities, such as the human soul, are visibly manifest.
A longer version of this essay was originally published on QuantumTorah.com on 12/16/2024.