Standing before Am Yisrael on the day before his death, Moshe speaks what I consider to be some of the most powerful verses in the entire Torah [Devarim 30:11-14]: “For this commandment that I command you this day is not concealed from you, nor is it far away. It is not in heaven, that you should say, ‘Who will go up to heaven for us and fetch it for us, to tell [it] to us, so that we can perform it?’ Nor is it beyond the sea, that you should say, ‘Who will cross to the other side of the sea for us and fetch it for us, to tell [it] to us, so that we can perform it?’ Rather, this thing is very close to you; it is in your mouth and in your heart so that you can perform it.” Moshe’s words are at once both comforting and encouraging: Nothing is beyond our reach if we really try.

And yet something remains unclear: what is this “commandment” that is “not concealed” or “far away”? Which particular commandment is Moshe referring to? Some of the medieval commentators[1] suggest that Moshe is talking about the mitzvah of repentance, a mitzvah that is described at length in the immediately preceding verses. According to this interpretation, Moshe is telling Am Yisrael that “this commandment [that I have just commanded you] is not far from you”: Repentance is not impossible – Hashem has given us the opportunity to reinvent ourselves and to change our trajectory in life. Other commentators[2] open the aperture a little bit wider. They suggest that “the commandment” is really “all of the commandments”, and that Moshe is talking about the entire Torah: Living a life of Torah and mitzvot is eminently possible and infinitely rewarding. I propose opening the aperture even wider. The Zohar teaches that “Hashem looked at the Torah and then created the world”. In colloquial English we could say that “The Torah is the blueprint according to which the world was designed”. By extrapolation, then, it would be possible to say that when Moshe is talking about “the entire Torah” he is actually talking about “the entire universe”. Now we must work out the message Moshe is trying to convey: what does Moshe mean when he says that the entire universe is neither “concealed” nor “far away”?

A few weeks ago, a scientific paper called “Why Does Deep and Cheap Learning Work So Well?” was published. The paper, written by two physicists, Henry Lin from Harvard and Max Tegmark from MIT, is nothing less than momentous. Lin and Tegmark investigate something called “Deep Learning (DL)”. Here is what Wikipedia has to say about DL: “DL is a branch of machine learning based on a set of algorithms that attempt to model high level abstractions in data by using a deep graph with multiple processing layers, composed of multiple linear and non-linear transformations.” While this sounds like something out of Star Trek, DL has been around in one form or another for more than seventy years. DL is associated with something called “Artificial Neural Networks (ANN)”, which grossly mimic the way the human brain works[3]. In short, the basic unit of the human brain is called the neuron. A neuron either fires or does not fire, depending on the amount of stimulation that it receives. The brain contains about one hundred billion neurons that are interconnected in Shrek-like layered networks, in which the output of certain neurons serves as the input to other neurons. An ANN has a similar layered structure. An ANN is “trained” to model a desired system by adjusting the “weights” of individual neurons. DL, whose modern form dates to 2006, is a powerful method of training an ANN, and it has set scientists free to see what these networks are capable of. It turns out that they are extremely capable. Using an ANN trained with DL, a computer has beaten world masters in the ancient Chinese game of Go, once considered to be the Holy Grail of artificial intelligence. But what DL is best at is classification – recognizing faces and objects. In fact, DL has surpassed humans in this capability, and scientists want to know why these networks are so successful. This is where the Lin-Tegmark paper comes in. It suggests that the reason DL is so powerful is that the universe is built in a way that lends itself to being modelled by neural networks. In other words, it’s not the mathematics, but, rather, the physics.
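
For readers who are curious what these “neurons”, “layers” and “weights” actually look like, here is a tiny sketch in Python. It is my own toy illustration – the layer sizes, numbers, and function names are made up for the example and are not taken from the Lin-Tegmark paper:

```python
import numpy as np

def neuron(inputs, weights, bias):
    """A single artificial neuron: its output rises or falls
    depending on the weighted stimulation it receives."""
    return 1.0 / (1.0 + np.exp(-(np.dot(weights, inputs) + bias)))  # sigmoid activation

def layer(inputs, weight_matrix, biases):
    """A layer is just many neurons looking at the same inputs;
    its outputs become the inputs of the next layer."""
    return np.array([neuron(inputs, w, b) for w, b in zip(weight_matrix, biases)])

rng = np.random.default_rng(0)

# A toy network: 4 inputs -> 3 hidden neurons -> 1 output neuron.
W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=3)
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)

x = np.array([0.2, 0.7, 0.1, 0.9])   # some input data
hidden = layer(x, W1, b1)            # first layer of neurons
output = layer(hidden, W2, b2)       # second layer uses the first layer's output
print(output)

# "Training" (for example, by deep learning) means nudging W1, b1, W2, b2
# until the output matches what we want for many example inputs.
```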

Let me try to illustrate. One example discussed in the Lin-Tegmark paper is determining whether a certain picture contains a cat or a dog. The brute-force task of examining every possible combination of pixels, even in a small one-megapixel image, is fantastically huge[4]. Fortunately, we don’t need to look at every pixel. The secret is that if you rotate a cat, it still looks like a cat. If you make a cat smaller or larger, it is still a cat, and if you move the cat around in the picture, it is still a cat. These facts greatly simplify the problem of cat (or dog) recognition. But it didn’t have to be that way: the laws of physics could have been designed such that if you turn a cat upside down it looks like a dog, in which case our problem would become much more intractable, and image recognition would become much more difficult.
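
To make the rotation and translation point concrete, here is another toy sketch. The classify function below is a hypothetical stand-in for a real cat/dog classifier (it only looks at total brightness so that the example actually runs); everything here is my own illustration, not code from the paper:

```python
import numpy as np

def classify(image):
    """Hypothetical stand-in for a trained cat/dog classifier.
    To keep the sketch runnable it only looks at total brightness."""
    return "cat" if image.sum() > 5 else "dog"

image = np.zeros((8, 8))
image[2:5, 3:6] = 1.0                    # a crude "cat" somewhere in the frame

# The physics of our world is such that these transformations
# do not change what the picture is a picture of:
rotated = np.rot90(image)                # turn the cat on its side
shifted = np.roll(image, 2, axis=1)      # move the cat across the frame

assert classify(image) == classify(rotated) == classify(shifted)
```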

We should not find this well-defined behaviour surprising, as the universe has a well-defined hierarchical structure. “Elementary particles form atoms which in turn form molecules, cells, organisms, planets, solar systems, galaxies, etc.,” say Lin and Tegmark. “And complex structures are often formed through a sequence of simpler steps.” Lin and Tegmark take this idea one step further: At the end of the day, the behaviour of a galaxy is determined by the behaviour of its smallest particles. This means that the laws of physics that govern the entire universe can be – and indeed are – described by a relatively small set of functions, parameters, and constants.
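
Here is a toy way of picturing that “sequence of simpler steps” – again my own illustration, not anything from the paper. A miniature “universe” generates a picture through a short chain of simple stages, each governed by only a few parameters, so that recovering the high-level facts means undoing the stages one at a time:

```python
import numpy as np

rng = np.random.default_rng(0)

# A miniature "generative hierarchy": each step is simple and has few parameters.
def choose_animal():                    # step 1: a single high-level fact
    return rng.choice(["cat", "dog"])

def choose_pose(animal):                # step 2: a couple of parameters
    return {"animal": animal, "row": rng.integers(0, 6), "col": rng.integers(0, 6)}

def render(pose):                       # step 3: the pixels follow from the pose
    image = np.zeros((8, 8))
    brightness = 1.0 if pose["animal"] == "cat" else 0.5
    image[pose["row"]:pose["row"] + 2, pose["col"]:pose["col"] + 2] = brightness
    return image

image = render(choose_pose(choose_animal()))

# Because the data was built by a short chain of simple steps, the
# high-level fact can be recovered by undoing the steps one at a time.
label = "cat" if image.max() == 1.0 else "dog"
print(label)
```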

Readers who have made it this far can now understand why Lin and Tegmark believe that an ANN, especially when taught by DL, is so powerful: The layered structure of the ANN enables it to naturally model our hierarchical world, and different layers of an ANN can correspond to different levels of hierarchy. Here is a quote from a review[5] of the Lin-Tegmark paper that I dare not change: “So not only do Lin and Tegmark’s ideas explain why DL machines work so well, they also explain why human brains can make sense of the universe. Evolution has somehow settled on a brain structure that is ideally suited to teasing apart the complexity of the universe.” In “Jew-Speak”, we could say that Hashem created man in such a way that he can make sense of the universe in which he lives. With a little exertion, man can analyse Hashem’s handiwork and understand how it all fits together. Nothing is concealed. It’s all right there in front of him.

Now we can begin to gradually close the aperture. If man is inherently capable of understanding how the universe ticks, then he must also be capable of understanding the workings of the Torah, according to which the universe was designed. Closing the aperture even further, if man can understand the entire Torah, then he can surely understand one of its most basic concepts – the concept of repentance. According to the Talmud in Tractate Pesachim [54b], repentance, along with the Torah[6], was “created before the world”, meaning that repentance, like the Torah, is part of the blueprint according to which Hashem created the world.

It’s all right there for the taking: the spiritual and physical rules that govern our universe, along with a fix if we break any of the rules. So what are you waiting for?

Shabbat Shalom and Shana Tova,

Ari Sacher, Moreshet, 5776-7

Please daven for a Refu’a Shelema for Moshe Dov ben Malka and Adi bat Ravit.

[1] See the Seforno ad loc.

[2] See the Ramban ad loc., and see the Ohr HaChayim HaKadosh, who offers two interpretations: one for repentance and one for the entire Torah.

[3] Perhaps it would be fairer to say that Neural Networks are “biologically inspired”.

[4] Such an image consists of one million pixels that can each take one of, say, 256 grayscale values. There are 256^1,000,000 possible images, and for each one it is necessary to compute whether it shows a cat or dog.

[5] “Physicists have discovered what makes neural networks so extraordinarily powerful”, appeared on September 9, 2016, on the MIT Technology Review, at https://www.technologyreview.com/s/602344/the-extraordinary-link-between-deep-neural-networks-and-the-nature-of-the-universe/

[6] And along with five other concepts, including heaven, hell, and the name of the Mashiach.