Jean Pierre Braun

Who’s afraid of Artificial Intelligence? Anticipated Benefits, Dreaded Dangers


It is almost impossible today to open a newspaper or listen to the news on TV without being subjected to yet another report on Artificial Intelligence (AI), its grandiose benefits for mankind and its existential dangers.

This blog article is not meant to provide exhaustive coverage of AI technologies across their vast field of applications, but simply to give readers some background on interesting AI applications and to clarify the benefits and dangers attached to AI, as well as some possible remedies.

What is AI and what is not?

In the famous 1983 movie “War Games”, Matthew Broderick plays a high school computer geek who manages to infiltrate a NORAD defense mainframe. There he stumbles onto a program with an enticing name, which he believes is an innocent game (it is of course a real computer program designed to defend the USA against what were then still Cold War threats), and starts playing “Global Thermonuclear War.” However, this innovative program was built not only to recognize a hostile move when it sees one – such as a missile launch aimed at US territory – but also to resist attempts, whether cyber-attacks or physical interference, to prevent it from carrying out the task it is programmed to do, namely to annihilate the enemy. For the remaining 90 minutes of the movie, this smart student battles the computer and its program to find a way to stop the inevitable: mutual global annihilation. The student succeeds, of course.

The “program” at the center of this movie exhibits several of the key elements of AI: it reacts to prompts (the foreign threat), adapts its reaction to the nature of the prompt, and knows what the end goal is. Furthermore, it has been given authority to act and deliver the end goal, in this case by launching thousands of missiles on an unsuspecting enemy, all this without any human intervention.

This is a modern-day Golem, isn’t it? The dream of creating a non-human intelligent being has been part of the Jewish tradition for a long time. In the Talmud (Tractate Sanhedrin 38b, then 65b), two versions of such an artificial but intelligent being are explored. The Golem, of course, became famous at the time of the Maharal of Prague (16th century). The attributes of the Golem are the same as those we listed above for the movie War Games: the ability to recognize a threat, to devise a defense, and to act to achieve the end goal (here, the protection of the Jewish people).

By today’s definition of AI, one key element is missing in these two examples: the ability to learn!

What is called Machine Learning (in and of itself a relatively new field of Computer Science) is really what gives intelligence to the AI machine or program. Of course, many other fundamental developments were necessary for these ideas to be implemented successfully, among them: neural network computing and all its derivatives (computers architected to mimic how a human brain stores, accesses and processes information), and the many iterations of machine learning such as reinforcement learning, deep learning and Q-learning (mostly aimed at giving the AI system more independence in acquiring the knowledge it needs, with less and less human intervention).
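
For readers who like to see what such learning looks like in practice, here is a minimal, purely illustrative sketch of tabular Q-learning, one of the techniques mentioned above. The tiny “walk to the goal” world, the parameter values and the variable names are my own assumptions for the example; no real AI system is anywhere near this simple.

```python
import random

# Minimal tabular Q-learning sketch (illustrative only).
# The toy "walk to the goal" world below is an invented example,
# not taken from any system discussed in this article.

N_STATES = 5          # states 0..4, state 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move in the toy world; reward 1 only when the goal is reached."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy choice: mostly exploit what was learned, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted best future value.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

print({k: round(v, 2) for k, v in Q.items()})
```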

In summary, an AI machine or program learns from its own experience, is able to modify its behavior towards the end goal that has been defined beforehand, and is able to recognize whether the outcome of its actions has been successful. Whether or not such a machine has been given the authority to act on its own to achieve its goal depends entirely on design decisions made by the designers of said machine.

There are many sophisticated devices and programs deployed in the real world today, but unless they have the attributes listed above, they cannot truthfully be labelled as possessing AI.

Action-oriented AI

Originally composed of game programs and robots, this category of AI applications has grown exponentially in the last 10 years to include many kinds of machinery, as well as programs for gaming, accounting, engineering, medicine and many other applications.

The Google DeepMind program AlphaGo became an instant sensation in 2016. AlphaGo was at that time the single most advanced AI-based game program, with unsurpassed self-learning ability and the power to statistically predict the game’s outcome move after move, based on its own, ever-evolving knowledge base developed continuously by playing many thousands of games against itself (reinforcement learning). Go is the oldest continuously played board game in the world and is an integral part of the education and culture of many Far East countries. When the AlphaGo team challenged Lee Sedol, a world champion and arguably the best individual ever to play the game of Go, to a best-of-five match, sarcasm could be heard throughout the Far East: Lee Sedol was going to crush AlphaGo five to nothing, no question about it. Well, AlphaGo won the first three games, and therefore the match, while over 80 million TV viewers watched the drama unfold. Lee Sedol, visibly crushed and in tears after the third game, apologized to the people of Korea for being unworthy of their confidence. He ended up winning the fourth game and losing the fifth and last one.

Several things are of consequence as we try to understand how AI will play out in our lives: (1) During the match, AlphaGo made some moves that, according to the experts, “no human player would ever think of making.” This is of critical importance: the AI machine cannot be construed as “thinking outside the box,” since it knows of no box. It was given just the minimum set of rules for moving the stones, plus the end goal of the game. All the rest it learned by itself over countless games in which no human was involved. In a gracious, but oh so true, acknowledgement, Lee Sedol himself admitted having learned and improved his own game by playing against AlphaGo. But also (2) the AI machine’s win was viewed as a devastating cultural blow to populations that consider the game of Go an art, almost a religious ritual.

Many robots are in use in automobile manufacturing plants. Many are sophisticated; a few possess some form of Artificial Intelligence. These are often used in painting and welding shops and are regularly featured in Discovery Channel programs. These robots understand their task, have learned through training the various ways to accomplish it, are sometimes even equipped with visual inspection tools to evaluate whether the task has been performed well, and are capable of evolving their knowledge base to improve their performance.

AI robots are spreading fast: Amazon warehouse robots and delivery drones, Japan’s senior-citizen assistants able not only to keep company and help with many domestic tasks, but above all to pick up an elderly person who has fallen to the ground. All these robots bring with them positive and negative consequences: (1) they benefit mankind by filling essential roles, by minimizing exposure to harmful chemicals or dangerous machines, and by improving quality of life for many. But also (2) they are replacing the human beings who performed these tasks before, increasing unemployment and causing misery to the families of those who have lost their jobs.

Generative AI

Before ChatGPT, the best-known and most successful generative AI (GAI) program to date, could be brought to market, many technologies had to be developed. First and foremost, GAI designers evolved their algorithms to analyze, then recognize, then parse the elements, patterns and structure of the data presented to them (text, photos, images or speech). GAI used this as its training material to develop its knowledge base. Speech recognition, image processing, facial recognition, etc. came out of these initial efforts. A turning point came when GAI programs learned to generate text, photos and images, based on the rules learned through processing very large quantities of such material, in response to a specific user’s prompt. A prompt is an input given to the GAI system through which the user communicates the goal for the GAI to achieve. Such an input to ChatGPT could be, for example, “write me three sentences on the beauty of a forest in Vermont in the fall.” To which, within seconds, the program would come up with the following superb paragraph:

“As autumn paints its vibrant palette across the landscape, Vermont’s forests come alive in a breathtaking symphony of colors. The towering maples, oaks, and birches form a tapestry of fiery reds, golden yellows, and rustic oranges, captivating the senses with their stunning hues. Walking through this enchanting woodland paradise, one can’t help but be overwhelmed by the serene beauty and tranquility that nature offers in its most resplendent season.”

No controversy in this prompt, no surprise in the generated text. As Americans say: “motherhood and apple pie.”
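
For the technically curious, here is a rough sketch of how such a prompt might be sent to a chat-completion service programmatically rather than through the web interface. The model name, the OPENAI_API_KEY environment variable and the error handling are illustrative assumptions on my part, not a description of how the example above was produced.

```python
import os
import requests

# Illustrative call to OpenAI's chat completions endpoint.
# Assumes an API key is available in the OPENAI_API_KEY environment variable;
# the model name is an assumption and may need updating.
prompt = "Write me three sentences on the beauty of a forest in Vermont in the fall."

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
    },
    timeout=30,
)
response.raise_for_status()
# Print the generated text returned by the service.
print(response.json()["choices"][0]["message"]["content"])
```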

When asked about global warming, ChatGPT comes out representing the generally accepted consensus: human activity, greenhouse gases, imminent danger for the planet, even though there are strong voices questioning the science and the truth behind these claims. When asked “who wrote the Torah,” ChatGPT, while briefly mentioning the traditional Jewish answer, profusely explains the competing Wellhausen hypothesis. When asked about Bard, a direct competitor, ChatGPT provides a completely false, denigrating answer (fake news, if you will), and only when prompted further does ChatGPT apologize and claim ignorance.

All these answers, the non-controversial one, the controversial ones and the fake news alike, are written in perfect, natural-sounding English and carry an aura of reliability and truth. They are all believable. And nowhere is there any indication that the texts were composed by a machine.

To close the chapter on Generative AI, it seems to me that everyone should know whether a text, a picture, etc. has been generated by a machine or by a human being. The simple way to do this is to insert an indelible, publicly known watermark into such material. That would go a very long way toward alleviating conflicts over plagiarism and ensuring that the real author of a work is known beyond any reasonable doubt.
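
As a purely illustrative toy (and emphatically not how any AI vendor actually watermarks its output), the sketch below hides a short tag in invisible zero-width characters and reads it back. A genuinely indelible watermark of the kind proposed above would have to be far more robust, for example by subtly biasing the model’s word choices during generation, which is an active area of research; the simple mark below disappears the moment the hidden characters are stripped.

```python
# Toy illustration of an invisible text watermark using zero-width characters.
# Real provider watermarks (where they exist) work very differently; this only
# sketches the general idea of marking machine-generated text.

ZW0, ZW1 = "\u200b", "\u200c"   # zero-width space / zero-width non-joiner

def embed_watermark(text: str, tag: str = "AI") -> str:
    """Append the tag as an invisible bit pattern at the end of the text."""
    bits = "".join(f"{ord(c):08b}" for c in tag)
    return text + "".join(ZW1 if b == "1" else ZW0 for b in bits)

def extract_watermark(text: str) -> str:
    """Recover the hidden tag, if any, from the zero-width characters."""
    bits = "".join("1" if c == ZW1 else "0" for c in text if c in (ZW0, ZW1))
    usable = len(bits) - len(bits) % 8
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, usable, 8))

marked = embed_watermark("As autumn paints its vibrant palette...")
print(extract_watermark(marked))   # -> "AI"
```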

Beyond text generation, GAI is rapidly making headway in code generation (writing computer programs; what then of the software engineers?), in still-image and even video generation, whether abstract, artistic or realistic pictures and movie clips, in music generation based on a textual description of the kind of music one is looking for, and more. Medical applications for GAI are countless. Some could deepen our understanding of the workings of molecules, proteins, etc. Some could help find cures for diseases.

So, what’s the controversy about?

Sam Altman and Demis Hassabis are absolute geniuses. Their colleagues at OpenAI and DeepMind respectively, as well as their competitors working at the cutting edge of AI and AGI, all fall into the same category. This is why we need to take very seriously their writings and warnings about Ethics, Dangers, Risks, etc. when it comes to AI. They know the field better than anyone else.

I was astonished, when studying the websites of both companies and the excellent publications by Forbes and others on the topic, to see that these pages contain an enormous amount of material on the risks associated with AI, and more specifically AGI. And not just any risk: existential risk, lethal risk, unpredictable risk.

To make this point even clearer, in May 2023 Hassabis, Altman and 300 other high-profile AI executives signed a very short doomsday statement about the potential extreme existential risks of AI. It read: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

To understand their big fear, let’s read the published literature and also the guidelines they impose on their own companies to make sure they act responsibly and make themselves look like the proverbial “good guys.”

Both Altman and Hassabis use the same term to characterize the AI machine behavior that could indicate “good behavior” or “danger.” That term is “Alignment,” its absence being “Misalignment.” The alignment in question is the similarity of goals between the designer’s intent and the AI machine’s understanding of the task it has been entrusted with. Not only that, but in order to achieve the task at hand, all along the implementation process, the AI machine must remain aligned with the guidelines (ethical, behavioral, legal, etc.) that it learned as part of its training.

How does such an intelligent machine become misaligned with the intent of its designers, and what happens then? There are at least three ways: (1) external, probably malicious interference (hacking, sabotage); (2) unclear specifications that the AI machine may misunderstand; and (3) wandering by the AI machine outside the scope of its training (also known as side effects, self-modification or distributional shift). The consequences can be disastrous, and the AI machine could potentially become dangerous to its handlers, its users and its environment. Note that even though AI machines are equipped with a Kill Switch, AI companies worry about the ability of a misaligned machine to override the Kill Switch and become virtually uninterruptible.

To try to avoid the worst possible consequences of an AI machine becoming misaligned, OpenAI, for example, has published a list of applications in which its models, tools and services may not be used. The list is three pages long and includes, among many others, illegal activities, generation of child sexual abuse material, weapons development and warfare, scams, political campaigning, financial advice, etc.

Going further, OpenAI, DeepMind and others have formed ethics review committees to define boundaries not to be crossed by AI machines. Finally, some of these companies openly call for international bodies to be formed (à la IAEA) to come up with generally accepted democratic principles that would ensure smooth adoption of their AI products throughout mankind.

This is where I personally have a big problem with AGI and other super-intelligent generative agents. The goals openly stated in the previous paragraph are unrealistic at best, irresponsible and dangerous at worst. Recent years have seen the spread of woke movements, cancel culture, the convergence of struggles, and many other so-called political trends with a double negative effect: (1) the brilliance, popular wisdom, beauty and narratives evolved over centuries of common history are being erased, and (2) specific “sides” of the political spectrum have arrogated to themselves the right to the good, politically correct discourse, while all others are demonized.

In this context, any attempt to generate, or even enforce, AI guidelines coming out of any international organization one can think of may only reinforce the movement towards globalization and cultural erasure, all of this with the unprecedented efficiency of AI machines and their ability to instantly generate and distribute biased materials throughout the world.

We have witnessed the use of AI to generate fake news. A new term has even been invented to describe fake news that is so well done (by AI, of course) that it is really hard to question its veracity: “Deep Fake News.”

AI is here to stay, and hopefully, AI will contribute to solving some of the most difficult problems facing mankind: illness, hunger, drought, etc. AI can be put to use to improve virtually all aspects of human life.

But AI can also be used to generate the most frightening weapon systems ever developed.

And AI can be used to annihilate the cultural differences that make our world so diverse, so rich and so interesting.

It seems everyone, from the US Congress to politicians the world over to the AI community, is aware of the benefits and worried about the dangers. Let’s hope they will steer this nascent effort towards the greater good of humanity.

About the Author
Jean Pierre Braun is a retired Silicon Valley CEO now living in Jerusalem. Born in Paris, Jean Pierre immigrated to the USA after completing his Electrical Engineering degree in France. Besides being a serial entrepreneur, Jean Pierre was also the founder of a unique, very successful Silicon Valley synagogue, and upon his return to France became Vice President of a local CRIF branch and President of the Rachi community in Grenoble. A father of 3 and grandfather of 10 ב'ה, Jean Pierre and his wife Annie made Aliyah in 2016.