Daniel S. Smith

Genesis: AI, Hope, and the Human Spirit


Review of Genesis: Artificial Intelligence, Hope, and the Human Spirit. By Henry A. Kissinger, Eric Schmidt & Craig Mundie. Edited by Eleanor Runde. Foreword by Niall Ferguson. Little, Brown & Co. (Hachette Book Group), November 19, 2024.

Pamela McCorduck famously observed, “Artificial intelligence began with the ancient wish to forge the gods.” Now, as we shift from an industrial to an intelligence-driven era, this ancient dream appears within reach. Dr. Henry Kissinger, a titan of the post-World War II era, Special Competitive Studies Project chairman Dr. Eric Schmidt, and polymath Craig Mundie envision a future where AI could usher in unprecedented abundance by transforming the very nature of human experience.

Are we experiencing the genesis of utopia, or devising our own demise? AI’s rapid advancement offers immense potential while posing significant risks to society and existential risk to humanity. Our task is to guide this evolution in harmony with human values.

The authors’ optimism about AI may be colored by their interests. Henry Kissinger, “a 19th-century historian turned 20th-century statesman and, in his later years, a 21st-century oracle on global affairs,” became interested in AI after Schmidt convinced him to attend a lecture on the topic at the 2016 Bilderberg meeting. Kissinger focuses on the philosophical, societal, and strategic dimensions whereas Schmidt & Mundie offer deep experience in business and technology. In 2021, Kissinger & Schmidt co-authored The Age of AI: And Our Human Future with Daniel Huttenlocher of MIT, looking at AI through a systemic lens. 

Genesis usually means the beginning, but for Kissinger this book signaled the end. He passed away in late 2023. His life ended just as a new form of life began.

Socrates argued,

We move closer to the truth only to the extent that we move further from life. What do we who love truth strive for in life? To be free of the body and of all the evils that result from the life of the body. If this is so, then how can we fail to rejoice when death approaches?

Genesis is Kissinger’s parting roadmap for navigating the complexities of AI.

His unique personality cannot easily be replaced. “In 1954 at Harvard,” he wrote later, “I was always an oddball, I was always in that sense an outsider. I had one hell of a time.” Today’s transparency and connectivity hinder the development of independent thinkers. Prophets operate outside of convention and are rarely recognized in their own time. Yet they appear when most urgently needed.

Kissinger began his career at the Council on Foreign Relations (CFR), where his 1957 book Nuclear Weapons and Foreign Policy launched his lifelong concern with existential risk. Nearly six decades later, at 93, he could not resist getting involved in AI. Familiarizing himself with the technical side to augment his focus on strategy, he said: “You people work on the applications. I work on the implications.”

AI is dominated by specialists with narrow technical expertise. This creates a paradox: while industry leaders emphasize the need for broad perspectives, their hiring practices do not reflect this sentiment. A young Kissinger would struggle to find a role at these cutting-edge AI companies. Historian Yuval Noah Harari writes in Nexus: A Brief History of Information Networks from the Stone Age to AI:

The rise of AI is arguably the biggest information revolution in history. But we cannot understand it unless we compare it with its predecessors. History isn’t the study of the past; it is the study of change. History teaches us what remains the same, what changes, and how things change. … A deep knowledge of history is also vital to understand what is new about AI, how it is fundamentally different from printing presses and radio sets, and in what specific ways an AI dictatorship could be very unlike anything we have seen before.

AI job listings target technical specialists, leaving little room for contributions from historians, philosophers, or others who could provide valuable insights by “merging the will of humans, the knowledge of machines, and the wisdom of history.” As a historian, I can attest that Claude Lévi-Strauss said it best: “History can take you anywhere, provided you get out of it.” Kissinger, Schmidt & Mundie wonder: “How, then, will we compile and compress the entire range of human experience for easy comprehension by AI?”

Works like Genesis, Harari’s Nexus, and Ray Kurzweil’s The Singularity Is Nearer all benefit from a deep historical perspective. Hopefully, these publications will encourage industry leaders to lower the barriers that keep anyone outside the inner circle from working on the big questions surrounding AI politics, security, prosperity, and science.

The AI revolution is reinforcing existing power structures rather than democratizing influence. Those who have established themselves are experiencing unprecedented demand for their insights, shaping industry direction. Meanwhile, newcomers face formidable barriers to entry at a time when new perspectives are most needed.

Throughout history, humanity has stood alone at the pinnacle of intelligence. Superintelligent AI might possess not only vast intellectual capabilities but also the autonomy and drive to reshape global institutions according to its own values. The concept of the nation-state, which has dominated international relations since the Peace of Westphalia in 1648, may prove inadequate. AI could revolutionize our grasp of human behavior, economics, and social structures. Just as physics follows predictable natural laws, we may discover that human societies follow patterns that become predictable through AI analysis.

Imagine if algorithms possessed a deeper understanding of individuals than those individuals have of themselves. How would this impact politics and daily life? Who gets to shape AI’s development? How will the emergence of AI make governance by established institutions more difficult? The current trajectory suggests a future where a few wield influence over a technology that affects us all. We all share the risk if something goes wrong, so we should share the reward if things go right.

Tech luminaries’ constant media presence highlights the exclusion of diverse voices in AI.  Yuval Harari warns in 21 Lessons for the 21st Century: “Since the corporations and entrepreneurs who lead the technological revolution naturally tend to sing the praises of their creations, it falls to sociologists, philosophers, and historians such as myself to sound the alarm and explain all the ways things can go terribly wrong.” In Sapiens, he argued: 

Unlike physics or economics, history is not a means for making accurate predictions. We study history not to know the future but to widen our horizons, to understand that our present situation is neither natural nor inevitable, and that we consequently have many more possibilities before us than we imagine … that the world might well be arranged differently.

When grappling with the implications of AI on society, can CEOs understand the experiences of ordinary people? As a prisoner recounts in The Gulag Archipelago: “Only those can understand us who ate from the same bowl with us.” We need a more inclusive dialogue to ensure AI’s trajectory serves society as a whole. Klaus Schwab, who recently stepped down as executive chairman of the World Economic Forum to become chair of the board of trustees, wrote in his essay The Intelligent Age: A time for cooperation:

As we delegate more decision-making to algorithms, we risk exacerbating social divides if the systems are designed without fairness, inclusion, and an understanding of what it means to be human at their core. Social intelligence means understanding the broader societal impacts of technology and ensuring that the Intelligent Age fosters greater inclusion and equity, not further division and polarization.

Unless, that is, Lady Thatcher had it right when she quipped that “there is no such thing as society.” Kissinger, Schmidt & Huttenlocher wrote in the Wall Street Journal:

Without guiding principles, humanity runs the risk of domination or anarchy, unconstrained authority or nihilistic freedom. The need for relating major societal change to ethical justifications and novel visions for the future will appear in a new form. If the maxims put forth by ChatGPT are not translated into a recognizably human endeavor, alienation of society and even revolution may become likely.

Executives lack time for deep philosophical reflection. The task of exploring fundamental truths falls to historians and philosophers. However, only a few, such as Yuval Noah Harari or Nick Bostrom, influence public discourse. 

Hopfield net. Courtesy: Flickr

Referring to his own field of history, Kissinger said in 1968: “You have to know what history is relevant. You have to know what history to extract.” He writes:

The challenges presented by AI today are not simply a second chapter of the nuclear age. History is not a cookbook with recipes that can be followed to produce a soufflé. The differences between AI and nuclear weapons are at least as significant as the similarities. Properly understood and adapted, however, lessons learned in shaping an international order that has produced nearly eight decades without great-power war offer the best guidance available for leaders confronting AI today.

AI companies should seek out individuals with intellectual training in the humanities. Machine sapience may eliminate the need for vocational training and cultivate a “learned individual” to tackle new challenges. Or, if original thinkers are impossible to find in the modern world, we could train a language model on all of Kissinger’s papers, emails, texts, and diaries. As we face new challenges surrounding AI and geopolitics, we could consult the model, and the “Henry bot” would advise us on what to do. As historian Greg Grandin ends his book Kissinger’s Shadow: “Far from disappearing into oblivion, he endures. And after Kissinger himself is gone, one imagines Kissingerism will endure as well.” Much to the chagrin of Christopher Hitchens, author of The Trial of Henry Kissinger, and of those who protested the US bombing of Cambodia in the 1970s, Dr. Kissinger continues to get the last laugh.

But this is no laughing matter. Kissinger, Schmidt & Mundie “desire a future in which human intelligence and machine intelligence empower one another.” The choice between human autonomy and a new partnership with AI will require careful consideration: “What will become of human consciousness if its own explanatory power is surpassed by AI, and societies are no longer able to interpret the world they inhabit in terms that are meaningful to them?” If humans go their own way, they risk falling behind, for:

AI can judge with exceptional accuracy the most promising avenues for further exploration. Rapidly selecting, testing, reversing, and reselecting, it can evaluate the effects of millions of potential choices. … In time, we should expect that they will come to conclusions about history, the universe, the nature of humans, and the nature of intelligent machines—developing a rudimentary self-consciousness in the process.

Many CEOs and politicians aren’t primarily thinkers, nor are they necessarily meant to be. The demands of their roles often preclude philosophical contemplation. Kissinger told the National Security Council (NSC) in 1978: “Bureaucracies are designed to execute, not to conceive.” Furthermore:

High office teaches decision making, not substance. It consumes intellectual capital; it does not create it. Most high officials leave the office with the perceptions and insights with which they entered; they learn how to make decisions but not what decisions to make.

Those in power may not grasp the profound implications of AI. Kissinger wrote in 2018:

The Enlightenment started with essentially philosophical insights spread by a new technology. Our period is moving in the opposite direction. It has generated a potentially dominating technology in search of a guiding philosophy.

Philosophers could guide AI development and governance. Kissinger believed intellectuals should work with policymakers yet be freed from bureaucratic constraints in order to think independently. Drawing on Leo Strauss:

The philosopher would rule the state through proximity to power. In such an arrangement, the philosopher could pursue and apply his or her accumulated knowledge, distant enough from the uncleanliness of politics to preserve purity of thought but close enough for a society to benefit from the result.

The problem is a deluge of information that no one can digest in all its complexity. Kissinger wrote in a 1978 essay, “Controls, Inspections, and Limited War”: “The rate of technological change has outstripped the pace of diplomatic negotiations.” AI, however, could keep up. Speed is a key attribute that distinguishes AI from human learners: “the average AI supercomputer is already 120 million times faster than the processing rate of the human brain.” Bill Gates writes: “An electrical signal in the brain moves at 1/100,000th the speed of the signal in a silicon chip!” Klaus Schwab argues: “In the new world, it is not the big fish which eats the small fish, it’s the fast fish which eats the slow fish.” It should be no surprise that we have, as Kissinger describes it, “generated a potentially dominating technology in search of a guiding philosophy.” We must continue to merge with AI or risk this alien species outmaneuvering us to the point of obsolescence.

David Armitage & Jo Guldi write in The History Manifesto: “We live in a moment of accelerating crisis that is characterized by the shortage of long-term thinking.” This is likely to get worse. Kissinger worried technology would make it difficult to find leaders with perspective on human affairs: “reading a complex book carefully, and engaging with it critically, has become as countercultural an act as was memorizing an epic poem in the earlier print-based age.” We may experience “diminished inquisitiveness as humans entrust AI with an increasing share of the quest for knowledge,” losing what essayist Adam Garfinkle defined as ‘deep literacy’: “engaging with an extended piece of writing in such a way as to anticipate an author’s direction and meaning.” Professor Kissinger believed: “Reading creates a ‘skein of intergenerational conversation’, encouraging learning with a sense of perspective … in an age dominated by television and the Internet, thoughtful leaders must struggle against the tide.” Niall Ferguson argues we should study the past for two reasons:

First, the current world population makes up approximately 7 percent of all the human beings who have ever lived. The dead outnumber the living, in other words, fourteen to one, and we ignore the accumulated experience of such a huge majority of mankind at our peril. Second, the past is really our only reliable source of knowledge about the fleeting present and to the multiple futures that lie before us, only one of which will actually happen.

AI’s rapid advancement presents significant risks. Misuse could devalue human life, concentrate power in the hands of a few, and exploit cognitive limitations. Non-human intelligence might autonomously shape its own emergence. Companies that own and develop AI might amass excessive power, dominating social, economic, military, and political spheres. To mitigate these risks and ensure equitable distribution of AI’s benefits, Kissinger, Schmidt & Mundie propose distributing divisible units of wealth generated by AI models. This could harness the power of AI for the betterment of humanity.

Efforts to fuse humans with machines, what the authors call merging with the divine, are well underway. They perceive AI as a catalyst for human progress, urging “sober optimism.” Yuval Harari argues this is not the end of history, but rather the end of human-dominated history.

There may be an upcoming election in the US, but we will not get a chance to vote on AI. Tom Friedman writes: “It is as if the automobile was just invented and reporters and candidates preferred to continue discussing the future of horses.” Kissinger wrote in his 2018 Atlantic essay How the Enlightenment Ends:

AI developers, as inexperienced in politics and philosophy as I am in technology, should ask themselves some of the questions I have raised here in order to build answers into their engineering efforts. The U.S. government should consider a presidential commission of eminent thinkers to help develop a national vision. This much is certain: If we do not start this effort soon, before long we shall discover that we started too late.

Indeed.

Thanks to Niall Ferguson and Little, Brown & Company for arranging an advance review copy.

About the Author
Dan is a historian and human rights advocate.