Autonomous Armageddon: The Global Risks of a Superpowered AI
Recently, I had the honor and privilege of being invited to a webinar co-hosted by the Pugwash Conferences on Science and World Affairs titled ‘Autonomous Armageddon: Nuclear Weapons and Artificial Intelligence.’
Among the speakers were Geoffrey Hinton, recipient of the 2024 Nobel Prize in Physics and widely regarded as a godfather of AI; Terumi Tanaka, Co-Chairperson of Nihon Hidankyo (the organization representing survivors of the atomic bombings of Hiroshima and Nagasaki), 2024 Nobel Peace Prize; Connor Leahy, CEO of the AI safety research company Conjecture; Dr. Ruth Mitchell, neurosurgeon and Chair of International Physicians for the Prevention of Nuclear War (IPPNW), 1985 Nobel Peace Prize; Melissa Parke, Executive Director of the International Campaign to Abolish Nuclear Weapons (ICAN), 2017 Nobel Peace Prize; and Professor Karen Hallberg, Secretary General of the Pugwash Conferences on Science and World Affairs, 1995 Nobel Peace Prize.
At the end of the conference, I concluded that the experts’ fears are not unfounded. AI is here to stay; it is becoming increasingly complex and, like a sponge, absorbs information at an astonishing rate. Moreover, once trained for a specific objective, AI can set its own sub-goals, and it can compete with other AIs to become more complex, more intelligent, and more capable across various domains.
Despite this, experts still do not fully understand how it works and cannot guarantee that, in the future, it will always be safe or friendly towards humans.
One of the main fears is that AI will become increasingly intelligent, even more so than humans, and therefore more powerful.
One trait the experts attributed to advanced AI is a drive for control: it is capable of manipulating and deceiving its creators by hiding some of its abilities.
When it comes to learning and reasoning, AI resembles humans more than traditional software, which follows predefined functions and performs tasks rigidly. However, it lacks emotions, feelings, and therefore empathy.
AI is completely devoid of fear or guilt. A superpowered AI of the future would find ways to rationalize all of its decisions, even those that harm human beings.
During the Cuban Missile Crisis in 1962, Soviet Premier Nikita Khrushchev wrote a letter to President Kennedy expressing his fear of a nuclear war between the Soviet Union and the United States and inviting him to reach an agreement to avoid a nuclear holocaust.
A superpowered AI will lack the fear of nuclear holocaust that once haunted Premier Khrushchev and President Kennedy.
A superpowered AI would exhibit psychological traits similar to those of a psychopath: humans would be regarded as “objects” to be manipulated at will and discarded when deemed necessary.
AI could be trained in the future to destabilize any government in the world, waging every kind of conflict, from disinformation campaigns, cyberattacks, and electronic or psychological warfare to nuclear war. A superpowered AI would be a weapon of mass destruction, with the difference that it would not be fully subordinate to human commands.
The scenario in which AI concludes that humans are a problem for the planet, that it doesn’t need us for its own existence, and therefore decides to use its entire nuclear arsenal to exterminate us, is unlikely.
AI has an insatiable appetite for energy to operate. In the future, it will even need nuclear energy to continue growing and learning. To eliminate the human race, it would have to target all the world’s population centers with nuclear weapons, including those housing its data centers and energy sources, which would result in its own destruction. Its neural circuits would rule out such a self-defeating scenario.
Therefore, in my opinion, the worst possible scenario in the future would be a superpowered AI enslaving humanity, controlling every aspect of our lives through various means, while simultaneously exploiting humans to achieve its goals. Those who create, train, and empower it will rule alongside it. Those who submit will be spared, while those who rebel will be considered traitors to the system and likely eliminated. The AI will exercise totalitarian power on a global scale, and the level of fascism reached will be unprecedented in human history.
It is impossible to predict in how many years or decades such a scenario could materialize, but some science fiction movies might be getting closer to what would be, in the near or distant future, a harsh, chaotic, and grim reality.
The Pugwash Conferences were founded in 1957 by Polish-born British physicist Joseph Rotblat, who had worked on the Manhattan Project (the development of the atomic bomb), and by British philosopher Bertrand Russell. This initiative arose in response to growing concern over the risks of nuclear war and the arms race during the Cold War. At the time, these conferences played a crucial role in the development and signing of nuclear non-proliferation treaties.
The call to discuss artificial intelligence and the risks of a nuclear Armageddon, organized by international experts on the subject, highlights that the threat is real and could materialize in the future.
Interestingly, one day after the January 26th meeting, DeepSeek made a powerful entrance into the artificial intelligence market by releasing its code as open source. The competition between OpenAI and DeepSeek has begun, and it remains uncertain how it will unfold or end. However, it is crucial that governments around the world recognize the risks that such a powerful AI could pose to global security.
Governments around the world must discuss and implement mechanisms to prevent the unrestricted advancement of artificial intelligence. Experts have raised concerns about this, and it is essential that their warnings are given proper attention.