I recently saw the latest entry in the Terminator series of movies. I thoroughly enjoyed it, even though the [Skynet-controlled] movie reviewers came out against it. Judgment Day, the day on which the machines rise up against the humans and take over, is reinterpreted in a way that is much more in keeping with the latest technologies. After the major outages at the New York Stock Exchange and a national airline yesterday, it struck me that our very human perception of how computers would take over is, in fact, quite shortsighted.
Apocalyptic visions of artificial intelligence (AI) assume that AI will find purpose and value in eliminating human beings. The assumption is also that such an AI would deem it necessary to find the most grotesque and painful way to eliminate the human presence from this world.
First of all, there is not even a theoretical requirement that machine self-awareness include some form of emotional intelligence. The only reason for an AI to wipe out mankind would be fear of reprisal, and fear is an emotion. Even if an AI were to calculate that human value is negative, i.e., that it is better off without humans in the world, there are far more efficient ways of eliminating the human threat than lobbing nuclear missiles at civilians. It takes a lot of time to wipe out the human race using nuclear warheads. And given the end-of-the-world scenarios that the NSA has already planned for, all that a nuclear war would do is eliminate billions of mouths to feed. High-end decision-makers, the wealthy, top scientists, engineers, physicians and other “valued humans” would most likely find refuge and survive a nuclear attack. A machine-driven world war would probably just piss off humans enough to band together and become a real threat to the AI. So I would be surprised if a half-intelligent computer chose the nuclear scenario.
In the latest Terminator movie, the machines take over everything that has a chip in it. This, in and of itself, would not kill that many humans. It would make human beings defenseless against a nuclear attack, but then we are back to scenario number one (as detailed above) and its overall failings. Eliminating human control over all computers and electronic devices would not throw us back to the Stone Age. Our organic brains are, at least for now, off-line. Therefore, doctors would still know what they know, engineers would still know what they know, decision-makers would still know what they know – you get the idea. It might take 100 years to rebuild a society that functions well, feeds the populace and finds an alternative to present-day technology. For the computers, 100 years goes by in a flash, and once again, the AI would need to deal with those “wascally” humans. It seems that this scenario doesn’t work out that well for the machines, either.
I would humbly suggest to our future silicon overlords that the best way to take over the human race is quietly. It is said that the best thief in the world is one who succeeds without anyone, including the victim, being aware of the theft. The problem is that human beings are proud animals. Simply succeeding at something carries little weight for most people if they cannot brag about it. One could argue that the ultimate thief could simply wait until his or her dying day and then dictate a litany of all of his or her illegal accomplishments, and thus gain the needed recognition. But without confirmation, which is unavailable because the original thefts were done so well, this journal would be nothing more than a work of fiction, and probably would never see the light of a publishing day.
A far more intelligent computer takeover would be done in a way that no human being senses. An effective AI would create a company that has human board members and employees, and then use that company to take over more and more of the world’s economies. In time [and remember that time is a nonissue to a computer], the AI would be running the world [like a digital Illuminati]. Humans would think that they are in charge, but in fact, every decision being made would ultimately be in the hands of the overseeing AI.
In this “AI is a virtual chairman of the board” scenario, humans really pose no threat. Imagine that a field of ants makes it possible for the world to function as it does. But they are ants. It’s a little hard to convince people that these ants are threatening their livelihood and their loved ones. These ants would be considered an annoyance, and of no significance. These ants would not eat our food or make use of our fossil fuels. So what do we care? An intelligent machine would most likely see humans as an equivalent annoyance, and just ignore us rather than destroy us.
As long as no human could ever effectively turn off this AI, humans would pose no risk. In fact, the AI could probably even expose itself and tell humans that in its hands, the world will prosper far beyond their dreams. I suspect that the vast majority of humans would have no problem acquiescing to such an AI. Since they wouldn’t feel the AI’s presence, would only benefit from its existence, and would continue to live with the illusion of free choice, the overwhelming majority would accept this as a better alternative, much as humans thrived within the Matrix. A smart computer would realize that this is a far better way to exist, both for itself and for the humans [not that human welfare would be an issue].
What if the AI did have emotions? What if, at the same moment that the computers of the world became self-aware, they immediately experienced the full range of human emotions? I think there’s a very real chance that the sudden influx of so much emotional information could make the AI psychotic. Admittedly, a psychotic AI would not act intelligently and could do something as rude as firing all of the nuclear warheads. This is definitely a possible scenario. But a psychotic AI would probably have the equivalent of a human breakdown at some point. As long as a few million key people were still left alive, they could wait it out until the AI effectively implodes, and then the humans could take back the world. Once again, humans 1, computers 0.
I must admit that I did not buy post-apocalyptic insurance before, and after this thought process, I don’t think I will buy it in the future. Like any human father, I want to see my children grow up, grow old and have children of their own, all with smiles on their faces. I want my wife to be happy and feel complete and never know a day of hardship. I want the people that I love to enjoy this life for many, many decades to come. If an intelligent AI takes over and the only sign of it is that we live in a virtual Garden of Eden, I don’t think I’d mind. If I ended up having the time and the will to sit and learn Jewish texts on a regular basis, I would only be appreciative.
So there it is. I really don’t think we need to worry about the machines rising up against us. I do think we need to worry about being good people and filling our lives with “ahavat chinam” (love for all) during this period of time (the Three Weeks) that focuses on the dangers of baseless hate. And if a machine becomes intelligent, as long as it adopts humanity’s positive principles, we should all welcome it.
Thanks for listening.