Of late, more and more experts in the fields of technology and artificial intelligence (AI) have been voicing concerns about the potential risks of smart machines that are too smart for our own good. In the following article, the author discusses an open letter, being signed by AI experts from around the world, that lays down a set of ethical rules for this area of research.
This letter was issued by the “Future of Life Institute”. Signing it indicates that the signatory pledges to “safely and carefully” manage AI development so that it does not exceed humankind’s ability to control it. The concept of such a letter is not novel. It is effectively the same kind of letter that was signed by many of the physicists, mathematicians and engineers who helped develop the first atomic bomb. While recognizing the necessity of the bomb for ending World War II, and for preventing the possibility of the Nazis developing it first, the bomb’s creators felt very strongly that its potential was too dangerous for it to remain in use.
I cannot imagine the sense of responsibility that the atomic bomb’s creators felt. While they correctly believed that their work was necessary, they still wondered whether the universe or G-d or history would judge them negatively. Do the ends always justify the means? It is a question that tortured some of these amazing scientists for the rest of their lives.
I respect the wishes of all of the scientists and technologists who signed this pledge regarding AI. It never hurts to remind people that there is such a thing as ethics and morals, and that even science must take these into consideration. The Nazis experimented on human beings, arguing that the benefits of this research helped countless people. While it is easy to brush off such a mindset as monstrous and totally amoral, the reality is that the Allied countries of the Second World War went on to make use of Nazi scientists to advance their own military and technological goals. After the war, the new enemy was the Russians. And once again, compromises were made in ethics and morality in order to keep America safe. I am not judging these decisions. On the contrary, I consider myself very lucky that I never had to make such a decision. But the decisions were made, and many people have benefited from them, even if they don’t realize it.
As much as I admire the intent of this AI pledge, I think it is little more than wishful thinking. If such pledges worked, all we would need to do is have society sign a pledge that no one will steal or kill or cheat. From that point on, crime would diminish dramatically and we would effectively be living in a paradise. But human beings, unfortunately or perhaps fortunately, do not follow the rules. Anyone who signs such a pledge is not the problem. The signatories have clearly been thinking about this issue and have likely already made changes in their work to diminish the risks of AI. The problem is the people who didn’t sign the pledge, or the people who signed it with no intention of keeping it.
Whether for money, fame, power or to protect loved ones, there will always be people who will do whatever it takes to achieve whatever they feel is necessary. At the moment, there is a huge cyber war going on between the countries of the world. World leaders know, and lose sleep over the fact, that a cyber attack could potentially shut down an entire country, including all of its defenses. Every day, the most highly skilled and most secretive people are working on new technologies to fight the risk of cyber attacks. At some point, the programming behind these “spyware on steroids” systems will effectively become artificially intelligent. Although the focus will be on identifying threats and computer viruses, the only effective way to do so will be to design artificially intelligent systems that think like humans, but much more quickly. This will be the only way to deal with the ever-updating, ever-mutating cyber threats against us.
Whether we like it or not, there is a need for AI. And AI is, almost by definition, intelligent enough to continue to grow on its own. You cannot tell such a system not to learn something specific. Its digital DNA is designed to make it smarter and smarter. It is my personal belief that no human effort will succeed in holding back the ultimate endpoint of what AI will become.
The author of the article I noted above speaks of the many fantasy and science fiction movies that depict AI gone wild. It should therefore be of interest that in the third Terminator movie, Skynet succeeds in taking over all of technology after being used to fight off a worldwide computer virus. It seems that the writers of this movie had already predicted that AI is inevitable. I personally believe that it would be far more effective to pledge to recognize this fact, but to find ways to instill a digital system of morality so that advanced AI systems would choose (correctly?) not to annihilate humankind.
Whenever a child is born, there is hope. But there is also the realization that this child could become the next Stalin or Hitler. Intelligence is part of our physiology and develops almost regardless of what humans do to the child. But morality is taught. It is not self-evident. I believe that the pledge should be to teach AI to be moral. If the scientists and technologists can succeed in doing this, then the potential benefits more than outweigh the risks, just as they do with the birth of any child. But to hope that AI can be slowed in its development is, in my humble opinion, ineffectual.
I guess that when the most powerful AI system ever developed asks us, “Do you want to play a game?”, our answer should be, “Life is not a game.”
Thanks for listening