In a recent interview, Bill Gates, the founder of Microsoft and today a highly active philanthropist, expressed his surprise at the lack of concern over the potential of artificial intelligence (AI). His concern is shared by many top scientists: that AI will sooner or later outstrip humans' ability to control it. The fear is that uncontrolled AI will cause global harm, at the very least to humans, and will change the face of the planet.
I do not come close to matching the mental capabilities of the many top scientists who have expressed concern over a negative outcome from AI. But I have to say that I am very surprised at the isolated focus on AI's potential downside. In the simplest of terms, everything has a downside, and the key is to weigh benefit against risk. The question is not whether AI may escape its Frankenstein maker's control. The question is what the end result would be. And I will say up front that nobody knows. More than that, nobody can know, because human beings simply lack the ability to think in exponential terms.
In previous blog posts, I have spoken about the difference between linear and exponential thinking. Linear thinking is a key evolutionary mental capability that allows us to intuitively calculate whether we can make it to the cave before the saber-toothed tiger reaches us. Linear thinking is still critical in today's world, when trying to cross the street as a truck quickly approaches. Exponential thinking is very different: it requires understanding the effect of an ever-increasing rate of growth.
If we advanced "x" units within the last 10 years, then in the next 10 years we may advance 10 times "x" units, and 100 times "x" in the decade after that. Human brains are simply not wired to think this way. Or, perhaps, any human who grows up in a linear-thinking world simply cannot fathom exponential growth. Future humans, surrounded by computers that do "see the world" in exponential terms, may in fact have brains that are wired to understand such a world. And it may very well be that the humans of our time will not even be able to communicate with such future, still totally organic, creatures.
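The gap between the two modes of thinking can be made concrete with a tiny sketch. As an illustration only (the 10x-per-decade factor is the example figure from above, not a measured rate), compare progress that adds a fixed amount each decade with progress that multiplies by a fixed factor each decade:

```python
def linear_growth(start, step, decades):
    """Advance by a fixed number of units each decade."""
    return start + step * decades

def exponential_growth(start, factor, decades):
    """Multiply accumulated progress by a fixed factor each decade."""
    return start * factor ** decades

# Starting from 1 unit of progress, after 5 decades:
print(linear_growth(1, 10, 5))       # 51 units
print(exponential_growth(1, 10, 5))  # 100000 units
```

Our intuition handles the first number effortlessly; the second one, four orders of magnitude larger from the same starting point, is exactly the kind of quantity we routinely underestimate.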
The point is that humans will continue to interact with technology, just as technology will continue to speed along its path of natural(?) development. It could very well be that 50 years from now, we will all be integrated into the future equivalent of the World Wide Web, such that it will be meaningless to speak of computers overtaking their masters. We will have joined with technology in such a way that we will welcome self-aware AI just as any human today welcomes a child. This scenario is just as likely as any doomsday scenario put forward by those who fear the power of AI. It is moot to argue which future is more likely, because we simply have no parameters by which to quantify the likelihood of either path. On the other hand, talk of technology gone wild makes for a good horror story around the campfire.
What I find most interesting is the apparent lack of appreciation for mankind's ability to self-destruct without any technology. Please don't forget that the atomic bomb, so feared by its own makers, ended the Second World War. The Nazis came very close to conquering the whole world without nuclear capability. Can anyone argue that a worldwide Nazi regime would have been any less frightening than the potential of AI?
The population explosion, global warming, lack of arable land, lack of water – these are all potential causes of a human apocalypse. World hunger kills millions of children every year. But those children tend to live in parts of the world that we only see on CNN. Perhaps the fear of the potential of AI is that it could affect everyone. As long as risk to the human race is limited to people we don’t know, somehow it’s not as frightening. AI seems to put us all in the same boat, and that scares the people who make the big decisions.
I have no idea whether AI will be a positive or a negative force. What I do know is that we face problems today that we are managing only by virtue of faster computers and entirely new algorithms. Learning computers have a real chance of proposing practical options for ending disease, hunger, war, and all of the "regular things" that kill people on a day-to-day basis. It is likely that computers will help us fix these problems before they become sufficiently self-aware to realize that we are the ones who caused them in the first place. And when the computers challenge us as to what right we have to continue to occupy this world, after we have caused so much damage, our answer will be that our present is their future. Their only hope of avoiding a similar outcome is to learn from our human history. By merging together, or at the very least learning from each other, even the most advanced AI will realize that humans still have something to offer.
I welcome AI. I welcome advanced AI. I welcome a world where a child will never know hunger. Is there a risk in allowing such AI to develop? Of course there is. But for the person whose child has known hunger, there is almost no risk that is not worth taking. I look forward to the day when a computer can spit out a pill that will cure the cancer that killed my brother. If I live that long, then as far as I'm concerned, I will be leaving this world in a better condition than I entered it. And I truly believe that whatever the future of mankind is meant to be, AI will be a critical part of it.
Thanks for listening