The release of this movie got scientists talking about AI. Stephen Hawking and other physicists wrote an article for The Independent after seeing it. They cautioned that while developing AI would be the greatest achievement in human history, it could also be our last. The fear is that a truly aware computer will not be human and will pursue its own agenda over time. An AI could be far smarter than mankind and yet contain no human ethics or morality.
This has been a recurrent theme in science fiction, from Robby the Robot up through HAL in 2001: A Space Odyssey, Blade Runner and The Terminator. But when Hawking issues a warning about AI, one has to ask whether this is moving out of the realm of science fiction and into science reality.
Certainly we have some very rudimentary forms of AI today. We have Apple’s Siri and Microsoft’s Cortana, which help us find a restaurant or schedule a phone call. We have IBM’s Deep Blue, which beat the best chess players in the world, and IBM’s Watson, which won at Jeopardy! and is now making medical diagnoses. These are just the beginning, and numerous scientists are working on the next breakthroughs in machine intelligence that will help mankind. For example, much of the research into understanding big data rests on huge computational power coupled with some way to make sense of what the data tells us. But not all AI research leads to good things, and it’s disconcerting to see the military looking into building self-aware missiles and bombs that can seek out their targets.
One scientist I have always admired is Douglas Hofstadter, the author of Gödel, Escher, Bach: An Eternal Golden Braid, which won the Pulitzer Prize in 1980. It’s a book I love and one that people call the bible of artificial intelligence. It combines exercises in computing, cognitive science, neuroscience and psychology, and it inspired a lot of scientists to enter the AI world. Hofstadter says that Siri and Deep Blue are just parlor games that overpower problems with sheer computational power. He doesn’t think these kinds of endeavors will lead to AI, and that we won’t get there until we learn more about how we think and what it means to be aware.
With that said, many leading scientists in the field predict the singularity anywhere from 20 to 40 years from now, and just about everybody is sure it will happen by the end of this century. Hawking is right: this will be the biggest event in human history to date — we will have created another intelligence. Nobody knows what that means, but it’s easy to see how a machine intelligence could be dangerous to mankind. Such an intelligence could think circles around us and could compete with us for our resources. It would likely put most of us out of work, since it would do most of the thinking for us.
And it will probably arise without warning. There are numerous paths being taken in AI research, and one of them will probably hit pay dirt. Do we really want a smart Siri, one that is smarter than us? My answer to that question is: only if we can control it. However, there is a good chance that we won’t be able to control such a genie or ever put it back into its bottle. Add this to the list of things to worry about, I guess.