The Start of the Information Age

A few weeks ago I wrote a blog post about the key events in the history of telecom. Today I am going to take a look at one of those events: how today's information age sprang out of a paper published in 1948 titled "A Mathematical Theory of Communication" by Claude Shannon (1916–2001). At the time of publication he was a 32-year-old researcher at Bell Laboratories.

But even prior to that paper he had made a name for himself at MIT. His master's thesis there, "A Symbolic Analysis of Relay and Switching Circuits," pointed out that the logical values of true and false could easily be represented as a one and a zero, and that this would allow physical relays to perform logical calculations. Many have called it the most important master's thesis of the 1900s.

The thesis was a profound breakthrough at the time, completed a decade before the development of electronic computer components. It showed how a machine could be made to perform logical operations rather than being limited to mathematical calculations. This made Shannon the first to realize that a machine could be made to mimic the actions of human thought, and some call the thesis the genesis of artificial intelligence. It provided a push to develop computers, since it made clear that machines could do a lot more than merely calculate.
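To make the idea concrete, here is a tiny sketch (my illustration, not anything from the thesis itself) of what Shannon's insight buys you: once true/false is written as 1/0, a handful of logic operations, the same operations a bank of relays can perform, is enough to do arithmetic. This half adder adds two one-bit numbers using nothing but XOR and AND:

```python
# A half adder built from pure Boolean logic -- the kind of circuit
# Shannon showed could be wired up from physical relays. Treating
# True/False as 1/0 turns logic gates into arithmetic.

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Add two one-bit numbers; return (sum_bit, carry_bit)."""
    sum_bit = a ^ b   # XOR: 1 when exactly one input is 1
    carry = a & b     # AND: 1 only when both inputs are 1
    return sum_bit, carry

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> carry {c}, sum {s}")
```

Chain enough of these together and you can add numbers of any size, which is exactly how real hardware does it.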

Shannon joined Bell Labs as WWII was looming and went to work immediately on military projects like cryptography and fire-control systems for antiaircraft guns. But in his spare time Shannon worked on what he referred to as a fundamental theory of communication. He saw that it was possible to quantify information through the use of binary digits.

The 1948 paper was one of those rare breakthroughs in science that are genuinely new rather than a refinement of earlier work. Shannon saw information in a way that nobody else had ever thought of it, and he showed that information could be quantified in a very precise way. His paper was also the first published use of the word 'bit', a contraction of 'binary digit' that he credited to his colleague John Tukey, to describe a discrete piece of information.
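As a rough illustration of what "quantifying information" means (my example, not one taken from the paper): Shannon measures the information of a source that emits symbols with probabilities p_i as H = -Σ p_i log2(p_i) bits per symbol. A fair coin flip carries exactly one bit; a predictable, biased coin carries less:

```python
# A sketch of Shannon's entropy formula, H = -sum(p * log2(p)),
# measured in bits per symbol. The more predictable a source is,
# the less information each symbol carries.
import math

def entropy_bits(probs):
    """Information content of a source, in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy_bits([0.5, 0.5]))   # fair coin        -> 1.0 bit
print(entropy_bits([0.9, 0.1]))   # biased coin      -> ~0.47 bits
print(entropy_bits([0.25] * 4))   # 4 equal symbols  -> 2.0 bits
```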

For those who might be interested, the paper is easy to find online. I read it many years ago and still find it well worth reading. It is unique and so clearly written that it is still used today to teach at MIT.

What Shannon had done was show how we could measure and quantify the world around us. He made it clear how measurable data could be captured precisely and then transmitted without losing that precision. Since this was developed at Bell Labs, one of the first applications of the concept was to telephone signals. In the lab they were able to convert a voice signal into a digital code of 1s and 0s, transmit it, and decode it somewhere else. The results were just as predicted: the voice signal that came out at the receiving end was as good as what went in at the transmitting end. Until this time voice signals had been analog, which meant that any interference on the line between callers would degrade the quality of the call.
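Here is a toy version of that digitization step (my sketch, not the actual Bell Labs encoder): sample a sine wave, quantize each sample to one of 256 levels (8 bits), and decode it again. The key point is that once the signal is bits, copying or retransmitting it adds no further degradation; the only error is the small, fixed quantization error introduced up front:

```python
# Toy digitization of an "analog" signal: sample a sine wave, quantize
# each sample to 8 bits, then decode. Once the signal is 1s and 0s it
# can be relayed any number of times without further loss.
import math

LEVELS = 256  # 8-bit quantization

def encode(samples):
    """Map samples in [-1, 1] to integer codes 0..255."""
    return [min(LEVELS - 1, int((s + 1) / 2 * LEVELS)) for s in samples]

def decode(codes):
    """Map integer codes back to approximate sample values."""
    return [(c + 0.5) / LEVELS * 2 - 1 for c in codes]

signal = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(16)]
codes = encode(signal)                  # the 1s and 0s on the wire
restored = decode(codes)
print(max(abs(a - b) for a, b in zip(signal, restored)))  # at most ~1/256
```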

But of course, voice is not the only thing that can be encoded as digital signals, and as a society we have converted just about everything imaginable into 1s and 0s. Over time we applied digital coding to music, pictures, film and text, and today everything on the Internet has been digitized.

The world reacted quickly to Shannon's paper and accolades poured in. Within two years everybody in science was talking about information theory and applying it to their own fields of research. Shannon was not comfortable with the fame that came from the paper and slowly withdrew from public life. He left Bell Labs and returned to teach at MIT, but he gradually withdrew from there as well and had stopped teaching by the mid-1960s.

We owe a huge debt to Claude Shannon. His original thinking gave rise to the components that let computers 'think', which gave a push to the nascent computer industry and was the genesis of the field of artificial intelligence. And he also developed information theory, which is the basis for everything digital that we do today. His work was unique and probably has more real-world applications than anything else developed in the 20th century.

Will We Be Seeing Real Artificial Intelligence?

I have always been a science fiction fan, so I found the controversy surrounding the new movie Transcendence interesting. It's a typical Hollywood melodrama in which Johnny Depp plays a scientist who is investigating artificial intelligence. After he is shot by anti-science terrorists, his wife decides to upload his dying brain into their mainframe. As man and machine merge they reach the moment that AI scientists call the singularity: when a machine becomes self-aware. And with typical Hollywood gusto, this first artificial intelligence goes on to threaten the world.

The release of this movie got scientists talking about AI. Stephen Hawking and other physicists wrote an article for The Independent after seeing it. They caution that while developing AI would be the largest achievement of mankind, it could also be our last. The fear is that a truly aware computer would not be human and would pursue its own agenda over time. An AI could become far smarter than mankind while containing no human ethics or morality.

This has been a recurrent theme in science fiction, from Robby the Robot up through HAL in 2001: A Space Odyssey, Blade Runner and The Terminator. But when Hawking issues a warning about AI, one has to ask whether this is moving out of the realm of science fiction and into science reality.

Certainly we have some very rudimentary forms of AI today. We have Apple's Siri and Microsoft's Cortana, which help us find a restaurant or schedule a phone call. We have IBM's Deep Blue, which beat the best chess player in the world, and IBM's Watson, which won at Jeopardy! and is now assisting with medical diagnoses. And these are just the beginning: numerous scientists are working on the next breakthroughs in machine intelligence that will help mankind. For example, a lot of the research into understanding big data couples huge computational power with ways to make sense of what the data tells us. But not all AI research leads to good things, and it's disconcerting to see the military looking into autonomous missiles and bombs that can seek out their own targets.

One scientist I have always admired is Douglas Hofstadter, the author of Gödel, Escher, Bach: An Eternal Golden Braid, which won the Pulitzer Prize in 1980. It's a book I love and one that people call the bible of artificial intelligence. It's a blend of exercises in computing, cognitive science, neuroscience and psychology, and it inspired a lot of scientists to enter the AI world. Hofstadter says that Siri and Deep Blue are just parlor games that overpower problems with sheer computational power. He doesn't think those kinds of endeavors will lead to AI, and that we won't get there until we learn more about how we think and what it means to be aware.

With that said, most leading scientists in the field are predicting the singularity anywhere from 20 to 40 years from now, and just about everybody is sure it will happen by the end of this century. Hawking is right that this would be the biggest event in human history to date: we will have created another intelligence. Nobody knows what that means, but it's easy to see how a machine intelligence could be dangerous to mankind. Such an intelligence could think circles around us and could compete with us for our resources. It would likely put most of us out of work, since it would do most of the thinking for us.

And it will probably arise without warning. There are numerous paths being taken in AI research, and one of them will probably hit pay dirt. Do we really want a smart Siri, one that is smarter than we are? My answer to that question is: only if we can control it. However, there is a good chance that we won't be able to control such a genie or ever put it back into its bottle. Add this to the list of things to worry about, I guess.