AI, Machine Learning and Deep Learning

It’s getting hard to read a tech article anymore that doesn’t mention artificial intelligence, machine learning or deep learning. It’s also obvious to me that many casual writers of technology articles don’t understand the differences and frequently use the terms interchangeably. So today I’ll take a shot at explaining all three.

Artificial intelligence (AI) is the overall field of working to create machines that carry out tasks in a way that humans think of as smart. The field has been around for a long time; twenty years ago I had an office on a floor shared with one of the early companies working on AI.

AI has been in the press a lot in the last decade. For example, IBM used its Deep Blue supercomputer to beat the world chess champion. It really didn’t do this with anything we would classify as intelligence. Instead it used the speed of a supercomputer to look a dozen moves ahead and ranked its options by finding the moves that produced the fewest possible ‘bad’ outcomes. The program was not all that different from chess software that ran on PCs – it was just a lot faster and used the brute force of computing power to simulate intelligence.
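To make that concrete, here is a rough Python sketch of that brute-force look-ahead idea (a bare-bones minimax search). The toy game tree and position scores are made up purely for illustration – Deep Blue’s real search and evaluation were vastly more elaborate.

```python
# A minimal sketch of brute-force game-tree search (minimax): look several
# moves ahead and pick the move whose worst-case outcome is best.
# The toy positions and scores below are purely illustrative.

def minimax(node, depth, maximizing, tree, scores):
    """Return the best score reachable from `node` within `depth` moves."""
    children = tree.get(node, [])
    if depth == 0 or not children:      # search horizon or end of the game
        return scores.get(node, 0)
    values = [minimax(c, depth - 1, not maximizing, tree, scores) for c in children]
    return max(values) if maximizing else min(values)

# Hypothetical game tree: each position lists the positions reachable from it.
tree = {"start": ["move_a", "move_b"], "move_a": ["a1", "a2"], "move_b": ["b1", "b2"]}
scores = {"a1": 3, "a2": -1, "b1": 0, "b2": 5}   # how good each final position is

best = max(tree["start"], key=lambda m: minimax(m, 2, False, tree, scores))
print(best)   # "move_b": its worst outcome (0) beats move_a's worst outcome (-1)
```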

Machine learning is a subset of AI that gives computers the ability to learn without being explicitly programmed for a specific task. The Deep Blue computer used a complex algorithm that told it exactly how to rank chess moves. With machine learning, by contrast, the goal is to write code that allows computers to interpret data and learn from their errors to improve at whatever task they are doing.
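Here is about the simplest illustration of that difference I can think of, sketched in Python. The program below is never told the rule y = 2x; it is only shown example answers and nudges its one parameter whenever it gets an answer wrong. The data and learning rate are made up for illustration.

```python
# A minimal sketch of "learning from errors" instead of explicit programming:
# the rule y = 2x is never written into the code; it is discovered from examples.

data = [(1, 2), (2, 4), (3, 6), (4, 8)]   # (input, correct answer) pairs
w = 0.0                                   # the single parameter being learned

for _ in range(200):                      # repeat the lesson many times
    for x, y in data:
        error = w * x - y                 # how wrong the current guess is
        w -= 0.01 * error * x             # nudge the parameter to shrink the error

print(round(w, 3))                        # close to 2.0, learned from feedback alone
```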

Machine learning is enabled by the use of neural network software. This is a set of algorithms that are loosely modeled after the human brain and that are designed to recognize patterns. Recognizing patterns is one of the most important ways that people interact with the world. We learn early in life what a ‘table’ is, and over time we can recognize a whole lot of different objects that also can be called tables, and we can do this quickly.

What makes machine learning so useful is that feedback can be used to tell the computer when it makes a mistake, and the pattern recognition software can incorporate that feedback into future tasks. It is this feedback capability that lets computers learn complex tasks quickly and constantly improve their performance.
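Here is a sketch of that feedback loop in Python using a single artificial neuron, the building block of the neural networks described above. The task (recognizing the logical OR pattern) and the learning rate are illustrative choices, not any real system.

```python
import numpy as np

# One artificial neuron learning a pattern purely from being told when it is wrong.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([0, 1, 1, 1], dtype=float)                      # correct answers (logical OR)

w = np.zeros(2)   # connection weights
b = 0.0           # bias

for _ in range(2000):
    pred = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # the neuron's current guesses
    error = pred - y                            # feedback: where it was wrong
    w -= 0.5 * X.T @ error                      # adjust the connections to reduce the error
    b -= 0.5 * error.sum()

print(np.round(pred))   # [0. 1. 1. 1.] -- it has learned the pattern from feedback
```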

One of the earliest examples of machine learning I can recall is the music classification system used by Pandora. With Pandora you can create a radio station that plays music similar to a given artist, but even more interestingly you can create a radio station that plays music similar to a given song. The Pandora algorithm, which they call the Music Genome Project, ‘listens’ to music and identifies patterns in it across 450 musical attributes such as melody, harmony, rhythm and composition. It can then quickly find the songs with the most similar genome.
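A rough sketch of the ‘most similar genome’ idea: represent each song as a vector of attribute scores and rank other songs by how close their vectors are. The attribute names, scores and the cosine-similarity measure below are my own illustrative assumptions – Pandora’s actual data and matching method are proprietary.

```python
import math

# Each song is a hypothetical "genome": a list of attribute scores.
songs = {
    "Song A": [0.9, 0.2, 0.7],   # e.g. [melodic emphasis, syncopation, acoustic feel]
    "Song B": [0.8, 0.3, 0.6],
    "Song C": [0.1, 0.9, 0.2],
}

def cosine(u, v):
    """Similarity of two attribute vectors (1.0 means identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

seed = "Song A"
ranked = sorted((s for s in songs if s != seed),
                key=lambda s: cosine(songs[seed], songs[s]), reverse=True)
print(ranked)   # ['Song B', 'Song C'] -- Song B's genome is closest to Song A's
```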

Deep learning is the newest branch of artificial intelligence and is best described as the cutting-edge subset of machine learning. Deep learning applies big data techniques to machine learning to enable software to analyze huge databases, and it can help make sense of immense amounts of data. For example, Google might use deep learning to interpret and classify all of the pictures its search engine finds on the web. That is what enables Google to show you a huge number of pictures of tables, or of any other object, upon request.
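For a sense of what that looks like in code, here is a bare-bones image classifier sketched with PyTorch, a popular deep learning library. The layer sizes, the five hypothetical labels and the random stand-in images are all assumptions for illustration – this is not how Google’s actual systems are built.

```python
import torch
from torch import nn

# A tiny convolutional network: layers of pattern detectors stacked "deep".
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # detect low-level visual patterns
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # combine them into larger patterns
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 5),                             # score 5 hypothetical labels, e.g. "table"
)

images = torch.randn(8, 3, 64, 64)      # a batch of stand-in 64x64 color images
labels = torch.randint(0, 5, (8,))      # stand-in correct labels for those images

loss = nn.CrossEntropyLoss()(model(images), labels)  # how wrong the network's guesses are
loss.backward()                                      # feedback flows back through every layer
print(loss.item())
```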

Pattern recognition doesn’t have to be visual, either. It can be applied to video, written words, speech, or raw data of any kind. I read about a good example of deep learning just last week. A computer was given a huge library of videos of people talking, along with the soundtracks, and was asked to learn what people were saying just from how they moved their lips. The computer would make its best guess and then compare that guess to the soundtrack. With this feedback the computer quickly mastered lip reading and is now outperforming experienced human lip readers. A computer that can do this is still not ‘smart’, but it can become incredibly proficient at certain tasks, and people interpret this as intelligence.

Most of the promise of AI is now coming from deep learning. It’s the basis for self-driving cars that keep getting better all of the time. It’s the basis of the computer I read about a few months ago that is developing new medicines on its own. It’s the underlying technology for the big cloud-based personal assistants like Apple’s Siri and Amazon’s Alexa. And it’s going to be the underlying technology for computer programs that start tackling white-collar work now done by people.

Will We Be Seeing Real Artificial Intelligence?

I have always been a science fiction fan, and I found the controversy surrounding the new movie Transcendence to be interesting. It’s a typical Hollywood melodrama in which Johnny Depp plays a scientist investigating artificial intelligence. After he is shot by anti-science terrorists, his wife decides to upload his dying brain into their mainframe. As man and machine merge they reach the moment that AI scientists call the singularity – when a machine becomes aware. And with typical Hollywood gusto this first artificial intelligence goes on to threaten the world.

The release of the movie got scientists talking about AI. Stephen Hawking and other physicists wrote an article for The Independent after seeing it. They caution that while developing AI would be the largest achievement of mankind, it could also be our last. The fear is that a truly aware computer will not be human and will pursue its own agenda over time. An AI could be far smarter than mankind and yet contain no human ethics or morality.

This has been a recurrent theme in science fiction, from Robby the Robot through HAL in 2001: A Space Odyssey, Blade Runner and The Terminator. But when Hawking issues a warning about AI, one has to ask whether this is moving out of the realm of science fiction and into science reality.

Certainly we have some very rudimentary forms of AI today. We have Apple’s Siri and Microsoft’s Cortana, which help us find a restaurant or schedule a phone call. We have IBM’s Deep Blue, which beat the best chess player in the world, and its Watson system, which won at Jeopardy and is now making medical diagnoses. And these are just the beginning: numerous scientists are working on the next breakthroughs in machine intelligence that will help mankind. For example, a lot of the research into how to understand big data is based upon huge computational power coupled with some way to make sense out of what the data tells us. But not all AI research leads to good things, and it’s disconcerting to see that the military is looking into building self-aware missiles and bombs that can seek out their targets.

One scientist I have always admired is Douglas Hofstadter, the author of Gödel, Escher, Bach: An Eternal Golden Braid, which won the Pulitzer Prize in 1980. It’s a book I love and one that people call the bible of artificial intelligence. It’s a combination of exercises in computing, cognitive science, neuroscience and psychology, and it inspired a lot of scientists to enter the AI world. Hofstadter says that Siri and Deep Blue are just parlor games that overpower problems with sheer computational power. He doesn’t think these kinds of endeavors are going to lead to AI, and he believes we won’t get there until we learn more about how we think and what it means to be aware.

With that said, most leading scientists in the field are predicting the singularity anywhere from 20 to 40 years from now, and just about everybody is sure it will happen by the end of this century. Hawking is right that this would be the biggest event in human history to date – we will have created another intelligence. Nobody knows what that means, but it’s easy to see how a machine intelligence could be dangerous to mankind. Such an intelligence could think circles around us and could compete with us for our resources. It would likely put most of us out of work since it would do most of the thinking for us.

And it will probably arise without warning. There are numerous paths being taken in AI research and one of them will probably hit pay dirt. Do we really want a smart Siri, one that is smarter than us? My answer to that question is, only if we can control it. However, there is a good chance that we won’t be able to control such a genie or ever put it back into its bottle. Add this to the things to worry about, I guess.