
There is No Artificial Intelligence

It seems like most new technology today comes with a lot of hype. Just a few years ago, the press was full of predictions that we’d be awash in Internet of Things sensors that would transform the way we live. We’ve heard similar claims for technologies like virtual reality, blockchain, and self-driving cars. I’ve written a lot about the massive hype surrounding 5G – by my way of measuring things, there isn’t any 5G in the world yet, but the cellular carriers are loudly proclaiming it’s everywhere.

The other technology with hype that nearly equals 5G is artificial intelligence. I see articles every day talking about the ways that artificial intelligence is already changing our world, along with predictions about the big changes on the horizon due to AI. A majority of large corporations claim to now be using AI. Unfortunately, this is all hype, and there is no artificial intelligence today, just as there is not yet any 5G.

It’s easy to understand what real 5G will be like – it will include the many innovations embedded in the 5G specifications like network slicing and dynamic spectrum sharing. We’ll finally have 5G when a half dozen of the new 5G technologies are working on my phone. Defining artificial intelligence is harder because there is no specification for AI. Artificial intelligence will be here when a computer can solve problems in much the way that humans do. Our brains evaluate the data on hand to see if we know enough to solve a problem, and if not, we seek the additional data we need. Our brains can consider data from disparate and unrelated sources to solve problems. There is no computer today that is within a light-year of that ability – there are not yet any computers that can ask for the specific additional data needed to solve a problem. An AI computer doesn’t need to be self-aware – it just has to be able to ask questions and seek the right data needed to solve a given problem.

We use computer tools today that get labeled as artificial intelligence such as complex algorithms, machine learning, and deep learning. We’ve paired these techniques with faster and larger computers (such as in data centers) to quickly process vast amounts of data.

One of the techniques we think of as artificial intelligence is nothing more than using brute force to process large amounts of data. This is how IBM’s Deep Blue works. It can produce impressive results and shocked the world in 1997 when the computer was able to beat Garry Kasparov, the world chess champion. Since then, the IBM Watson system has beaten the best Jeopardy players and is being used to diagnose illnesses. These computers achieve their results by processing vast amounts of data quickly. A chess computer can consider huge numbers of possible moves and put a value on the ones with the best outcomes. The Jeopardy computer has massive databases of human knowledge available, like Wikipedia and Google search – it looks up the answer to a question faster than a human mind can pull it out of memory.
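
To make the brute-force idea concrete, here is a minimal sketch (nothing like Watson’s real pipeline) that answers a question by scoring every passage in a small, invented knowledge base on keyword overlap and returning the best match. The passages and scoring rule are made up purely for illustration.

```python
# Toy illustration of brute-force lookup (not IBM Watson's actual pipeline):
# score every passage in a small invented knowledge base by how many of the
# question's words it contains, then return the best-scoring passage.
# Crude word splitting is fine for this toy example.

def score(passage: str, question: str) -> int:
    """Count how many distinct question words appear in the passage."""
    return len(set(passage.lower().split()) & set(question.lower().split()))

def best_answer(question: str, knowledge_base: list[str]) -> str:
    """Brute force: check every passage and keep the one with the highest score."""
    return max(knowledge_base, key=lambda passage: score(passage, question))

knowledge_base = [
    "Garry Kasparov was the world chess champion beaten by Deep Blue in 1997.",
    "Watson is an IBM system that beat human champions at Jeopardy.",
    "Machine learning uses algorithms to find patterns in data.",
]

print(best_answer("Which computer beat human champions at Jeopardy?", knowledge_base))
```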

Much of what is thought of as AI today uses machine learning. Perhaps the easiest way to describe machine learning is with an example. Machine learning uses complex algorithms to analyze and rank data. Netflix uses machine learning to suggest shows that it thinks a given customer will like. Netflix knows what a viewer has already watched. Netflix also knows what millions of others who watch the same shows seem to like, and it looks at what those millions of others watched to make a recommendation. The algorithm is far from perfect because the data set of what any individual viewer has watched is small. I know in my case, I look at the shows recommended for my wife and see all sorts of shows that interest me but which I am never offered. This highlights one of the problems of machine learning – it can easily become biased and draw the wrong conclusions. Netflix’s suggestion algorithm can become a self-fulfilling prophecy unless a viewer makes the effort to look outside of the recommended shows – the more a viewer watches what is suggested, the more they are pigeonholed into a specific type of content.
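
Netflix’s actual recommender is proprietary, so the following is only a rough sketch of the general idea: recommend the shows that appear most often in the histories of viewers whose viewing overlaps with yours. The show names and histories are made up for illustration.

```python
# Rough sketch of history-based recommendation (not Netflix's actual algorithm):
# recommend the shows that appear most often in the viewing histories of people
# who watched the same things I did, weighted by how much our histories overlap.
from collections import Counter

def recommend(my_shows: set[str], all_histories: list[set[str]], top_n: int = 3) -> list[str]:
    scores = Counter()
    for history in all_histories:
        overlap = len(my_shows & history)    # how similar is this viewer to me?
        if overlap == 0:
            continue                         # ignore viewers with nothing in common
        for show in history - my_shows:      # only consider shows I haven't watched
            scores[show] += overlap          # weight suggestions by similarity
    return [show for show, _ in scores.most_common(top_n)]

# Made-up viewing histories for illustration
histories = [
    {"Dark", "Ozark", "Mindhunter"},
    {"Dark", "Stranger Things", "Mindhunter"},
    {"The Crown", "Bridgerton"},
]
print(recommend({"Dark", "Mindhunter"}, histories))   # ['Ozark', 'Stranger Things']
```

Note that even this tiny sketch exhibits the bias problem described above – it can only ever suggest something that similar viewers have already watched.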

Deep learning is a form of machine learning that can produce better results by passing data through multiple layers of algorithms. For example, there are numerous forms of English spoken around the world. A customer service bot can begin each conversation in standard English and then use layered algorithms to analyze the speaker’s dialect and shift its responses to more closely match that speaker.
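
The bot example is conceptual, but the ‘layered’ idea itself can be sketched in a few lines: data passes through a stack of transformations, each one refining the output of the layer before it. The weights below are random placeholders rather than anything learned, so this only illustrates the structure of a deep model, not a working dialect analyzer.

```python
# Minimal sketch of the layered structure behind deep learning: an input is
# passed through several stacked layers, each refining the previous output.
# The weights are random placeholders, not learned from data.
import numpy as np

rng = np.random.default_rng(seed=0)

def layer(inputs: np.ndarray, output_size: int) -> np.ndarray:
    """One layer: a linear transformation followed by a nonlinearity."""
    weights = rng.normal(size=(inputs.shape[0], output_size))
    return np.tanh(inputs @ weights)

features = rng.normal(size=8)    # e.g. 8 acoustic features from a speech sample
hidden1 = layer(features, 16)    # first layer picks out low-level patterns
hidden2 = layer(hidden1, 16)     # second layer combines them into higher-level ones
output = layer(hidden2, 4)       # final layer maps to a handful of output scores
print(output)
```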

I’m not implying that today’s techniques are not worthwhile. They are being used to create numerous automated applications that could not be built otherwise. However, almost every algorithm-based technique in use today will become instantly obsolete when a real AI is created.

I’ve read several experts who predict that we are only a few years away from an AI desert – meaning that we will have milked about all that can be had out of machine learning and deep learning. Developments with those techniques are not leading towards a breakthrough to real AI – machine learning is not part of the evolutionary path to AI. At least for today, both AI and 5G are largely non-existent, and the things passed off as these two technologies are pale versions of the real thing.


The Fourth Industrial Revolution

There is a lot of talk around the world among academics and futurists that we have now entered the beginnings of the fourth industrial revolution. The term industrial revolution describes a rapid change in the economy driven by technology.

The first industrial revolution came from steam power, which drove the creation of the first large factories producing textiles and other goods. The second industrial revolution is called the age of science and mass production and was powered by the simultaneous development of electricity and oil-powered combustion engines. The third industrial revolution was fairly recent and was the rise of digital technology and computers.

There are differing ideas of what the fourth industrial revolution means, but every prediction involves using big data and emerging technologies to transform manufacturing and the workplace. The fourth industrial revolution means mastering and integrating an array of new technologies including artificial intelligence, machine learning, robotics, IoT, nanotechnology, biotechnology, and quantum computing. Some technologists are already predicting that the shorthand description for this will be the age of robotics.

Each of these new technologies is in its infancy, but all are progressing rapidly. Take the most esoteric technology on the list – quantum computing. As recently as three or four years ago this was mostly an academic concept, yet we now have first-generation quantum computers. I can’t recall where I read it, but I remember a quote that said that if we think of the fourth industrial revolution in terms of a 1,000-day process, we are now only on day three.

The real power of the fourth industrial revolution will come from integrating the technologies. The technology that is the most advanced today is robotics, but robotics will change drastically when robots can process huge amounts of data quickly and can use AI and machine learning to learn and cope with the environment in real time. Robotics will be further enhanced in a factory or farm setting by integrating a wide array of sensors to provide feedback from the surrounding environment.

I’m writing about this because all of these technologies will require the real-time transfer of huge amounts of data. Futurists and academics who talk about the fourth industrial revolution seem to assume that the needed telecom technologies already exist – but they don’t exist today and need to be developed in conjunction with the other new technologies.

The first missing element needed to enable the other technologies is computer chips that can process huge amounts of data in real time. Current chip technology has a built-in choke point where data is queued and fed into and out of a chip for processing. Scientists are exploring a number of ways to move data faster. For example, light-based computing holds the promise of moving data at speeds up to 50 Gbps. But even that’s not fast enough, and there is research underway using lasers to beam data directly into the chip processor – a process that might increase processing speeds 1,000 times over current chips.

The next missing communications element is a broadband technology that can move data fast enough to keep up with the faster chips. While fiber can be blazingly fast, a fiber strand is far too large to use at the chip level, so data has to be converted at some point from fiber to some other transmission path.

The amount of data that will have to be passed in some future applications is immense. I’ve already seen academics bemoaning that millimeter-wave radios are not fast enough, so 5G will not provide the solution. Earlier this year the first worldwide meeting was held to officially start collaborating on 6G technology using terahertz spectrum. Transmissions at those super-high frequencies only stay coherent for a few feet, but these frequencies can carry huge amounts of data. It’s likely that 6G will play a big role in providing the bandwidth to the robots and other big-data needs of the fourth industrial revolution. From the standpoint of the telecom industry, we’re no longer talking about the last mile – we’re starting to address the last foot!


2017 Technology Trends

I usually take a look once a year at the technology trends that will be affecting the coming year. There have been so many other topics of interest lately that I didn’t quite get around to this by the end of last year. But here are the trends that I think will be the most noticeable and influential in 2017:

The Hackers are Winning. Possibly the biggest news all year will be continued security breaches that show that, for now, the hackers are winning. The traditional ways of securing data behind firewalls are clearly not effective, and firms from the biggest with the most sophisticated security to the simplest small businesses are getting hacked – and sometimes the simplest methods of hacking (such as phishing for passwords) are still effective.

These things run in cycles, and new solutions will be tried to stop hacking. The most interesting trend I see is to get away from storing data in huge databases (which is what hackers are looking for) and instead distribute that data in such a way that there is nothing worth stealing even after a hacker gets inside the firewall.

We Will Start Talking to Our Devices. This has already begun, but this is the year when a lot of us will make the change and start routinely talking to our computers and smart devices. My home has started to embrace this, and we have different devices using Apple’s Siri, Microsoft’s Cortana and Amazon’s Alexa. My daughter has made the full transition and now talks to text instead of typing on a screen, but we oldsters are catching up fast.

Machine Learning Breakthroughs will Accelerate. We saw some amazing breakthroughs with machine learning in 2016. A computer beat the world Go champion. Google Translate can now accurately translate between a number of languages. Just this last week a computer was taught to play poker and was playing at championship level within a day. It’s now clear that computers can master complex tasks.

The numerous breakthroughs this year will come as a result of having the AI platforms at Google, IBM and others available for anybody to use. Companies will harness this capability to use AI to tackle hundreds of new complex tasks this year and the average person will begin to encounter AI platforms in their daily life.

Software Instead of Hardware. We have clearly entered another age of software. For several decades hardware was king and companies were constantly updating computers, routers, switches and other electronics to get faster processing speeds and more capability. The big players in the tech industry were companies like Cisco that made the boxes.

But now companies are using generic hardware in the cloud and are looking for new solutions through better software rather than through sheer computing power.

Finally a Start of Telepresence. We’ve had a few unsuccessful shots at telepresence in the past. It started a long time ago with the AT&T video phone. We then tried dedicated video conference equipment, but it was generally too expensive and cumbersome to be widely used. For a while there was a shot at using Skype for teleconferencing, but the quality of the connections often left a lot to be desired.

I think this year we will see new commercial vendors offering a more affordable and easier-to-use cloud-based teleconferencing platform aimed at business users. I know I will be glad not to have to get on a plane for a short meeting somewhere.

IoT Technology Will Start Being in Everything. But for most of us, at least for now, it won’t change our lives much. I’m really having a hard time thinking I want a smart refrigerator, stove, washing machine, mattress, or blender. But those are all coming, like it or not.

There will be More Press on Hype than on Reality. Even though there will be amazing new things happening, we will still see more press on technologies that are not here yet than on those that are. So expect mountains of articles on 5G, self-driving cars and virtual reality. But you will see fewer articles on the real achievements, such as how a company reduced paperwork 50% by using AI or how the average business person saved a few trips due to telepresence.


Is 2017 the Year of AI?

Artificial intelligence is making enormous leaps and in 2016 produced several results that were unimaginable a few years ago. The year started with Google’s AlphaGo beating the world champion in Go. The year ended with an announcement by Google that its translation software using artificial intelligence had achieved the same level of competency as human translators.

This has all come about through applying the new techniques of machine learning. The computers are not yet intelligent in any sense of being able to pass the Turing test (a computer being able to simulate human conversation), but the new learning software builds up competency in specific fields of endeavor using trial and error, in much the same manner as people learn something new.

It is the persistent trials and errors that enable software like that used at Facebook to be getting eerily good at identifying people and places in photographs. The computer software can examine every photograph posted to Facebook or the open internet. The software then tries to guess what it is seeing, and its guess is then compared to what the photograph is really showing. Over time, the computer makes more and more refined guesses and the level of success climbs. It ‘learns’ and in a relatively short period of time can pick up a very specific competence.
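
Facebook’s real system uses deep neural networks, but the guess-compare-refine loop itself can be sketched with something far simpler. In the toy example below, the ‘model’ is a single brightness threshold that guesses whether a photo is ‘day’ or ‘night’, and every wrong guess nudges the threshold toward the correct answer. All of the numbers are invented for illustration.

```python
# Toy version of the guess-compare-refine loop (not Facebook's actual system):
# the "model" is a single brightness threshold used to guess whether a photo
# is "day" or "night", and each wrong guess nudges the threshold toward the
# correct answer.

# (brightness, correct label) pairs standing in for labeled photographs
examples = [(0.9, "day"), (0.8, "day"), (0.7, "day"),
            (0.3, "night"), (0.2, "night"), (0.1, "night")]

threshold = 0.95   # a deliberately bad starting guess
step = 0.01

for _ in range(100):                    # repeated passes over the labeled examples
    for brightness, label in examples:
        guess = "day" if brightness > threshold else "night"
        if guess == label:
            continue                    # correct guesses need no adjustment
        # wrong guess: move the threshold toward the correct decision
        threshold += step if label == "night" else -step

# settles just below 0.7, where every example is classified correctly
print(f"learned threshold: {threshold:.2f}")
```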

2017 might be the year when we finally start seeing real changes in the world due to this machine learning. Up until now, each of the amazing things that AI has been able to do (such as beating the Go champion) was due to an effort by a team aimed at a specific goal. But the main purpose of these various feats was to see just how far AI could be pushed in terms of competency.

But this might be the year when AI computing power goes commercial. Google’s Brain Team has developed a cloud product that is going to make Google’s AI software available to others. Companies of all sorts are going to be able, for the first time, to apply AI techniques to what they do for a living.

And it’s hard to even imagine what this is going to mean. You can look at the example of Google Translate to see what is possible. That service has been around for a decade and was more of an amusement than a real tool. It was great for translating individual words or short phrases but could not handle the complicated nuances of whole sentences. But within a short time after applying the Google Brain Team software to the existing product, it leaped forward in translation competence. The software can now accurately translate sentences between eight languages and is working to extend that to over one hundred languages. Language experts have already predicted that this is likely to put a lot of human translators out of business. But it will also make it easier to converse and do business across different languages. We are on the cusp of having a universal human translator through the application of machine learning.

Now companies in many industries will unleash AI on their processes. If AI can figure out how to play Go at a championship level then it can learn a whole lot of other things that could be of great commercial importance. Perhaps it can be used to figure out the fastest way to create vaccines for new viruses. There are firms on Wall Street that have the goal of using AI to completely replace human analysts. It could be used to streamline manufacturing processes to make it cheaper to make almost anything.

The scientists and engineers working on Google Translate said that AI improved their product far more within a few months than they had been able to do in over a decade. Picture that same kind of improvement popping up in every industry, and within just a few years we could be looking at a different world. A lot of companies have already figured out that they need to deploy AI techniques or fall behind competitors that use them. We will be seeing a gold rush in AI, and I can’t wait to see what this means for our daily lives.


AI, Machine Learning and Deep Learning

It’s getting hard to read a tech article anymore that doesn’t mention artificial intelligence, machine learning or deep learning. It’s also obvious to me that many casual writers of technology articles don’t understand the differences and frequently interchange the terms. So today I’ll take a shot at explaining the three terms.

Artificial intelligence (AI) is the overall field of working to create machines that carry out tasks in a way that humans think of as smart. The field has been around for a long time; twenty years ago I had an office on a floor shared with one of the early companies working on AI.

AI has been in the press a lot in the last decade. For example, IBM used its Deep Blue supercomputer to beat the world’s chess champion. It really didn’t do this with anything we would classify as intelligence. It instead used the speed of a supercomputer to look forward a dozen moves and was able to rank options by looking for moves that produced the lowest number of possible ‘bad’ outcomes. But the program was not all that different from chess software that ran on PCs – it was just a lot faster and used the brute force of computing power to simulate intelligence.
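
Deep Blue’s search and evaluation were vastly more sophisticated, but the core idea of brute-force look-ahead can be sketched on a toy game. The example below plays Nim (players alternately take one to three objects from a pile, and whoever takes the last object wins) by trying every legal move and scoring it by the outcome it can force.

```python
# Toy version of brute-force look-ahead (far simpler than Deep Blue's search):
# in Nim, players alternately take 1-3 objects from a pile and whoever takes
# the last object wins. The program tries every legal move and scores it by
# the best outcome it can force against perfect play.

def score(pile: int, my_turn: bool) -> int:
    """+1 if the original player can force a win from here, otherwise -1."""
    if pile == 0:
        # whoever just moved took the last object and won
        return -1 if my_turn else 1
    outcomes = [score(pile - take, not my_turn) for take in (1, 2, 3) if take <= pile]
    # on my turn pick my best outcome; on the opponent's turn assume their best
    return max(outcomes) if my_turn else min(outcomes)

def best_move(pile: int) -> int:
    """Brute force: evaluate every legal move and keep the highest-scoring one."""
    moves = [take for take in (1, 2, 3) if take <= pile]
    return max(moves, key=lambda take: score(pile - take, my_turn=False))

print(best_move(10))   # taking 2 leaves a pile of 8, a losing position for the opponent
```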

Machine learning is a subset of AI that gives computers the ability to learn without being programmed for a specific task. The Deep Blue computer used a complex algorithm that told it exactly how to rank chess moves. But with machine learning the goal is to write code that allows computers to interpret data and learn from their errors to improve whatever task they are doing.

Machine learning is enabled by the use of neural network software. This is a set of algorithms that are loosely modeled after the human brain and that are designed to recognize patterns. Recognizing patterns is one of the most important ways that people interact with the world. We learn early in life what a ‘table’ is, and over time we can recognize a whole lot of different objects that also can be called tables, and we can do this quickly.

What makes machine learning so useful is that feedback can be used to inform the computer when it makes a mistake, and the pattern recognition software can incorporate that feedback into future tasks. It is this feedback capability that lets computers learn complex tasks quickly and to constantly improve performance.
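
As a minimal illustration of that feedback loop, the sketch below trains a single artificial neuron (a perceptron, the simplest ancestor of the neural networks described above) to recognize the OR pattern: it guesses, compares the guess to the right answer, and adjusts its weights whenever it is wrong.

```python
# A single artificial neuron (perceptron) learning the OR pattern from feedback:
# predict, compare to the correct answer, and adjust the weights when wrong.

# Inputs and labels for the OR pattern: output 1 if either input is 1.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

for _ in range(20):                      # repeated passes over the examples
    for (x1, x2), target in examples:
        prediction = 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0
        error = target - prediction      # feedback: how wrong was the guess?
        weights[0] += learning_rate * error * x1
        weights[1] += learning_rate * error * x2
        bias += learning_rate * error

print(weights, bias)   # the learned weights now reproduce the OR pattern
```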

One of the earliest examples of machine learning I can recall is the music classification system used by Pandora. With Pandora you can create a radio station to play music that is similar to a given artist, but even more interestingly you can create a radio station that plays music similar to a given song. The Pandora algorithm, which they call the Music Genome Project, ‘listens’ to music and identifies patterns in terms of roughly 450 musical attributes like melody, harmony, rhythm, and composition. It can then quickly find songs that have the most similar genome.
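
Pandora’s genome is hand-scored across those hundreds of attributes, but the matching step can be sketched with a handful of made-up attributes: represent each song as a vector of scores and rank other songs by how closely their vectors align (cosine similarity).

```python
# Sketch of attribute-based similarity in the spirit of the Music Genome Project
# (Pandora's real system uses roughly 450 hand-scored attributes): each song is
# a vector of scores, and the closest vectors are the best musical matches.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical attribute scores: (tempo, distortion, vocal intensity, acousticness)
songs = {
    "Song A": (0.9, 0.8, 0.7, 0.1),
    "Song B": (0.8, 0.9, 0.6, 0.2),
    "Song C": (0.2, 0.1, 0.3, 0.9),
}

seed = songs["Song A"]
ranked = sorted(
    (title for title in songs if title != "Song A"),
    key=lambda title: cosine_similarity(seed, songs[title]),
    reverse=True,
)
print(ranked)   # Song B ranks above Song C as the closer musical match
```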

Deep learning is the newest field of artificial intelligence and is best described as the cutting-edge subset of machine learning. Deep learning applies big data techniques to machine learning to enable software to analyze huge databases. Deep learning can help make sense out of immense amounts of data. For example, Google might use deep learning to interpret and classify all of the pictures its search engine finds on the web. This enables Google to show you a huge number of pictures of tables or any other object upon request.

Pattern recognition doesn’t have to be just visual. It can include video, written words, speech, or raw data of any kind. I just read about a good example of deep learning last week. A computer was provided with a huge library of videos of people talking, along with the soundtracks, and was asked to learn what people were saying just from how they moved their lips. The computer would make its best guess and then compare its guess to the soundtrack. With this feedback the computer quickly mastered lip reading and now outperforms experienced human lip readers. The computer that can do this is still not ‘smart’, but it can become incredibly proficient at certain tasks, and people interpret this as intelligence.

Most of the promises from AI are now coming from deep learning. It’s the basis for self-driving cars that learn to get better all of the time. It’s the basis of the computer I read about a few months ago that is developing new medicines on its own. It’s the underlying basis for the big cloud-based personal assistants like Apple’s Siri and Amazon’s Alexa. It’s going to be the underlying technology for computer programs that start tackling white collar work functions now done by people.
