Why Aren’t We Talking about Technology Disruption?

One of the most interesting aspects of modern society is how rapidly we adapt to new technology. Perhaps the best illustration of this is the smartphone. In the space of a decade, the smartphone went from a brand-new invention to a device owned by the large majority of the American public. Today the smartphone is so pervasive that recent statistics from Pew show that 96% of those between 18 and 29 have one.

Innovation is exploding in nearly every field of technology, and the public has gotten so used to change that we barely notice announcements that would have made worldwide headlines a few decades ago. I remember as a kid when Life Magazine had an issue largely dedicated to nylon and polymers and had the world talking about something that wouldn’t even get noticed today. People seem to accept miracle materials, gene splicing, and self-driving cars as normal technical advances. People now give DNA test kits as Christmas presents. Nobody blinks an eye when big data is used to profile and track us all. We accept cloud computing as just another computer technology. In our little broadband corner of the technology world, the general public has learned that fiber and gigabit speeds are the desired broadband technology.

What I find perhaps most interesting is that we don’t talk much about upcoming technologies that will completely change the world. A few technologies, such as 5G and self-driving cars, get talked to death. But technologists now understand that 5G is, in itself, not a disruptive technology – although it might unleash other disruptive technologies such as ubiquitous sensors throughout our environment. The idea of self-driving cars no longer seems disruptive since I can already achieve the same outcome by calling an Uber. The advent of self-driving semi trucks will be far more disruptive and will lower the cost of the nationwide supply chain when we use fleets of self-driving electric trucks.

I’ve always been intrigued by those who peer into the future, and I read everything I can find about upcoming technologies. From what I read, there are a few truly disruptive technologies on the horizon. Consider the following innovations that aren’t too far in the future:

Talking to Computers. This will be the most important breakthrough in history in terms of the interface between humans and technology. In a few short generations, we’ve gone from typing on keyboards, to using a mouse, to using cellphones – but the end game will be talking directly to our computers using natural conversational language. We’ve already seen significant progress with natural language processing and are on a path to be able to converse with computers in the same way we communicate with other people. That will trigger a huge transition in society. Computers will fade into the background since we’ll have the full power of the cloud anywhere we’re connected. Today we get a tiny inkling by seeing how people use Apple’s Siri or Amazon’s Alexa – but these are rudimentary voice recognition systems. It’s nearly impossible to predict how mankind will react to having the full power of the web with us all of the time.

Space Elevator. In 2012 the Japanese announced a nationwide goal of building a space elevator by 2050. That goal has now been pulled forward to 2045. A space elevator will be transformational since it will free mankind from the confines of planet Earth. With a space elevator we can cheaply and safely move people and materials to and from space. We can haul up the raw materials needed to build huge space factories that can then take advantage of the mineral riches in the asteroid belt. From there we can colonize the Moon and Mars, build huge space cities, and build spaceships to explore nearby stars. The cost of the space elevator is still estimated at only around $90 billion, the same as the cost of the high-speed rail system between Osaka and Tokyo.

Alternate Energy. We are in the process of weaning mankind from fossil fuel energy sources. While there is a long way to go, several countries in Europe have the goal of being off carbon fuels within the coming decade. The EU already gets 30% of its electricity from alternate energy sources. The big breakthrough might finally come from fusion power. This is something that has been 30 years away my whole adult life, but scientists at MIT and elsewhere have developed the magnets needed to contain the plasma necessary for a fusion reaction, and some scientists are now predicting fusion power is only 15 years away. Fusion power would supply unlimited non-polluting energy, which would transform the whole world, particularly the third world.

An argument can be made that there are other equally disruptive technologies on the horizon like artificial intelligence, robotics, gene-editing, virtual reality, battery storage, and big data processing. Nothing on the list would be as significant as a self-aware computer – but many scientists still think that’s likely to be far into the future. What we can be sure of is that breakthroughs in technology and science will continue to come at us rapidly from all directions. I wonder if the general public will even notice the most important breakthroughs or if change has gotten so ho-hum that it’s just an expected part of life.

The Fourth Industrial Revolution

There is a lot of talk around the world among academics and futurists that we have now entered into the beginnings of the fourth industrial revolution. The term industrial revolution is defined as a rapid change in the economy due to technology.

The first industrial revolution came from steam power that drove the creation of the first large factories to create textiles and other goods. The second industrial revolution is called the age of science and mass production and was powered by the simultaneous development of electricity and oil-powered combustion engines. The third industrial revolution was fairly recent and was the rise of digital technology and computers.

There are differing ideas of what the fourth industrial revolution means, but every prediction involves using big data and emerging technologies to transform manufacturing and the workplace. The fourth industrial revolution means mastering and integrating an array of new technologies including artificial intelligence, machine learning, robotics, IoT, nanotechnology, biotechnology, and quantum computing. Some technologists are already predicting that the shorthand description for this will be the age of robotics.

Each of these new technologies is in its infancy, but all are progressing rapidly. Take the most esoteric technology on the list – quantum computing. As recently as three or four years ago this was mostly an academic concept, and we now have first-generation quantum computers. I can’t recall where I read it, but I remember a quote that said that if we think of the fourth industrial revolution in terms of a 1,000-day process, we are now only on day three.

The real power of the fourth industrial revolution will come from integrating the technologies. The technology that is the most advanced today is robotics, but robotics will change drastically when robots can process huge amounts of data quickly and can use AI and machine learning to learn and cope with the environment in real time. Robotics will be further enhanced in a factory or farm setting by integrating a wide array of sensors to provide feedback from the surrounding environment.

I’m writing about this because all of these technologies will require the real-time transfer of huge amounts of data. Futurists and academics who talk about the fourth industrial revolution seem to assume that the needed telecom technologies already exist – but they don’t exist today and need to be developed in conjunction with the other new technologies.

The first missing element needed to enable the other technologies is a chip that can process huge amounts of data in real time. Current chip technology has a built-in choke point where data is queued and fed into and out of a chip for processing. Scientists are exploring a number of ways to move data faster. For example, light-based computing has the promise to move data at speeds up to 50 Gbps. But even that’s not fast enough, and there is research being done using lasers to beam data directly into the chip processor – a process that might increase processing speeds 1,000 times over current chips.

The next missing communications element is a broadband technology that can move data fast enough to keep up with the faster chips. While fiber can be blazingly fast, a fiber is far too large to use at the chip level, and so data has to be converted at some point from fiber to some other transmission path.

The amount of data that will have to be passed in some future applications is immense. I’ve already seen academics bemoaning that millimeter-wave radios are not fast enough, so 5G will not provide the solution. Earlier this year the first worldwide meeting was held to officially start collaborating on 6G technology using terahertz spectrum. Transmissions at those super-high frequencies only stay coherent for a few feet, but these frequencies can carry huge amounts of data. It’s likely that 6G will play a big role in providing the bandwidth to the robots and other big data needs of the fourth industrial revolution. From the standpoint of the telecom industry, we’re no longer talking about the last mile – we are starting to address the last foot!

The Continued Growth of Data Traffic

Every one of my clients continues to see explosive growth of data traffic on their broadband networks. For several years I’ve been citing a Cisco statistic that household use of data has doubled every three years since 1980. In Cisco’s last Visual Networking Index, published in 2017, the company predicted a slight slowdown in data growth, to doubling about every 3.5 years.

I searched the web for other predictions of data growth and found a report published by Seagate, also in 2017, titled Data Age 2025: The Evolution of Data to Life-Critical. This report was authored for Seagate by the consulting firm IDC.

The IDC report predicts that annual worldwide web data will grow from the 16 zettabytes used in 2016 to 163 zettabytes in 2025 – a tenfold increase in nine years. A zettabyte is a mind-numbingly large number that equals a trillion gigabytes. That increase implies a compound annual growth rate of roughly 29.5%, which more than doubles web traffic every three years.
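As a quick back-of-the-envelope check (my own arithmetic, not figures taken from the report), the quoted growth numbers hold up:

```python
import math

# IDC figures as cited above: 16 ZB in 2016 growing to 163 ZB in 2025.
start_zb, end_zb, years = 16.0, 163.0, 9

# Compound annual growth rate over the nine-year span.
cagr = (end_zb / start_zb) ** (1 / years) - 1

# Time for traffic to double at that rate.
doubling_years = math.log(2) / math.log(1 + cagr)

print(f"CAGR: {cagr:.1%}")                           # ~29.4%, close to the quoted 29.5%
print(f"doubling time: {doubling_years:.1f} years")  # ~2.7 years, i.e. faster than every 3
```

The doubling time of roughly 2.7 years is why that growth rate "more than doubles" traffic every three years.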

The most recent burst of overall data growth has come from the migration of video online. IDC expects online video to keep growing rapidly, but also foresees a number of other web uses that are going to increase data traffic by 2025. These include:

  • The continued evolution of data from business background to “life-critical”. IDC predicts that as much as 20% of all future data will become life-critical, meaning it will directly impact our daily lives, with nearly half of that data being hypercritical. As an example, they note that while a computer crash today might cause us to lose a spreadsheet, data used to communicate with a self-driving car must be delivered accurately. They believe that the software needed to ensure such accuracy will vastly increase the volume of traffic on the web.
  • The proliferation of embedded systems and the IoT. Today most IoT devices generate tiny amounts of data. The big growth in IoT data will not come directly from the IoT devices and sensors in the world, but from the background systems that interpret this data and make it instantly usable.
  • The increasing use of mobile and real-time data. Again, using the self-driving car as an example, IDC predicts that more than 25% of data will be required in real-time, and the systems necessary to deliver real-time data will explode usage on networks.
  • Data usage from cognitive computing and artificial intelligence systems. IDC predicts that data generated by cognitive systems – machine learning, natural language processing and artificial intelligence – will generate more than 5 zettabytes by 2025.
  • Security systems. As we have more critical data being transmitted, the security systems needed to protect the data will generate big volumes of additional web traffic.
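Applying those percentages to IDC’s 2025 projection (my own arithmetic, using only the shares quoted above) gives a sense of the scale involved:

```python
# IDC's 2025 projection and the shares quoted in the bullets above.
total_2025_zb = 163                        # total annual data, in zettabytes
life_critical_zb = 0.20 * total_2025_zb    # "as much as 20%" becomes life-critical
hypercritical_zb = 0.5 * life_critical_zb  # "nearly half of that" is hypercritical

print(f"life-critical: ~{life_critical_zb:.0f} ZB")
print(f"hypercritical: ~{hypercritical_zb:.0f} ZB")
```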

Interestingly, this predicted growth all comes from machine-to-machine communications that result from moving more daily functions onto the web. Computers will be working in the background exchanging and interpreting data to support activities such as traveling in a self-driving car or chatting with somebody in another country using a real-time interpreter. We are already seeing the beginning stages of numerous technologies that will require big real-time data.

Data growth of this magnitude is going to require our data networks to grow in capacity. I don’t know of any client network that is ready to handle a ten-fold increase in data traffic, and carriers will have to beef up backbone networks significantly over time. I have often seen clients invest in new backbone electronics that they hoped to be good for a decade, only to find the upgraded networks swamped within only a few years. It’s hard for network engineers and CEOs to fully grasp the impact of continued rapid data growth on our networks and it’s more common than not to underestimate future traffic growth.

This kind of data growth will also increase the pressure for faster end-user data speeds and more robust last-mile networks. If a rural 10 Mbps DSL line feels slow today, imagine how slow that will feel when urban connections are far faster than today. If the trends IDC foresees hold true, by 2025 there will be many homes needing and using gigabit connections. It’s common, even in the industry, to scoff at the usefulness of residential gigabit connections, but when our data needs keep doubling it’s inevitable that we will need gigabit speeds and beyond.

Is 2017 the Year of AI?

Artificial Intelligence is making enormous leaps and in 2016 produced several results that were unimaginable a few years ago. The year started with Google’s AlphaGo beating the world champion in Go. The year ended with an announcement by Google that its translation software using artificial intelligence had achieved the same level of competency as human translators.

This has all come about through applying the new techniques of machine learning. The computers are not yet intelligent in any sense of being able to pass the Turing test (a computer being able to simulate human conversation), but the new learning software builds up competency in specific fields of endeavor using trial and error, in much the same manner as people learn something new.

It is this persistent trial and error that enables software like that used at Facebook to become eerily good at identifying people and places in photographs. The computer software can examine every photograph posted to Facebook or the open internet. The software then tries to guess what it is seeing, and its guess is then compared to what the photograph is really showing. Over time, the computer makes more and more refined guesses and the level of success climbs. It ‘learns’ and in a relatively short period of time can pick up a very specific competence.

2017 might be the year where we finally start seeing real changes in the world due to this machine learning. Up until now, each of the amazing things that AI has been able to do (such as beat the Go champion) was due to an effort by a team aimed at a specific goal. But the main purpose of these various feats was to see just how far AI could be pushed in terms of competency.

But this might be the year when AI computing power goes commercial. Google has developed a cloud product they are calling the Google Brain Team that is going to make Google’s AI software available to others. Companies of all sorts are going to be able, for the first time, to apply AI techniques to what they do for a living.

And it’s hard to even imagine what this is going to mean. You can look at the example of Google Translate to see what is possible. That service has been around for a decade and was more of an amusement than a real tool. It was great for translating individual words or short phrases but could not handle the complicated nuances of whole sentences. But within a short time after applying the Google Brain Team software to the existing product, it leaped forward in translation competence. The software can now accurately translate sentences between eight languages and is working to extend that to over one hundred languages. Language experts have already predicted that this is likely to put a lot of human translators out of business. But it will also make it easier to converse and do business between those using different languages. We are on the cusp of having a universal human translator through the application of machine learning.

Now companies in many industries will unleash AI on their processes. If AI can figure out how to play Go at a championship level then it can learn a whole lot of other things that could be of great commercial importance. Perhaps it can be used to figure out the fastest way to create vaccines for new viruses. There are firms on Wall Street that have the goal of using AI to completely replace human analysts. It could be used to streamline manufacturing processes to make it cheaper to make almost anything.

The scientists and engineers working on Google Translate said that AI improved their product far more within a few months than what they had been able to do in over a decade. Picture that same kind of improvement popping up in every industry, and within just a few years we could be looking at a different world. A lot of companies have already figured out that they need to deploy AI techniques or fall behind competitors that use them. We will be seeing a gold rush in AI and I can’t wait to see what this means in our daily lives.

AI, Machine Learning and Deep Learning

It’s getting hard to read a tech article these days that doesn’t mention artificial intelligence, machine learning or deep learning. It’s also obvious to me that many casual writers of technology articles don’t understand the differences, and they frequently interchange the terms. So today I’ll take a shot at explaining the three terms.

Artificial intelligence (AI) is the overall field of working to create machines that carry out tasks in a way that humans think of as smart. The field has been around for a long time and twenty years ago I had an office on a floor shared by one of the early companies that was looking at AI.

AI has been in the press a lot in the last decade. For example, IBM used its Deep Blue supercomputer to beat the world’s chess champion. It really didn’t do this with anything we would classify as intelligence. It instead used the speed of a supercomputer to look forward a dozen moves and was able to rank options by looking for moves that produced the lowest number of possible ‘bad’ outcomes. But the program was not all that different from chess software that ran on PCs – it was just a lot faster and used the brute force of computing power to simulate intelligence.
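The brute-force lookahead described above can be sketched as a tiny minimax search. This toy game tree and its scores are invented purely for illustration – Deep Blue’s real evaluation was vastly more elaborate:

```python
# A toy illustration of brute-force game-tree search: look ahead to the
# leaves and pick the move that minimizes the opponent's best reply.
# Intelligence is simulated purely by exhaustive search, as described above.

def minimax(node, maximizing):
    # Leaves are numeric position scores; internal nodes are lists of children.
    if isinstance(node, (int, float)):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny hand-made game tree, two plies deep: three available moves,
# each leading to two possible opponent replies.
tree = [[3, 5], [2, 9], [0, 7]]

# Score each move assuming the opponent then picks the worst reply for us.
move_scores = [minimax(child, maximizing=False) for child in tree]
best_move = move_scores.index(max(move_scores))
print(move_scores, "-> best move:", best_move)   # [3, 2, 0] -> best move: 0
```

Scale the tree up to a dozen plies of chess positions and you have the essence of what the supercomputer’s speed made possible.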

Machine learning is a subset of AI that provides computers with the ability to learn without being programmed for a specific task. The Deep Blue computer used a complex algorithm that told it exactly how to rank chess moves. But with machine learning the goal is to write code that allows computers to interpret data and to learn from their errors to improve whatever task they are doing.

Machine learning is enabled by the use of neural network software. This is a set of algorithms that are loosely modeled after the human brain and that are designed to recognize patterns. Recognizing patterns is one of the most important ways that people interact with the world. We learn early in life what a ‘table’ is, and over time we can recognize a whole lot of different objects that also can be called tables, and we can do this quickly.

What makes machine learning so useful is that feedback can be used to inform the computer when it makes a mistake, and the pattern recognition software can incorporate that feedback into future tasks. It is this feedback capability that lets computers learn complex tasks quickly and to constantly improve performance.
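A minimal sketch of that feedback loop is the classic perceptron update: every wrong guess nudges the weights, and within a few passes the errors disappear. This toy learns the logical AND pattern; real neural networks are enormously larger, but the principle is the same:

```python
# A toy perceptron: guess, get feedback, nudge the weights, repeat.
# Illustrative only -- real neural networks have millions of weights.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            guess = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - guess      # the feedback: how wrong was the guess?
            w[0] += lr * error * x1     # incorporate the feedback so the
            w[1] += lr * error * x2     # next round of guesses improves
            b += lr * error
    return w, b

# Labeled examples of the logical AND pattern.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predictions = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
print(predictions)   # [0, 0, 0, 1] -- it has learned AND from feedback alone
```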

One of the earliest examples of machine learning I can recall is the music classification system used by Pandora. With Pandora you can create a radio station to play music that is similar to a given artist, but even more interestingly you can create a radio station that plays music similar to a given song. The Pandora algorithm, which they call the Music Genome Project, ‘listens’ to music and identifies patterns in the music in terms of 450 musical attributes like melody, harmony, rhythm, composition, etc. It can then quickly find songs that have the most similar genome.
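One plausible way to sketch genome-style matching (the attributes and scores below are invented, and Pandora’s actual algorithm is proprietary) is to score each song as a vector of attributes and compare vectors by cosine similarity:

```python
import math

# Toy genome matching: each song is a vector of musical attributes, and the
# closest match is the song whose vector points in the most similar direction.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical scores for three attributes (melody, harmony, rhythm);
# the real system uses some 450 of these.
songs = {
    "song_a": [0.9, 0.2, 0.8],
    "song_b": [0.85, 0.25, 0.75],
    "song_c": [0.1, 0.9, 0.3],
}

seed = songs["song_a"]
matches = {name: cosine(seed, vec) for name, vec in songs.items() if name != "song_a"}
best = max(matches, key=matches.get)
print(best)   # song_b -- the most similar "genome" to song_a
```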

Deep learning is the newest field of artificial intelligence and is best described as the cutting-edge subset of machine learning. Deep learning applies big data techniques to machine learning to enable software to analyze huge databases. Deep learning can help make sense out of immense amounts of data. For example, Google might use deep learning to interpret and classify all of the pictures its search engine finds on the web. This enables Google to show you a huge number of pictures of tables or any other object upon request.

Pattern recognition doesn’t have to be just visual. It can include video, written words, speech, or raw data of any kind. I just read about a good example of deep learning last week. A computer was provided with a huge library of videos of people talking, along with the soundtracks, and was asked to learn what people were saying just from how they moved their lips. The computer would make its best guess and then compare its guess to the soundtrack. With this feedback the computer quickly mastered lip reading and is now outperforming experienced human lip readers. The computer that can do this is still not ‘smart’, but it can become incredibly proficient at certain tasks and people interpret this as intelligence.

Most of the promises from AI are now coming from deep learning. It’s the basis for self-driving cars that learn to get better all of the time. It’s the basis of the computer I read about a few months ago that is developing new medicines on its own. It’s the underlying basis for the big cloud-based personal assistants like Apple’s Siri and Amazon’s Alexa. It’s going to be the underlying technology for computer programs that start tackling white collar work functions now done by people.

New Technologies and the Business Office

I often write about new technologies that are just over the horizon. Today I thought it would be interesting to peek ten years into the future and see how the many new technologies we are seeing today will appear in the average business office of a small ISP. Consider the following:

Intelligent Digital Assistants. Within ten years we will have highly functional digital assistants to help us. These will be successors to Apple’s Siri or Amazon’s Alexa. These assistants will become part of the normal work day. When an employee is trying to find a fact these assistants will be able to quickly retrieve the needed answer. This will be done using a plain-English voice interface, and employees will no longer need to dig through a CRM system or do a Google search to find what they need. When an employee wants a reminder of where the company last bought a certain supply or wants to know the payment history of a given customer – they will just ask, and the answer will pop up on their screen or be fed into an earbud or other listening device as appropriate.

Telepresence. It will start becoming common to have meetings by telepresence, meaning there will be fewer face-to-face meetings with vendors, suppliers or customers. Telepresence using augmented reality will allow for a near-real life conversation with a person still sitting at their own home or office.

Bot-to-Bot Communications. The way you interface with many of your customers will become fully automated. For instance, if a customer wants to know the outstanding balance on their account they will ask their own digital assistant to go find the answer. Their bot will interface with the carrier’s customer service bot and the two will work together to provide the answer your customer is seeking. Since there is artificial intelligence on both sides of the transaction the customer will no longer be limited to asking about the few facts you make available today through a customer service GUI.

Self-Driving Cars. At least some of your maintenance fleets will become self-driving. This will probably become mandatory as a way to control vehicle insurance costs. Self-driving vehicles will be safer and they will always take the most direct path between locations. By freeing up driving time you will also free up technicians to do other tasks like communicating with customers or preparing for the next customer site.

Drones. While you won’t use drones a lot, they are far cheaper than a truck roll when you need to deliver something locally. It will be faster and cheaper to use drones to send a piece of electronics to a field technician or to send a new modem to a customer.

3D Printing. Offices will begin to routinely print parts needed for the business. If you need a new bracket to mount a piece of electronics you will print one that will be an exact fit rather than have to order one. Eventually you will 3D print larger items like field pedestals and other gear – meaning you don’t have to keep an inventory of parts or wait for shipments.

Artificial Intelligence. Every office will begin to cede some tasks to artificial intelligence. This may start with small things like using an AI to cover late night customer service and trouble calls. But eventually offices will trust AIs to perform paperwork and other repetitive tasks. AIs will take care of things like scheduling the next day’s technician visits, preparing bank deposit slips, or notifying customers about things like outages or scheduled repairs. AIs will eventually cut down on the need for staff. You are always going to want to have a human touch, but you won’t need to use humans for paperwork and related tasks that can be done more cheaply and precisely by an AI.

Robots. It’s a stretch to foresee physical robots in a business office environment in any near-future setting. It’s more likely that you will use small robots to do things like inspect fiber optic cables in the field or to make large fiber splices. When the time comes when a robot can do everything a field technician can do, we will all be out of jobs!

Looking Into the Future

Yesterday I presented at the South Dakota Telephone Association annual conference, with the topic being ‘A Glimpse into the Future’. In this presentation I talked about the trends that are going to affect the telecom industry over the next 5 – 10 years as well as the broader technology changes we can expect to see over the next few decades.

These are topics that I research and think about often. A lot of this blog looks at telecom trends and technologies we can expect to see over the next five years. And once in a while I indulge myself in the blog and look at the future of other technologies. Researching this presentation was fun since it made me take a fresh look at what others are predicting about our future.

I am an optimist and my research tells me that we are living at the most amazing time in mankind’s history. There is so much groundbreaking research being done in so many different fields that the announcement of new technology breakthroughs will become commonplace during the next decade. Already, barely a day goes by that I don’t see the announcement of a new technology or scientific breakthrough.

I don’t think the average person is prepared for how fast the world is going to soon be changing. The last time that the world underwent such a dramatic shift was at the beginning of the 20th century when we were introduced to electricity, cars, telephones, radios and airplanes. We are about to be hit with a tsunami of innovations far more numerous than that last big wave of change.

It’s hard for the mind to grasp the idea of exponential growth. Over the last forty years our technology has been dominated by a single exponential growth curve – the continuous growth of the speed and density of the computer chip. This one change has brought most of what we think of as modern technology – computers, the smartphone, the Internet, the cloud and our broadband and telecom networks. Anybody working in any field of electronics has been blessed for a long time by knowing that they would be able to produce a new version of their technology every few years that was faster, cheaper and smaller.
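To put a number on that exponential curve, assume the classic chip-doubling cadence of every two years (an illustrative assumption; the actual cadence has varied over the decades):

```python
# Forty years of a capability that doubles every two years -- the classic
# Moore's-law cadence, used here purely as an illustrative assumption.
years, doubling_period = 40, 2
growth_factor = 2 ** (years / doubling_period)
print(f"{growth_factor:,.0f}x")   # 2^20, roughly a million-fold improvement
```

A million-fold improvement in forty years is the single curve that carried us from room-sized computers to the smartphone.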

What is amazing about today is that there are numerous new technologies that are at the early stages of the exponential growth curve – and all happening at the same time. Just looking at the list of these technologies is exciting – robotics, advanced machine learning (artificial intelligence), nanotechnology, alternate energy, super materials, genetics and medical research. As these technologies progress we will soon be inundated with breakthroughs in all of these areas. It’s mind-boggling to envision which of these technologies will dominate our lives in a decade or two, and it’s even harder to think of how these various technical trends will intersect to produce things we can’t imagine.

What is even more exciting is that this is not even the whole list, because there are a lot of other technology trends that might become equally important in our lives. Such trends as the Internet of Things, the blockchain, natural language computing, or virtual reality might have a big impact on many of us in the very near future. I will be discussing some of these future trends over the next few months and I hope some of my readers share my enthusiasm about what is coming over the next decade or two.

I don’t usually use this blog to promote myself, but I am interested in talking to other associations and trade groups about the many topics I cover in this blog. You can contact me at blackbean2@ccgcomm.com if you are interested.

Coming Technology Trends

I love to look into the future and think about where the various technology trends are taking us. Recently, Upfront Ventures held a conference for the top technology venture capital investors, and before that conference they asked those investors what they foresaw as the biggest technology trends over the next five years. Five years is not the distant future, and it’s interesting to see where the people that invest in new businesses see us heading in that short time. Here were the most common responses:

Talent Goes Global. Innovation and tech talent have most recently been centered in the US, Europe, Japan and China. But now there are tech start-ups everywhere, and very talented young technologists can be found in all sorts of unlikely places.

For many years we have warned US kids that they are competing in a worldwide economy, and this is finally starting to come true. In a fully networked world it’s getting easier to collaborate with the brightest people from around the world, and that’s a much larger talent pool. The days of Silicon Valley being the only place developing the next big thing are probably behind us.

Sensors Everywhere. There will be a big increase in sensors that supply feedback about the world around us, in ways that were previously unimaginable. Assuming that we can find a way to tackle the huge influx of big data in real time, we are going to have a whole new way to look at much of our world.

There will be sensors on farms, in factories, in public places and in our homes and businesses that will begin providing a wide array of feedback on the environment around us. There are currently hundreds of companies working on medical monitors that are going to be able to tell us a lot more about ourselves, which will allow us to track and treat diseases and allow older folks to stay in their homes longer.

The First Real Commercial A.I. It's hard to go a week these days without hearing about an A.I. platform that has tackled the same kinds of problems we face every day. A.I. systems are now able to learn things from scratch, on their own, and self-monitor to improve their performance in specific applications.

This opens up the possibility of automating huge numbers of repetitive processes. I have a friend, a CPA, who has already automated the tax preparation process: starting from bank accounts, he can create a set of books and tax returns in an hour or two – a process that used to take a week or longer. Soon it will be totally automated and won't require much human assistance at all until the finished product is ready for review. People think that robots are going to take over physical tasks – and they will – but before then expect to see a huge wave of automation of paperwork processes like accounting, insurance claim processing, mortgage and credit card approvals, and a long list of other clerical and white-collar tasks.

Better Virtual Reality. The first generation of virtual reality is now hitting the market, but with five more years of development the technology will find its way into many facets of our lives. If you haven't tried it yet, first-generation VR is pretty spectacular, and its potential is almost mind-blowing when plotted along a normal path of technical improvement and innovation.

New Ways to Communicate. The VR investors think that we are on the verge of finding new ways to communicate. Already a lot of our communication has moved to messaging platforms and away from phone calls and email. With the incorporation of A.I., the experts predict a fully integrated communications system that easily and automatically incorporates all kinds of communications mediums. And with the further introduction of bots, companies will be able to join conversations automatically without needing large staffs to handle much of the traffic.

Is the Universal Translator Right Around the Corner?

We all love a race. There is something about seeing somebody strive to win that gets our blood stirring. But there is one big race going on now that you've probably never heard of – the race to develop deep learning.

Deep learning is a specialized field of Artificial Intelligence research that looks to teach computers to learn by structuring them to mimic the neurons in the neocortex, that portion of our brain that does all of the thinking. The field has been around for decades, with limited success, and has needed faster computers to make any real headway.

The race is between a few firms that are working to be the best in the field. Microsoft and Google have gone back and forth with public announcements of breakthroughs, while other companies like Facebook and China's Baidu are keeping their results quieter. It's definitely a race, because each announced breakthrough is measured against the competitors' results.

The current public race deals with pattern recognition. The various teams are trying to get a computer to identify objects in a defined data set of millions of pictures. In September Google announced that it had the best results on this test, and just this month Microsoft said its computers beat not only Google's but also did better than people can do on the test.

All of the companies involved readily admit that their results are still far below what a human can do naturally in the real world, but they have made huge strides. One of the best-known demonstrations was done last summer by Google, which had its computer look at over 10 million YouTube videos and asked it to identify cats. The computer did twice as well as in any previous test, which was particularly impressive since the Google team had not defined for the computer ahead of time what a cat was.

Some deep learning techniques are built into IBM's Watson computer, which beat the best human champions on Jeopardy. Watson is currently being groomed to help doctors make diagnoses, particularly in the third world where there is a huge shortage of doctors. IBM has also started selling time on the machine to anybody, and there is no telling all of the ways it is now being used.

Probably the most interesting current research is in teaching computers to learn on their own. This is done today by stacking multiple layers of 'neurons'. The first layer learns a basic concept, like recognizing somebody speaking the letter S. Several first-layer outputs are fed to a second layer of neurons, which can then recognize more complex patterns. The process is repeated until the computer is able to recognize complex sounds.
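The layered idea described above can be sketched in a few lines of plain Python. This is a deliberately tiny illustration, not anything like the real systems: the layer sizes are made up, and the weights here are drawn at random just to show the structure, whereas a real deep learning system learns its weights from data.

```python
import math
import random

def layer(inputs, weights, biases):
    """One layer of 'neurons': each neuron takes a weighted sum of every
    input from the layer below and passes it through a nonlinearity."""
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

random.seed(0)
def rand_layer(n_out, n_in):
    # Random weights purely for illustration; real networks learn these.
    return ([[random.gauss(0, 1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

# A toy 3-layer stack: each layer feeds the next, so later layers can
# respond to progressively more complex combinations of features.
x = [random.gauss(0, 1) for _ in range(8)]   # raw input, e.g. a slice of audio
h1 = layer(x,  *rand_layer(16, 8))           # simple features
h2 = layer(h1, *rand_layer(16, 16))          # combinations of features
out = layer(h2, *rand_layer(4, 16))          # final pattern scores

print(len(out))  # 4 scores, one per pattern the top layer looks for
```

The point is simply the wiring: each layer only ever sees the outputs of the layer below it, which is what lets the stack build complex patterns out of simple ones.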

The computers being used for this research are already impressive. The Google computer that learned to recognize cats had a billion connections and was 70% better at recognizing objects than any prior computer. For now, the breakthroughs in the field are being accomplished by applying brute computing force – the cat-test computer used over 16,000 computer processors, something that only a company like Google or Microsoft has available.

Computer scientists generally agree that we are probably still a few decades away from a time when computers can actually learn and think on their own. We need a few more turns of Moore's Law for the speed of computers to increase and the size of processors to decrease. But that does not mean there aren't a lot of current real-life applications that can benefit from the current generation of deep learning computers.

There are real-world benefits of the research today. For instance, Google has used this research to improve the speech recognition in Android smartphones. But what is even more exciting is where this research is headed for the future. Sergey Brin says that his ultimate goal is to build a benign version of HAL from 2001: A Space Odyssey. It’s likely to take multiple approaches in addition to deep learning to get to such a computer.

But long before a HAL-like computer we could have some very useful real-world applications of deep learning. Computers could monitor complex machines like electric generators and predict problems before they occur. They could monitor traffic patterns and change traffic lights in real time to eliminate traffic jams. They could enable self-driving cars. They could produce a universal translator that lets people who speak different languages converse in real time. In fact, in October 2014, Microsoft researcher Rick Rashid gave a lecture in China. The deep learning computer transcribed his spoken lecture into written text with a 7% error rate, then translated it into Chinese and spoke it to the crowd while simulating his voice. It seems that with deep learning we are not far away from having the universal translator promised to us by science fiction.
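The translator demo above is really a pipeline of three stages: transcribe speech to text, translate the text, then synthesize speech. Here is a toy sketch of that pipeline shape in Python – every stage is a stand-in I've made up for illustration (a real system would use a deep learning model for each step, and a real translator is far more than a word-for-word dictionary).

```python
# Each function below is a placeholder for a deep-learning model.
def transcribe(audio):
    # Stand-in for speech recognition: pretend the audio is already text.
    return audio

def translate(text, lexicon):
    # Word-for-word dictionary lookup -- far cruder than a real translator,
    # but it shows where the translation stage sits in the pipeline.
    return " ".join(lexicon.get(word, word) for word in text.split())

def synthesize(text):
    # Stand-in for text-to-speech (voice simulation in the real demo).
    return f"[spoken] {text}"

# Hypothetical toy lexicon, just for the example.
lexicon = {"hello": "ni hao", "world": "shijie"}
print(synthesize(translate(transcribe("hello world"), lexicon)))
# -> [spoken] ni hao shijie
```

The value of the pipeline design is that each stage can be improved independently – better speech recognition helps the whole chain without touching the translator.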

The Start of the Information Age

A few weeks ago I wrote a blog about the key events in the history of telecom. Today I am going to look at one of those events: how today's information age sprang out of a paper published in 1948 titled "A Mathematical Theory of Communication" by Claude Shannon. At the time of publication he was a 32-year-old researcher at Bell Laboratories.

But even prior to that paper he had made a name for himself while at MIT. His master's thesis there, "A Symbolic Analysis of Relay and Switching Circuits," pointed out that the logical values of true and false could easily be substituted with a one and a zero, and that this would allow physical relays to perform logical calculations. Many have called it the most important master's thesis of the twentieth century.

The thesis was a profound breakthrough at the time, written a decade before the development of computer components. It showed how a machine could be made to perform logical calculations rather than being limited to mathematical ones. This made Shannon the first to realize that a machine could be made to mimic the actions of human thought, and some call this work the genesis of artificial intelligence. It provided a push to develop computers, since it made clear that machines could do far more than merely calculate.
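Shannon's insight can be shown in miniature: once true/false becomes 1/0, switching circuits can do arithmetic. The sketch below (my illustration, not anything from Shannon's thesis) builds a half-adder – a circuit that adds two one-bit numbers – out of nothing but logic gates.

```python
def half_adder(a, b):
    """Add two bits using only logic operations (no arithmetic).
    XOR gives the sum bit, AND gives the carry bit -- exactly what a
    pair of relay circuits could compute."""
    total = a ^ b        # XOR gate: sum bit
    carry = a & b        # AND gate: carry bit
    return carry, total

# 1 + 1 = binary 10: carry of 1, sum of 0.
print(half_adder(1, 1))  # -> (1, 0)
```

Chain enough of these together and a purely logical machine adds numbers of any size – which is the leap from "relays switch circuits" to "relays compute."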

Shannon joined Bell Labs as WWII was looming and went to work immediately on military projects like cryptography and fire-control systems for antiaircraft guns. But in his spare time Shannon worked on an idea he referred to as a fundamental theory of communications. He saw that it was possible to quantify knowledge through the use of binary digits.

This paper was one of those rare breakthroughs in science that are truly original rather than a refinement of earlier work. Shannon saw information in a way that nobody had ever thought of it before. He showed that information could be quantified in a very precise way, and his paper was the first to use the word 'bit' to describe a discrete piece of information.
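The quantification at the heart of the paper is Shannon's entropy formula, which measures the average information per symbol in bits. A quick sketch of the idea:

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy H = -sum(p * log2(p)): average information,
    in bits, carried by one symbol drawn with these probabilities."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin flip carries exactly one bit of information...
print(entropy_bits([0.5, 0.5]))   # -> 1.0
# ...while a heavily biased coin carries less, because its outcome
# is more predictable.
print(entropy_bits([0.9, 0.1]))   # roughly 0.47 bits
```

This is why the bit works as a universal unit: the formula puts a precise number on how much information any source produces, regardless of whether it is voice, text, or pictures.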

For those who might be interested, a copy of this paper is here. I read this many years ago and I still find it well worth reading. The paper was unique and so clearly written that it is still used today to teach at MIT.

What Shannon had done was show how we could measure and quantify the world around us. He made it clear how all measurable data in the world could be captured precisely and then transmitted without losing any precision. Since this was developed at Bell Labs, one of the first applications of the concept was telephone signals. In the lab they were able to convert a voice signal into a digital code of 1s and 0s and then transmit it to be decoded somewhere else. The results were just as predicted: the voice signal that came out at the receiving end was as good as what was recorded at the transmitting end. Until then, voice signals had been analog, which meant that any interference on the line between callers affected the quality of the call.
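The digitization described above can be sketched in a few lines: sample the analog waveform, round each sample to one of a fixed set of levels (a few bits each), send the bits, and rebuild the signal at the far end. This is only an illustrative sketch of the principle – the bit depth and sample rate here are arbitrary choices for the example.

```python
import math

BITS = 8                 # bits per sample (arbitrary for this sketch)
LEVELS = 2 ** BITS       # 256 possible quantization levels

def quantize(sample):
    """Map an analog sample in [-1.0, 1.0] to an integer code."""
    code = int((sample + 1.0) / 2.0 * (LEVELS - 1))
    return max(0, min(LEVELS - 1, code))

def dequantize(code):
    """Map an integer code back to an approximate analog sample."""
    return code / (LEVELS - 1) * 2.0 - 1.0

# "Record" a 440 Hz tone at 8000 samples/sec, send it as integer codes,
# and rebuild it at the receiving end.
signal = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(16)]
codes = [quantize(s) for s in signal]
rebuilt = [dequantize(c) for c in codes]

# The reconstruction error is bounded by one quantization step, and the
# digital codes themselves survive a noisy line unchanged.
assert all(abs(a - b) < 2.0 / (LEVELS - 1) for a, b in zip(signal, rebuilt))
```

The key point Shannon's theory guarantees: once the samples are codes, interference on the line no longer degrades the voice – as long as the 1s and 0s arrive intact, the receiving end rebuilds the same signal that was sent.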

But of course, voice is not the only thing that can be encoded as digital signals, and as a society we have converted just about everything imaginable into 1s and 0s. Over time we applied digital coding to music, pictures, film, and text, and today everything on the Internet has been digitized.

The world reacted quickly to Shannon's paper, and accolades were everywhere. Within two years everybody in science was talking about information theory and applying it to their particular fields of research. Shannon was not comfortable with the fame that came from his paper and slowly withdrew from public life. He left Bell Labs and returned to teach at MIT, but he slowly withdrew from there as well and stopped teaching by the mid-1960s.

We owe a huge debt to Claude Shannon. His original thought gave rise to the components that let computers ‘think’, which gave a push to the nascent computer industry and was the genesis of the field of artificial intelligence. And he also developed information theory which is the basis for everything digital that we do today. His work was unique and probably has more real-world applications than anything else developed in the 20th century.