The North American Numbering Plan

Today's blog is another in my series looking at important developments in the history of the telecom industry, this time the North American Numbering Plan (NANP). This may seem a bit mundane, but it's one of those things that was done correctly and that has made it easy for the industry to grow and to adapt to changes quickly. The NANP was originally operated directly by AT&T, but over time the administration was given first to Lockheed Martin and then to Neustar, which still administers it today.

The NANP was first devised in the 1940s and introduced in 1947. It replaced a hodgepodge of different numbering schemes in different parts of the US. The numbering plan was expanded over the years to include Canada and much of the Caribbean. The first numbering plan assigned the first three digits of a phone number to Numbering Plan Areas (NPAs), now just called area codes. There was originally room for 152 area codes, 86 of which were assigned at the outset, and each area code could contain up to 540 central offices. Area codes were assigned to minimize the number of pulses on a rotary dial for the largest number of people, and so New York City, as the most populous place, got area code 212, the available code that required the fewest rotary pulses.
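
Just for fun, here is a quick sketch of that rotary arithmetic. It is my own illustration rather than anything from the original plan documents, and it assumes the original area code format: first digit 2 through 9, middle digit 0 or 1, with N11 combinations excluded because those were reserved for service codes.

```python
def pulses(digit: int) -> int:
    """A rotary dial sends one pulse per digit, except 0, which sends ten."""
    return 10 if digit == 0 else digit

codes = []
for first in range(2, 10):           # area codes could not start with 0 or 1
    for middle in (0, 1):            # the middle digit marked an area code
        for last in range(10):
            if middle == 1 and last == 1:
                continue             # skip N11 codes, reserved for services
            code = (first, middle, last)
            codes.append((sum(pulses(d) for d in code), code))

codes.sort()
print(codes[:3])   # (5, (2, 1, 2)) comes first -- area code 212
```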

Area codes made it easier for operators to place long distance calls. They could quickly tell from the phone number where a call was supposed to route rather than having to know the location of every small town in America. The first direct-dialed long distance call was made on November 10, 1951 between Englewood, NJ and Alameda, CA. Direct dialing quickly spread to larger cities and was introduced everywhere by the early 1960s, eliminating the need for most operators.

Digits 4 through 6 of a phone number designated the central office that handled a call, which in the early days meant a physical location and a switch. When facing a shortage of central office codes in the late 1980s, Bellcore required that all long distance calls begin with the prefix 1+ to distinguish them from local calls and from operator calls that begin with 0+. This allowed central office codes with a 0 or 1 as the middle digit, which alleviated the shortage.
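
For readers who like to see the structure spelled out, here is a minimal sketch (my own illustration, not any official validation routine) of how a ten-digit number breaks apart under the plan described above: a three-digit area code, a three-digit central office code and a four-digit line number, with neither of the first two allowed to start with 0 or 1 since those digits reach the operator or flag a long distance call.

```python
def parse_nanp(number: str):
    """Split a NANP number into (area code, central office code, line number)."""
    digits = "".join(ch for ch in number if ch.isdigit())
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]            # strip the 1+ long distance prefix
    if len(digits) != 10:
        raise ValueError("expected a ten-digit NANP number")
    npa, office, line = digits[:3], digits[3:6], digits[6:]
    for part in (npa, office):
        if part[0] in "01":
            raise ValueError("area and office codes cannot start with 0 or 1")
    return npa, office, line

print(parse_nanp("1 (212) 555-0123"))   # ('212', '555', '0123')
```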

Starting in the 1990s there was an explosion in demand for central office codes and telephone numbers, driven mostly by the creation of CLECs in the US. Up until that time, a carrier that was assigned a new central office code got the entire block of 10,000 numbers behind it, even if it was only going to serve a few customers, and there was not yet any mechanism for carriers to share a block. The shortage of codes was addressed with two relief strategies – splits and overlays. A split divided the territory of an existing area code into two or more pieces, each with its own area code and therefore its own fresh supply of central office codes. An overlay instead layered an entire new area code on top of an existing one, which likewise provided a whole new set of central office codes. The public hated overlays at first because they could no longer automatically know the area code of somebody who lived in their area.

Anywhere there was an area code overlay it became mandatory for callers to dial ten digits, because there could now be multiple local numbers sharing the same last seven digits. Over time, as overlays spread, ten-digit local dialing became the norm in more and more of the country.

By 2000 there was a growing shortage of phone numbers, and this was solved by number pooling, which assigns blocks of numbers in groups of 1,000 instead of 10,000. First, this allowed the recovery of a huge number of unused thousand-blocks from existing carriers. It also means there is a greater chance that all of the numbers in a block will eventually get used. About this same time the FCC mandated number portability, which allowed customers to keep their number when moving in some situations. At first a customer could keep a number only if they stayed within the same local calling scope, but over time number portability has been expanded and now includes portability between landlines and cellular phones. Cellular numbers are completely geographically portable and you can take a cell number anywhere, but there are still some restrictions on geographically moving landline numbers.

It is currently estimated that we have enough telephone numbers to last until sometime in the 2030s. At that point it will be necessary to add another digit to the numbering plan. It is amazing to me how gracefully the system has changed over the years as it ran out of area codes, central office codes and then numbers. It's easy to discount the value of the NANP, but you have to admire how it has been able to accommodate the need for over 100 million new cell phone numbers over the last decade. The NANP was devised in a deliberate fashion but has been flexible enough to accommodate huge changes in the industry that could not have been contemplated in 1947.

Retiring the Copper Networks

Attached is a copy of FCC Docket DA-14-1272 where Verizon is asking to discontinue copper service in the towns of Lynnfield, MA, Farmingdale, NJ, Belle Harbor, NY, Orchard Park, NY, Hummelstown, PA and Ocean View, VA. In this docket the FCC is asking for public comments before it will consider the request.

In these particular towns Verizon is claiming that almost all of the households are already served by fiber, and they are seeking to move the remaining households to fiber so they can disconnect and discontinue the use of the copper networks there. And if only five percent of the lines in these towns are still on copper, that might be a reasonable request by Verizon. But this does prompt me to talk about the whole idea of discontinuing older copper networks, because both Verizon and AT&T have said that they would like to eliminate most of their copper by 2020.

In the case of Verizon it's a tall order to get rid of all copper because they still have 4.9 million customers on copper, compared to 5.5 million customers who have been moved to fiber. AT&T has a much larger problem since they don't use fiber to serve residential customers except in a few rare cases. But both big carriers have made it a priority to get people off copper.

Many customers are unhappy with the idea of losing their copper and many have complained that they are getting a lot of pressure from the big telcos to drop their copper. There are numerous Verizon customers who say they are contacted monthly to get off the copper and they feel like they are being harassed. There are a few different issues to consider when talking about this topic.

Not everybody that loses copper will get fiber. Of the big telcos, only Verizon even owns a residential fiber network. But even the Verizon FiOS network doesn't go everywhere, and they are not expanding the fiber network to new neighborhoods. For customers who live where there is no fiber, the goal is to move them to a DSL-based service or, in the case of AT&T, to cellular phones.

Interestingly, when a telco moves a customer from POTS (Plain Old Telephone Service) on copper to VoIP on DSL, the telco keeps using the identical old copper wires. They will have changed the technology being used from analog to digital. But more importantly, in most cases they will have changed the customer from a regulated product to an unregulated one. And that is one of the primary motivations for getting people off POTS.

POTS service is fully covered by a slew of regulations that are aimed at protecting consumers, such as carrier-of-last-resort obligations that require telcos to connect anybody who asks for service. But in most states those same protections don't apply to VoIP or fiber service. The most important protection that customers lose with VoIP is the price cap, meaning that the prices for VoIP or fiber service can be raised at any time by any amount. And the carrier-of-last-resort obligations have real-life impact even for existing customers. If a customer is late paying their bill on a VoIP network, Verizon would be within its rights to refuse to reconnect them to service even after they pay.

There are customers who want to stay on POTS on copper for various reasons. One reason is that POTS phones are powered by the copper network and so they keep working when the power goes out. There are still parts of the country where the power goes out regularly or where there is a reasonable expectation of hurricanes or ice storms. For example, houses that still had copper could make calls for up to a week after Hurricane Sandy.

Another reason to keep copper is for security and medical monitoring. The copper POTS network has always been very reliable, but it is much more common for households to lose Internet service. Once a phone is converted to VoIP, any time the customer's Internet connection is down, the security and medical monitoring services that rely on that phone line stop working.

The FCC is going to be flooded with requests like this one to disconnect people from POTS. Certainly the copper networks are getting old. There might be merit in disconnecting copper in towns that are almost entirely fiber and where the customers losing POTS will move to fiber. In most cases fiber seems to be as reliable as copper, although it cannot power the phones when the electricity goes out.

But it seems somewhat ludicrous for the FCC to approve shuttling people from POTS to DSL while still using the same old copper lines. That is clearly being done as a way to avoid regulation and customer protections, not as a way for the carrier to save money. And it is clearly not in the customer's best interest to be moved from POTS to cellular.

New Tech – September 2014

As I do from time to time, today I highlight some of the more interesting technologies that I have run across in my reading. A few of these might have a big impact on our industry.

First is news that IBM has developed a new storage method that would be a giant leap forward in storage density. The technology is being called racetrack memory. It works by lining up tiny magnets one atomic layer deep on sheets. The magnetic states of the atoms can be shifted up and down in a 'highly coherent manner' by the application of a simple current.

That in itself is interesting, but when these layers are laid atop one another in sheets there is a multiplicative effect in the amount of storage available in a given space. IBM has already built a small flash device using the technology with 15 layers and it increased the storage capacity of the flash drive by a factor of 100.

The small size also means a much faster read/write time and reduced power requirements. One of the biggest advantages of the technology is that there are no moving parts. This makes the technology infinitely rewritable, which means it ought to never wear out. IBM believes that this can be manufactured affordably and that it will become the new storage standard. As they put together even more layers than the 15 in the prototype they will get an even bigger multiplier of storage capacity compared to any of today's storage technologies. Expect to see multi-terabyte flash drives and cell phones within a few years.

The next new technology comes from a research team at the University of Washington. They call the new technology WiFi Backscatter and they will be formally publishing the paper on the research later this month. The promise of the technology is to create the ability to communicate with small sensors that won’t need to be powered in any other way.

WiFi Backscatter can communicate with battery-free RF devices by either reflecting or not reflecting a WiFi signal between a router and some other device like a laptop. The interruption in the reflections can be read as a binary code of off and on signals giving the RF device a way to communicate with the world without power. The team has not only detailed how this will work, but they have built a prototype that involves a tiny antenna and circuitry that can be connected to a wide range of electronic devices. This first generation antenna array draws a small amount of power from the electronic device, but the team believes that this ought to work with battery-free sensors and other IoT devices. This technique could be the first technology to enable multiple tiny IoT sensors scattered throughout our environment.
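
To picture how a reflection can carry data, here is a toy sketch, my own illustration and not the University of Washington team's actual decoder. It treats 'reflecting' and 'not reflecting' as on and off, and turns a made-up trace of received signal strength readings into bits.

```python
def decode_backscatter(rssi_samples, samples_per_bit=4):
    """Average the signal strength over each bit period and threshold at the midpoint."""
    threshold = (max(rssi_samples) + min(rssi_samples)) / 2
    bits = []
    for i in range(0, len(rssi_samples), samples_per_bit):
        window = rssi_samples[i:i + samples_per_bit]
        bits.append(1 if sum(window) / len(window) > threshold else 0)
    return bits

# Made-up readings: slightly stronger while the tag reflects, weaker while it absorbs.
trace = [-40, -41, -40, -39, -52, -53, -51, -52, -40, -40, -41, -39]
print(decode_backscatter(trace))   # [1, 0, 1]
```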

Finally, scientists at Michigan State have developed a 'transparent luminescent solar concentrator' that can generate electricity from any clear surface such as a window or a cellphone screen. The device works by absorbing invisible light in the ultraviolet and infrared bands and re-emitting it at a concentrated infrared frequency that triggers tiny photovoltaic cells embedded around the edges of the clear surface.

The goal of the team is to make the technology more efficient. The current prototype is only about 1% efficient at converting light into electricity, but they believe they can get this up to about 5% efficiency. For comparison, there are various non-transparent solar concentrators today that are about 7% efficient.

The big advantage of this technology is the transparency. Imagine a self-powering cellphone or a high-rise glass building that generates a lot of its power from the windows. This technology will allow putting solar generation in places that were never before contemplated. In a blog from last month I noted a new solar technology that could be painted onto any surface. It looks like we are headed for a time when any portion of a building can be generating electricity locally including the roof, the walls and the windows.

“Directory Assistance . . . . May I Help You?”

I am old enough to remember a time when it was a real challenge to always have the phone number of somebody you needed to call. I had a pretty good memory for numbers and so I memorized the twenty or thirty numbers I called the most. Many people relied on a little black book, but this never worked for me since I always seemed to lose or misplace the book when I most needed it. It was not unusual to visit a home and see a list of numbers sitting by each telephone. And every desk in a business had a Rolodex.

I also recall the frustration of wrestling with a phone book when I lived in the DC metropolitan area. The white pages were huge and it was often really hard to find the number of somebody with an unusually spelled name or somebody with a very common name such as J. Smith. And the Yellow Pages were even worse. I remember how hard it was sometimes to figure out which odd category the evil people at the Yellow Pages used to hide what I was looking for.

You always had the option of calling information. I recall that I got a few free directory calls each month by having a local phone number, but after that each call to 411 was relatively expensive. I remember seeing bills for customers who had spent $100 or more in calls to 411 and it always amazed me. Who needed to talk to that many people that they didn’t know?

Calling 411 was not always successful because sometimes the operator was not able to find the number you wanted, particularly for a business. If you didn't remember the exact name of the business it was likely that you wouldn't find it through 411. And the whole experience of calling information got worse after they implemented voice recognition and automated 411. It seemed like the computer could never comprehend what I was saying, or else it just handed me a wrong result and then hung up on me.

I remember being impressed by an early version of a cell phone, not because it was small and portable, but because it could store a large library of numbers. All of a sudden you could keep your little black book in your phone. Even if you didn’t store a number you could scroll back through calls and hopefully find the number you wanted. But I remember the frustration of not being able to transfer my phone list from an old phone to a new one, which happened to me after laundering my cell phone.

At about the same time that cell phones arrived, the web also grew and many companies began having web pages. This gave you the opportunity to not only see their products or services but also get their phone number. But over time the strangest thing happened: I found it harder and harder to find phone numbers on many web sites. Companies often encouraged you to contact them by email or by using a question box on their 'Contact Us' page, and an actual phone number to call became harder and harder to find.

I found that odd because there have been a number of studies done on the effectiveness of web advertising, and such studies have always shown that getting a customer to call you is far more productive than having them contact you in any other way. The general marketing metric is that getting customers to click to your web site converts only 1 or 2 percent of them into actual customers in most cases, but getting customers to call you results in a new customer 30 to 50 percent of the time.

Last year Google started embedding phone numbers in its search results, meaning that when the search engine recognizes a telephone number it allows a cellphone user to call that number by clicking on it. Google says that in the last year the embedded numbers have resulted in 70 billion calls to businesses. Twitter and a number of other web sites have also been testing click-to-call, and one imagines it will soon be ubiquitous.

So if you have a company web site, make sure that your phone number is prominent on the front page. If it is, then when somebody looks at your web site on a cell phone they are likely going to be able to call you with a click from the web site. Google is doing this as yet one more way to make money: they will track and charge large companies for clicks on web sites the same way they sell clicks on advertising. But a small company can go along for the ride just by making sure that its phone number is prominent on the first page of its web site.

How Many Households Have Broadband? – Part II

Yesterday I described a few reasons why the National Broadband Map does not accurately capture who has broadband. My experience tells me that in rural America there are many places that deploy a 'broadband' technology without achieving broadband speeds. So I think there are many places where the Map overstates the speeds that are actually available. But there are also places where the Map shows broadband coverage where there is none and today I'll look at a real life example.

There are several consequences of overstating actual broadband speeds. First, the Map is used by the FCC and others to talk about the state of broadband in the country. The speech that Chairman Wheeler gave last week assumes that the Map is right. To the extent that the Map stretches the truth about broadband speeds and availability, we are basing policies upon incorrect facts.

Another use of the Map is to define the specific areas that don't have broadband for purposes of determining where federal and state broadband grants and Connect America funds can be used. As an example, the FCC's current $100 million Experimental Grants are aimed at areas that are either unserved (meaning that 90% of the households don't have access to broadband) or underserved (meaning that at least 50% of the households there don't have access to broadband).

Let’s look at a real life example of how the National Broadband Map doesn’t compare well to the real world. Below I am going to give you a link to some photographs, but before I do I want to explain what the pictures show. These pictures came to me from my good friends Melvin and Madonna Yawakie from TICOM. They show some CenturyLink telephone gear on the Wind River Indian Reservation in Wyoming. This area was formerly served by Qwest, and before that US West and before that the old Ma Bell version of AT&T.

There are a few interesting things to note in these pictures. First, they show that the customers in this area are served using an old technology called ALC carrier. I was very surprised to see this equipment still working because it is over forty years old, which is a remarkable age for field-installed electronics of any kind. I guess it shows that Ma Bell built things to last. This carrier was installed in a lot of rural areas in the 70s as part of an FCC initiative to get people off party lines. The technology works by using ISDN to put two to four phone signals over one strand of copper. The ALC carrier gives each customer on a party line their own phone number and a private connection while the customers continue to share one strand of copper. The other thing to notice in the pictures is that some of the copper lines are strung over fencing instead of being on poles. Locals say that the wires have been that way for a long time.

What these pictures show is an area that has no broadband. There is no wireless ISP and no cable company. The only broadband option is DSL, and DSL cannot operate on lines that use ALC carrier. CenturyLink (and their predecessor Qwest) has told the tribe multiple times since 2009 that they have no intention of upgrading the copper in the area or of bringing them broadband.

There are plenty of rural places in the US that have no broadband, so there is no surprise that this area does not have it. And it is no surprise that the areas without broadband are served by ancient technologies and by copper in poor working condition. What is surprising is that the National Broadband Map shows the tribal areas as largely having broadband available. That means the tribal areas are not eligible for the FCC Experimental Grants, nor will they be eligible for the Connect America Funds that are going to be available in the next few years to bring broadband to areas like this. The tribe is willing to build fiber in the area, but they need the help of these federal funds that are intended for exactly this kind of situation.

Unfortunately there are examples like this all over rural America. I have clients all over the US who say that the National Broadband Map is wrong in their area. What is most bothersome to me is that there is no easy way to fix the Map. The FCC has said that they will not award a grant to any area where there is any contention about the designation of the broadband in the area as defined by the Map. This means that a telco or WISP can refuse to correct errors in the Map and thus stave off competition from those who are willing to invest in broadband in areas that need it badly.

What is needed is some streamlined way for people to correct the National Broadband Map. The carriers seem unable (or unwilling) to define where their broadband coverage stops. And that is a bit ironic because in rural neighborhoods everybody knows where broadband is and isn’t. They will be able to tell you that “Nobody past Joe’s house can get DSL”. While there seem to be a lot of errors in the Map today the situation is only going to get worse since the FCC is expected to soon define broadband to be at least 10 Mbps download instead of the current paltry 4 Mbps. That change is going to declare overnight that many more millions of Americans don’t have broadband. But unfortunately the Map might say otherwise.

How Many Households Have Broadband? – Part I

FCC Chairman Wheeler made a speech last week about the lack of broadband competition in the country. As part of the speech he released four bar charts showing the percentage of households that have competitive alternatives at download speeds of 4 Mbps, 10 Mbps, 25 Mbps and 50 Mbps. His conclusion was that a large portion of the households in the US can only buy broadband from one or two service providers. I was glad to hear him talking about this.

But unfortunately there is a lot of inaccuracy in the underlying data that he used to come to this conclusion, particularly in the charts showing the slower speeds. The data that the FCC relies on for measuring broadband is known as the National Broadband Map. While the data gathered for that effort results in a Map, it's really a database, by census block, that shows the number of providers and the fastest data speed they offer in a given area.

A census block is the smallest area of population summarized by the US Census. It is generally bounded by streets and roads and will contain from 200 – 700 homes (with the more populated blocks generally just in urban areas with high-rise housing). A typical rural census block is going to have 200 – 400 homes. The National Broadband Map gathers data from carriers that describe the broadband services they offer in each census block. As it turns out, self-reporting by carriers is a big problem when it comes to the accuracy of the Map. In tomorrow’s blog I will show a real life example of how this affects new deployment of rural broadband.

Broadband service providers don't generally track their networks by census block, so part of the problem is that census blocks don't match the physical way that broadband networks are deployed in a rural area. Anybody who lives in rural America understands how utilities work there. In every small town there is a very definite line where utilities like city water and cable TV stop. Those utilities get to the edge of the area where people live and they stop. That doesn't match up well with census blocks, which tend to extend outward from many small towns to include rural areas. Rural census blocks are not going to conveniently stop where the utilities stop.

There are three widely used rural broadband technologies – cable modem, DSL and fixed wireless. Let’s look briefly at how each of these match with the broadband mapping effort. Cable is the easiest because every cable network has a discrete boundary. There is some customer at the end of every cable route and the next house down the road cannot get cable. So it is not too likely that the cable companies are claiming to serve census blocks where they have no customers.

DSL and fixed wireless are a lot trickier. Both of these technologies share the characteristic that the bandwidth available with the technology drops quickly with distance. For example, DSL can transmit over a few miles of copper from the last DSLAM in the network. The household right next to that DSLAM can get the full speed offered by the specific brand of DSL while the last house at the end of the DSL signal gets only a small fraction of the speed, often with speeds that are not really any better than dial-up.

The same thing happens with fixed wireless. A WISP will install a transmitter on a tower or tall structure and the customers close to that tower will get decent broadband, and those transmitters tend to be installed in small towns where people live. But wireless broadband speeds drop rapidly with distance from the transmitter and if you go more than a few miles from any tower there is barely any bandwidth.

Both telcos and WISPs input their coverage areas into the National Broadband Map database. And in doing so, it appears that they claim broadband anywhere they can provide service of any kind. But for DSL and fixed wireless, that service-of-any-kind area is much larger than the area where they can deliver actual broadband. Remember that broadband is currently defined as the ability to deliver a 4 Mbps download. Because of the nature of their technologies, a lot of the people who can buy something from them will get a product that is slower than 4 Mbps, and at the outer ends of their networks speeds are far slower than that.

I don't necessarily want to say that the carriers inputting into the system are lying, because in a lot of cases customers can call and order broadband and a technician will show up and install a DSL modem or a wireless antenna. But if that customer is too far away from the network hub, then the product that gets delivered to them is not broadband. It is something slower than the FCC definition of broadband, though probably better than dial-up. But customers with slow connections can't use the Internet to watch Netflix or do a lot of the basic things that require actual broadband. And as each year goes by, and as more and more video is built into everything we do on the Internet, there are more and more web sites and services that are out of reach for such customers.

But unfortunately, there are also areas where it appears that the carriers have declared that they offer broadband where there isn't any. If you were to draw something like a 5-mile circle around every rural DSLAM and every WISP transmitter, you would see the sort of broadband coverage that many rural carriers are claiming. But the reality is that broadband can only be delivered for 2-3 miles, which means that the actual broadband coverage area is maybe only a fourth of what is shown on the Map. If you go door-to-door and talk to people outside of rural towns you will find a very different story than what is shown on the National Broadband Map. Unfortunately, the Chairman's numbers are skewed by these weaknesses in the Map. There are a lot more rural Americans without broadband than are counted in the Map, and rural America has far fewer broadband options than what the Chairman's charts claim.
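
To put numbers on that 'only a fourth', here is the back-of-the-envelope arithmetic using the radii from the paragraph above; the radii themselves are just the illustrative figures already mentioned, not measured data.

```python
import math

claimed_radius = 5.0   # miles a carrier might claim around a DSLAM or WISP transmitter
actual_radius = 2.5    # miles at which speeds still clear the broadband definition

coverage_ratio = (math.pi * actual_radius ** 2) / (math.pi * claimed_radius ** 2)
print(f"{coverage_ratio:.0%} of the claimed coverage area")   # 25%
```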

Tomorrow, a real life example.

Beyond a Tipping Point

A few weeks ago I wrote a blog called A Tipping Point for the Telecom Industry that looked at the consequences of the revolution in technology that is sweeping our industry. In that blog I made a number of predictions about the natural consequences of drastically cheaper cloud services, such as the mass migration of IT services to the cloud, massive consolidation of switch and router makers, a shift to software defined networks and a consequent explosion in specialized cloud software.

I recently read an interview in Business Insider with Matthew Prince, the founder of CloudFlare. It's a company that many of you may never have heard of, but which today is carrying 5% of the traffic on the web and growing rapidly. CloudFlare started as a cyber-security service for businesses and its primary product helped companies fend off hacker attacks. But the company has also developed a suite of other cloud services. The combination of services has been so effective that the company says it has recently been adding 5,000 new customers per day and is growing at an annual rate of 450%.

In that interview Prince pointed out two trends that define how quickly the traditional market is changing. The first trend is that the functions traditionally served by hardware from companies like Cisco and HP are moving to the cloud, to companies like Amazon and CloudFlare. The second is that companies are quickly unbundling from traditional software packages.

CloudFlare is directly taking on the router and switching functions that have been served most successfully by Cisco. CloudFlare offers services such as routing and switching, load balancing, security, DDoS mitigation and performance acceleration. But by being cloud-based, the CloudFlare services are less expensive, nimbler and don’t require detailed knowledge of Cisco’s proprietary software. Cisco has had an amazing run in the industry and has had huge earnings for decades. Its model has been based upon performing network functions very well, but at a cost. Cisco sells fairly expensive boxes that then come with even more expensive annual maintenance agreements. Companies also need to hire technicians and engineers with Cisco certifications in order to operate a Cisco network.

But the same trends that are dropping the cost of cloud services exponentially are going to kill Cisco’s business model. It’s now possible for a company like CloudFlare to use brute computing power in data centers to perform the same functions as Cisco. Companies no longer need to buy boxes and only need to pay for the specific network functions they need. And companies no longer need to rely on expensive technicians with a Cisco bias. Companies can also be nimble and can change the network on the fly as needed without having to wait for boxes and having to plan for expensive network cutovers.

This change is a direct result of cheaper computing resources. The relentless exponential improvements in most of the major components of the computer world have resulted in a new world order where centralized computing in the cloud is now significantly cheaper than local computing. I summed it up in my last blog saying that 2014 will be remembered as the year the cloud won. It will take a few years, but a cloud that is cheaper today and that is going to continue to get exponentially cheaper will break the business models for companies like Cisco, HP, Dell and IBM. Where there were hundreds of companies making routers and other network components there will soon be only a few companies – those that are the preferred vendors of the companies that control the cloud.

The reverse is happening with software. For the last few decades large corporations have largely used giant software packages from SAP, Oracle and Microsoft. These huge packages integrated all of the software functions of a business, from databases and CRM to accounting, sales and operations. But these software packages were incredibly expensive. They were proprietary and cumbersome to learn. And they never exactly fit what a company wanted, and it was typical for the company to bend to meet the limitations of the software instead of changing the software to fit the company.

But this is rapidly changing because the world is being flooded by a generation of new software that handles individual functions better than the big packages did. There are now dozens of different collaboration platforms available. There are numerous packages for the sales and CRM functions. There are specialized packages for accounting, human resources and operations.

All of these new software packages are made for the cloud. This makes them cheaper and, for the most part, easier to learn and more intuitive to use. They are readily customizable by each company to fit its culture and needs. For the most part the new world of software is built from the user interface backwards, meaning that the user interface is made as easy and intuitive as possible. The older platforms were built with centralized functions in mind first and ended up requiring a lot of training for users.

All of this means that over the next decade we are going to see a huge shift in the corporate landscape. We are going to see a handful of cloud providers performing all of the network functions instead of hundreds of box makers. And in place of a few huge software companies we are going to see thousands of specialized software companies selling into niche markets and giving companies cheaper and better software solutions.

The Start of the Information Age

A few weeks ago I wrote a blog about the key events in the history of telecom. Today I am going to take a look at one of those events: how today's information age sprang out of a paper published in 1948 titled "A Mathematical Theory of Communication" by Claude Shannon. At the time of publication he was a young 32-year-old researcher at Bell Laboratories.

But even prior to that paper he had made a name for himself at MIT. His master's thesis there, "A Symbolic Analysis of Relay and Switching Circuits," pointed out that the logical values of true and false could easily be represented by a one and a zero, and that this would allow physical relays to perform logical calculations. Many have called it the most important master's thesis of the 1900s.

The thesis was a profound breakthrough at the time and was written a decade before the development of computer components. It showed how a machine could be made to perform logical calculations and was not limited to doing mathematical calculations. This made Shannon the first to realize that a machine could be made to mimic the actions of human thought, and some call this work the genesis of artificial intelligence. It provided a push to develop computers since it made clear that machines could do a lot more than merely calculate.

Shannon joined Bell Labs as WWII was looming and went to work immediately on military projects like cryptography and designing fire control systems for antiaircraft guns. But in his spare time Shannon worked on an idea that he referred to as a fundamental theory of communications. He saw that it was possible to 'quantify' knowledge through the use of binary digits.

This paper was one of those rare breakthroughs in science that is truly original rather than a refinement of earlier work. Shannon saw information in a way that nobody else had ever thought of it. He showed that information could be quantified in a very precise way. His paper was the first place to use the word 'bit' to describe a discrete piece of information.
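
For the mathematically curious, here is a tiny sketch (my own illustration) of the quantity at the heart of the paper: the entropy of a source, measured in bits. A fair coin works out to exactly one bit per flip, while a heavily biased coin carries much less information.

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy: -sum(p * log2(p)), skipping zero-probability symbols."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy_bits([0.5, 0.5]))   # 1.0 bit per symbol for a fair coin
print(entropy_bits([0.9, 0.1]))   # about 0.47 bits for a heavily biased coin
```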

For those who might be interested, a copy of this paper is here. I read this many years ago and I still find it well worth reading. The paper was unique and so clearly written that it is still used today to teach at MIT.

What Shannon had done was to show how we could measure and quantify the world around us. He made it clear how all measurable data in the world could be captured precisely and then transmitted without losing any precision. Since this was developed at Bell Labs, one of the first applications of the concept was to telephone signals. In the lab they were able to convert a voice signal into a digital code of 1s and 0s and then transmit it to be decoded somewhere else. The results were just as predicted: the voice signal that came out at the receiving end was as good as what was recorded at the transmitting end. Until then, voice signals had been analog, which meant that any interference on the line between callers would affect the quality of the call.
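
Here is a toy sketch of that digitization idea, my own illustration rather than anything resembling the Bell Labs equipment: sample an analog waveform, quantize each sample into a fixed number of bits, and reconstruct essentially the same waveform at the far end no matter how noisy the line in between.

```python
import math

def quantize(samples, bits=8):
    """Map samples in [-1.0, 1.0] to integer codes 0 .. 2**bits - 1."""
    levels = 2 ** bits
    return [min(levels - 1, int((s + 1.0) / 2.0 * levels)) for s in samples]

def reconstruct(codes, bits=8):
    """Map integer codes back to the middle of their quantization step."""
    levels = 2 ** bits
    return [(c + 0.5) / levels * 2.0 - 1.0 for c in codes]

analog = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(8)]  # a snippet of a 440 Hz tone
codes = quantize(analog)        # the 1s and 0s that would go over the wire
decoded = reconstruct(codes)
print(max(abs(a - d) for a, d in zip(analog, decoded)))  # error stays within about 1/256
```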

But of course, voice is not the only thing that can be encoded as digital signals, and as a society we have converted about everything imaginable into 1s and 0s. Over time we applied digital coding to music, pictures, film and text, and today everything on the Internet has been digitized.

The world reacted quickly to Shannon's paper and accolades were everywhere. Within two years everybody in science was talking about information theory and applying it to their particular fields of research. Shannon was not comfortable with the fame that came from his paper and he slowly withdrew from society. He left Bell Labs and returned to teach at MIT. But he slowly withdrew even from there and stopped teaching by the mid-60s.

We owe a huge debt to Claude Shannon. His original thought gave rise to the components that let computers ‘think’, which gave a push to the nascent computer industry and was the genesis of the field of artificial intelligence. And he also developed information theory which is the basis for everything digital that we do today. His work was unique and probably has more real-world applications than anything else developed in the 20th century.

The Technology Revolution in Health Care

I've always kept an eye on how technology is improving health care. I think, like a lot of baby boomers, I want to stay healthy and vigorous for as long as possible. It seems like I see a new article almost every day talking about how big data or some new monitor that's part of the Internet of Things is improving health care. Following are a few examples from very recent reading:

  • Enlitic is using the same sort of technology that can recognize faces on Facebook to make medical diagnoses. They are applying the technology to x-rays and other diagnostic tools to recognize cancer and other problems that might not always be obvious to a doctor. But the company is then using big data to take the diagnostic process further, looking at lab results, insurance claims and other records associated with a patient to try to refine a diagnosis. The software is only about a year old, but it's reported that it is already making big strides in providing accurate diagnoses. One use of this technology will be to identify cancers early. Possibly the biggest potential use of the software will be to provide diagnoses in third world countries where trained doctors are scarce.
  • Using a combination of wearables and big data, the Michael J. Fox Foundation is undertaking a large trial to gather data on people with Parkinson's disease. It has always been difficult for doctors to understand the stage of the disease in a given patient because the disease progresses in so many different ways. The trial is going to use smartwatches to monitor and report the movements of participants 24/7 and to gather other health data from thousands of Parkinson's sufferers, with the goal of building a database that will allow big data techniques to identify trends and data points about the progression of the disease. The hope is that this will allow doctors to better understand the condition of any given patient. Big data ought to be able to spot trends in the progression of symptoms that have baffled doctors.
  • In a totally different use of technology, IBM's Watson supercomputer and Siri are teaming up to bring the power of the supercomputer to the point of patient interaction. Apple and IBM are working together to create a platform that will assist doctors and first responders in making diagnoses and in gathering the needed data on a given patient, on site, anywhere. In essence they are bringing the diagnostic skills of a doctor into the field. The two companies believe that this same technology can be used in many other applications but are tackling mobile health first since they see it as a market in need of a solution. The two companies are providing the platform and have invited app writers to design programs that can take advantage of Watson's computing power in a medical setting.
  • In news that many people might find disturbing, it's been reported that medical insurance companies have been investigating the use of big databases that are commercially available from social media and other sources. These databases are already used to paint detailed pictures of us for advertising purposes, but some of this data can also tell a lot about our lifestyle. People don't seem to much mind that this data is used to predict the next car they will buy, but they are going to be a lot less comfortable if their insurance company looks to see how much alcohol they drink or whether they watch a lot of TV. I doubt that the insurance companies even know how they might use this data yet, but one would think it might eventually be used to set insurance rates that are specific to each person based upon their health profile.
  • Finally, at the other end of the spectrum from big data is the whole trend of developing tiny devices that can be used to monitor us. There has been a lot of progress made in developing smart pills that can be swallowed and that can provide feedback from the intestinal tract. There has also been a lot of progress in developing stick-on monitors with tiny chips that are allowing diagnostic tools to replace the maze of wires that have been connected to patients in the past. A friend of mine just visited an emergency room and was connected to tiny wireless monitors rather than the typical array of wires. And there has been significant research into tiny blood monitors that swim around inside the bloodstream. If anybody remembers the movie Fantastic Voyage you can see the potential for tiny medical devices inside the body that can find and fix problems at the cellular level. But we are probably still a long way from having tiny people inside of those devices!

Will 4K Video Make It?

It usually takes a while to see if a new technology gets traction with the public. For example, the 3D television craze of a few years ago fell flat on its face with the viewing public. And now 4K ultra high definition (UHD) video is making enough waves to get its real-world test in the marketplace.

The high-end TV makers certainly are pushing the technology, and 2.1 million UHD televisions were shipped in the second quarter of 2014, up from 1.6 million sets for all of 2013. Amazon announced a deal with Samsung to roll out worldwide availability of 4K video streams to Samsung smart TVs. Amazon announced earlier this year that it is building a UHD library by filming all of the original programming made for Amazon in UHD. Netflix is already offering Breaking Bad and House of Cards in UHD. Fox is marketing a set of 40 movies in UHD that includes Star Trek: Into Darkness.

But there are some obstacles to overcome before UHD becomes mainstream. The cameras and associated hardware and storage needed to film in UHD are expensive, so filmmakers are being cautious about converting to the technology until they know there is a market for it. But the big obstacle for UHD being universally accepted is getting the content into homes. There are issues of both bandwidth and quality.

Pure uncompressed UHD video is amazing. I saw a UHD clip of House of Cards at a trade show running next to an HD clip and the difference was astounding. But it is not practical to broadcast in uncompressed UHD, and the compression techniques in use today reduce the quality of the picture. The UHD being delivered by Netflix today is better than their HD quality, but not nearly as good as uncompressed UHD.

For those not familiar with them, compression techniques reduce the transmission size of video signals, which is necessary to make programming fit into channels on traditional cable systems. The same sorts of compression techniques are applied to video streams sent over the Internet by companies like Netflix and Amazon Prime. There are many different techniques used to compress video streams, but the one that saves the most bandwidth is called block matching, which finds and then re-uses similarities between video frames.
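
To show the idea rather than any particular codec, here is a toy sketch of block matching: for a block of pixels in the current frame, search a small window of the previous frame for the closest match, so an encoder could send just a motion vector and a small difference instead of raw pixels. Everything here is my own illustration.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized pixel blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
                          for a, b in zip(row_a, row_b))

def get_block(frame, top, left, size):
    return [row[left:left + size] for row in frame[top:top + size]]

def best_match(prev_frame, cur_block, top, left, size, search=2):
    """Return (cost, (dy, dx)) for the lowest-SAD block in the previous frame."""
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > len(prev_frame) or x + size > len(prev_frame[0]):
                continue
            cost = sad(cur_block, get_block(prev_frame, y, x, size))
            if best is None or cost < best[0]:
                best = (cost, (dy, dx))
    return best

# Two tiny 6x6 "frames" in which a bright 2x2 patch moves one pixel to the right.
prev_frame = [[0] * 6 for _ in range(6)]
cur_frame = [[0] * 6 for _ in range(6)]
for y in (2, 3):
    prev_frame[y][2] = prev_frame[y][3] = 200
    cur_frame[y][3] = cur_frame[y][4] = 200

block = get_block(cur_frame, 2, 3, 2)
print(best_match(prev_frame, block, 2, 3, 2))   # (0, (0, -1)): a perfect match one pixel to the left
```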

Bandwidth is another roadblock to UHD acceptance. Netflix reports that it requires a steady 15 Mbps download stream to bring UHD to a home. A significant percentage of American homes don't get enough bandwidth to view UHD. And even having enough bandwidth is no guarantee of a quality experience, as has been witnessed in Netflix's recent fights with Comcast and Verizon over the quality of SD and HD video streams. It was reported that even some customers who subscribed to 100 Mbps download products were not getting good Netflix streams.

There are also the normal issues we see in the television industry due to a lack of standards. Each manufacturer is coming up with a different way to make UHD work. For example, there are two different HDMI standards already in use by different TV manufacturers, and the prediction is that HDMI might need to be abandoned altogether as the industry works to goose better quality out of UHD using higher frame rates and enhanced color resolution. And this all causes confusion for homeowners and the companies that install high-end TVs.

But there is some hope that there will be new technologies and new compression techniques that can improve the quality and decrease the digital footprint of UHD streams. As an example, Faroudja Enterprises, owned by Yves Faroudja, one of the pioneers of HD television standards, announced that it has found some improvements that will greatly benefit UHD. His new technique will pre-process content before compression and post-process it after decompression to get better efficiency in the sharing of bits between frames. He believes he can reliably cut the size of video streams in half using the new technology. His process would also bring efficiencies to HD streams, which is good news for an Internet that is getting bogged down by video today.

Only time will tell if the technology is widely accepted. Certainly there is going to be demand from cinephiles who want the absolute best quality from the movies they watch. But we'll have to see if that creates enough demand to convince more filmmakers to shoot in the UHD format. Like many new technologies, there is a bit of the cart before the horse involved in bringing this fully to market. But there are many in the industry who are predicting that the extra quality that comes from UHD will make it a lasting technology.