2015 Broadband Growth

One of the things I’ve figured out about the telecom industry is that statistics are often used to tell very different stories. Consider this example regarding wireline broadband adoption:

In December Pew Research released the results of a survey that suggested that overall wireline broadband adoption had dropped to 67% in 2015, down from a high of 70% in 2013. This was the first time I had ever heard any suggestion that the total number of landline broadband connections had flattened out, let alone dropped.

Pew went on to say that the main culprit for the drop in broadband adoption is broadband prices: a lot of homes feel they cannot afford a broadband connection and instead rely solely on broadband from their smartphones. That sounds plausible, and Pew was comparing to a very similar survey they had given in 2013.

But the Leichtman Research Group just released a report saying that the big cable companies added 3.3 million broadband customers in 2015. They said that during the year the large telcos lost 187,000 landline broadband connections, meaning an overall net increase of more than 3.1 million broadband connections for the year.

The Census estimates there were 124.6 million housing units in the country in 2015, so the big companies in total brought broadband to an additional 2.5% of the total market. That sure does not sound like a year in which broadband has declined as suggested by Pew. And Leichtman has shown total market growth for the last several years as well.
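The arithmetic behind those two paragraphs is easy to verify. A minimal check, using only the figures cited above:

```python
# Sanity check of the Leichtman / Census figures cited in the text.
cable_adds = 3_300_000       # broadband customers added by big cable companies in 2015
telco_losses = 187_000       # landline broadband connections lost by large telcos
housing_units = 124_600_000  # Census estimate of U.S. housing units in 2015

net_adds = cable_adds - telco_losses
penetration_gain = net_adds / housing_units

print(f"Net new connections: {net_adds:,}")               # Net new connections: 3,113,000
print(f"Share of housing units: {penetration_gain:.1%}")  # Share of housing units: 2.5%
```

The net gain of roughly 3.1 million connections against 124.6 million housing units is where the 2.5% figure comes from.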

In this case you have to believe the Leichtman numbers. They gather total subscriber numbers from all of the large carriers – cable companies and telcos. Since almost all of these companies are publicly traded, and since Wall Street keeps a close eye on subscribers, one has to think that the Leichtman numbers are pretty accurate.

On the other hand, the Pew numbers come from nationwide surveys. Pew did three surveys in 2015 with a total of 6,687 adult respondents. The 2013 numbers they are comparing to were based on surveys of 6,010 adults.

I have always been suspicious of nationwide surveys. Our firm conducts surveys and I have found that local surveys can be very accurate and the results can often be correlated with externally collected facts. For instance, I’ve had clients do surveys to find out how many customers their competition has in a market, and these surveys often prove themselves valid by also accurately showing the market penetration of my clients. That makes it easy to believe that the numbers for the other competitors in the market are also accurate.

I know that Pew is very careful about how they randomly choose survey subjects. For instance, they will call people with cellphones as well as those with landline telephones. If you crunch through the statistical formulas that describe the predicted accuracy of a nationwide survey, the Pew surveys should be very accurate.
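For the curious, the statistical formula in question is the textbook margin of error for a simple random sample. This is a simplification – Pew’s actual methodology involves weighting and design effects, which widen the real error band – but it shows why samples of this size should, in principle, pin down the adoption rate within a point or two:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample proportion.

    Uses p = 0.5, the worst case that maximizes the error band.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Pew sample sizes cited in the text
for year, n in [(2015, 6687), (2013, 6010)]:
    print(f"{year}: n = {n}, margin of error about {margin_of_error(n):.1%}")
    # 2015: roughly 1.2%; 2013: roughly 1.3%
```

By this simple math, a three-point drop (70% to 67%) is well outside the sampling error of either survey, which is exactly why the discrepancy with the carrier-reported subscriber counts is so puzzling.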

The Leichtman numbers are not a 100% count of broadband customers and only count the customers of the biggest broadband providers – but those providers are something like 95% of the whole market. I know enough about a lot of companies in the rest of the market, the smaller carriers, to know that many of them are still seeing healthy broadband customer growth.

I have no way to explain this difference, and I suspect that Pew can’t either. Their survey should be pretty accurate, yet sometimes nationwide surveys just don’t give accurate results. This can often be seen with elections, where different surveys given at almost the same time show fairly disparate predictions. The trouble is that surveys from groups like Pew influence decision makers, and there are now going to be those who think that broadband growth has topped out. I was just on a call last week where somebody mentioned the Pew numbers. And while the Pew count of total broadband users might not be totally accurate, their observation that some people are finding broadband increasingly expensive probably is valid. The problem is, you just can’t really know how many people that might be.

The Downside to Cloud Services

I have a client who has been buying a cloud service for about two years and has a number of reasons to be unhappy with the service. I’m not going to name the specific service because there aren’t many vendors in this particular space. But the issues my client is experiencing look to be common with a lot of cloud services.

My client buys this service to resell along with other services to his own customers. His number one complaint is that he never knows what service he is going to wake up to each day. The cloud software will be working great one day and then the vendor will implement a software change and all of a sudden he gets calls from his end-user customers that things have gone wrong. And inevitably the problem turns out to be something that the cloud service vendor has introduced into the service as a supposed upgrade or improvement.

This is not a new phenomenon, and anybody who purchased a voice switch, a cable TV headend, or a complex billing system can remember back to the days when this same sort of thing happened all the time. Carriers would shudder each time they got a software update from a vendor because it often caused more harm than good.

And the industry learned how to deal with this problem. First, carriers started to insist that vendors build test labs and try out new software in the lab rather than foisting it onto end users. Second, they insisted that software vendors issue updates in discrete releases so that each carrier could decide whether or not to install a new update.

I remember a time in the late 90s when CCG routinely recommended that our clients not install software updates. There were so many problems with new software releases that we found it was better to let an update hit the world and let other carriers debug each new release. My clients would purposely fall numerous software releases behind, but as long as they weren’t experiencing end-user issues they were happy.

But now my clients are starting to buy services in the cloud, and in doing so we have gone back to the 90s all over again. The biggest problem with most cloud vendors is that they only run one version of their software – the latest. The vendor will update the cloud software and every one of their customers will have the same version. This certainly makes life easier on the cloud vendor.

But unless the vendor has amazing software coders who never make mistakes (and that is never going to happen), the vendor can release an update with dreadful bugs in it, and the test lab for those bugs becomes the end-user customers of the carriers. A carrier might not even realize there was a software update until they get complaints from their own customers. And the situation is now much worse than in the old days, because one of the most common ways to fix this sort of problem used to be to reinstall an earlier version of the software that was known to work right – an option that disappears when the vendor only runs the latest version.

I guess that cloud service providers need to learn the same lessons that the other vendors in the carrier industry learned a few decades ago. Just because software is in the cloud doesn’t change good software practices – in fact it makes them even more important. A software vendor that uses end users as its software testing lab is going to get a horrible reputation and, in the long run, is not going to keep customers in the carrier space.

And so I hope that cloud software vendors will implement the same kinds of changes that the industry forced vendors to implement decades ago:

  • It should be the carrier’s choice about accepting a software upgrade. Updates should never be automatic. This means that the cloud vendor needs to keep multiple versions of their software available online.
  • Software vendors need to maintain a test lab of some kind. Most software ultimately controls hardware and the vendor ought to have a lab on which to make certain that changes in the software do what they are intended to do while not screwing up something else.
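The first bullet amounts to version pinning: the vendor keeps several releases online and routes each carrier to the release that carrier has tested and accepted. A minimal sketch of the idea – all names here are illustrative, not any real vendor’s API:

```python
# Hypothetical sketch of carrier-controlled versioning for a cloud service.
# Versions and carrier names are made up for illustration.

AVAILABLE_VERSIONS = ["4.1.0", "4.2.0", "4.3.0"]

# Each carrier pins the release it has tested and accepted;
# nothing moves without an explicit change to this table.
pinned = {
    "carrier_a": "4.2.0",  # accepted the latest stable release
    "carrier_b": "4.1.0",  # deliberately running one release behind
}

def version_for(carrier: str) -> str:
    """Route a carrier to its pinned version; never auto-upgrade."""
    version = pinned.get(carrier)
    if version not in AVAILABLE_VERSIONS:
        raise ValueError(f"{carrier} has no valid pinned version")
    return version

print(version_for("carrier_b"))  # 4.1.0
```

The point of the sketch is the policy, not the code: an upgrade happens only when the carrier changes its own pin, which restores the discrete-release discipline the industry enforced on switch and billing vendors years ago.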