The Price for Triple Play?

I was recently working on a project for a client who is thinking about competing in a new city and wants to understand the real market rates customers are paying. We solicited copies of bills from existing subscribers to the incumbent telco and cable company to find out.

I doubt that anybody would be surprised by what we found, but it was good to be reminded of the billing practices of the big ISPs. Here are a few of the things we found:

  • Both incumbents use promotional rates to provide lower prices to new customers or to existing customers who are willing to negotiate and sign up for a term contract. Promotional discounts were all over the board and seemed mostly to range between $5 and $25 per month, but one customer was getting a $60 discount on a $180 monthly bill.
  • Both incumbents also offer bundling discounts, but they were applied erratically. Our sample of bills was not a statistically valid sample, but roughly half of the bills we saw had a bundled discount while other customers buying the same products were not getting a discount.
  • The cable incumbent offers the typical three tiers of service offered by most cable companies. While every cable customer had one of these three packages, we surprisingly didn’t see any two customers paying the same price.
  • The cable company had programming fees that were separate from the base programming charges – one fee to cover local programming costs and another labeled as a sports fee. These were not always billed at the same rate and they were not being billed to all customers with the same packages.
  • There was also a wide range of fees for settop boxes and cable modems from the cable company and for WiFi modems from the telco.
  • What surprised me most was how widely the taxes varied from bill to bill. Customers with the same products often had tax charges several dollars apart. This makes me wonder why more taxing authorities aren’t auditing bills from time to time to see if all of the tax due to them is even being billed.
  • Nowhere on the bills was any customer told the speed of their broadband products.
  • There were obvious billing errors. For example, I saw a bill charging the subscriber line charge to somebody who doesn’t have a telephone line. They probably had one in the past and are still paying $6.50 per month long after they dropped their landline.

I hadn’t looked at that many customer bills from a single market for a while. I’ve always known that prices vary by customer, but I didn’t expect them to vary this much. My primary take-away from this analysis is that there is no one price for telecom products. I hear clients all of the time saying things like “My primary competition comes from a $49 broadband connection from the cable company”. But that’s not really true if most people are paying something other than $49. Some customers have discounts that lower the price while others may be paying more after considering ancillary fees.
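
To illustrate, here is a minimal sketch of the kind of tabulation we did, with entirely hypothetical bill amounts, showing how the "effective" price for the same nominal $49 product spreads out once discounts and ancillary fees are applied:

```python
# Minimal sketch with hypothetical bill amounts: the "effective" price a
# customer pays is the list price minus promotional and bundling discounts,
# plus the ancillary fees that get tacked on.
bills = [
    {"list": 49.00, "promo": 10.00, "bundle": 0.00, "fees": 14.00},
    {"list": 49.00, "promo": 0.00,  "bundle": 5.00, "fees": 11.50},
    {"list": 49.00, "promo": 25.00, "bundle": 5.00, "fees": 9.00},
]

effective = [b["list"] - b["promo"] - b["bundle"] + b["fees"] for b in bills]
print(sorted(effective))                 # [28.0, 53.0, 55.5]
print(max(effective) - min(effective))   # a $27.50 spread on the "same" product
```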

The bills were confusing, even to me, and I know what to look for. It would be easy, for example, for a customer to think that a local programming fee or an FCC line charge is a tax rather than revenue that is kept by the service provider. Both ISPs mixed these fees in with actual taxes on the bill, making it impossible for the average customer to distinguish between a tax and a fee that is just a piece of a product billed under a different name.

These bills also made me wonder if the corporate staff of these big ISPs realize the wide range of prices that customers are paying. In many cases there were fees that could have been billed that weren’t. And there was a wide variance in tax billing that would make a corporate CFO cringe.

These bills reinforce the advice I always give to clients. I think customers like transparency, and the best bill is one that informs customers about what they are buying. In this market most customers could not tell you what they are paying for the various products. Bills can be simple yet informative, and some of my clients have wonderful bills. After seeing the billing mess from these two big ISPs, I think honest, straightforward billing is another advantage for a competitor.

Metering Broadband

A lot of the controversy about Comcast data caps disappeared last year when they raised the monthly threshold from 300 gigabytes to 1 terabyte. But lately I’ve been seeing folks complaining about being charged for exceeding the 1 TB cap – so Comcast is still enforcing its data cap rules.

In order to enforce a data cap an ISP has to somehow meter the usage, and it appears that in a lot of cases ISPs do a lousy job of measuring it. Not all ISPs have data caps. The biggest ISPs that have them include Comcast, AT&T, CenturyLink for DSL, Cox and Mediacom. But even these ISPs don’t enforce data caps everywhere – Comcast, for example, doesn’t enforce them where it competes directly against Verizon FiOS.

Many customer home routers can measure usage, and there are reports of cases where Comcast data usage measurements are massively different from what is being seen at the home. For example, there are customers who have seen big spikes in measured usage from Comcast at times when their routers were disconnected or when power was out to the home. There are many customers who claim the Comcast readings always greatly exceed what they are seeing at their home routers.

Data caps matter because customers that exceed the caps get charged a fee. Comcast charges $10 for each 50 GB of monthly usage over the cap. Mediacom has the same fees, but with much smaller data caps, such as a 150 GB monthly cap on customers with a 60 Mbps product.
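
The overage arithmetic is simple. Here's a sketch, with the assumption that a partial 50 GB block is billed as a full block (actual carrier rounding rules may differ):

```python
import math

def overage_charge(usage_gb, cap_gb, block_gb=50, block_price=10.00):
    """Monthly overage fee: $10 per 50 GB block over the cap. Billing any
    partial block as a full block is an assumption here."""
    over_gb = max(0, usage_gb - cap_gb)
    return math.ceil(over_gb / block_gb) * block_price

print(overage_charge(1300, 1000))   # Comcast-style 1 TB cap -> $60
print(overage_charge(400, 150))     # Mediacom-style 150 GB cap -> $50
```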

It’s not hard to imagine homes now exceeding the Comcast data cap limit. Before I left Comcast a year ago they said that my family of three was using 600 – 700 GB per month. Since I didn’t measure my own usage I have no idea if their numbers were inflated. If their measurements were accurate, it’s not hard to imagine somebody with several kids at home exceeding the 1 TB. The ISPs claim that only a small percentage of customers hit the data cap limits – but in a world where data usage keeps growing exponentially each year, more homes will hit the limit as time goes by.

What I find interesting is that there is zero regulation of the ISP data ‘meters’. Every other kind of meter that is used as a way to bill customers is regulated. Utilities selling water, electricity or natural gas must use meters that are certified to be accurate. Meters on gas pumps are checked regularly for accuracy.

But there is nobody monitoring the ISPs and the way they measure data usage. The FCC effectively washed its hands of regulating broadband when it killed Title II regulation. Theoretically the Federal Trade Commission could tackle the issue, but it is not required to do so. The FTC regulates interactions with customers in all industries and can select the cases it wants to pursue.

There are a few obvious reasons why the readings from an ISP would differ from those at a home, even under ideal conditions. ISPs measure usage at their network hub while a customer measurement happens at the home. There are always packets lost in the network due to interference or noise, particularly with older copper and coaxial networks. The ISP counts all data passing through the hub as usage even though some of those packets never make it to the customer. But when you read some of the horror stories where homes that don’t watch video see daily readings from Comcast of over 100 GB of usage, you know that there is something wrong with the way Comcast is measuring. It has to be a daunting task to measure the usage directed to thousands of users simultaneously, and Comcast obviously has problems in its measurement algorithms.
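
As a back-of-envelope check, with the loss rate purely an assumption, ordinary packet loss can't come close to explaining those readings:

```python
home_delivered_gb = 600    # roughly what a household router might record
loss_rate = 0.02           # assumed 2% packet loss -- pessimistic for a decent plant

# Lost packets are counted at the hub but never delivered to the customer.
phantom_gb = home_delivered_gb * loss_rate
print(phantom_gb)          # ~12 GB of extra measured usage per month

# Even a pessimistic loss rate adds only a few percent -- nowhere near the
# 100 GB per day some households report, which points at the metering itself.
```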

I’ve written about data caps before. It’s obvious that the caps are just a way for ISPs to charge more money, and it’s a gigantic amount of extra revenue if Comcast can bill $10 per month extra to only a few percent of its 23 million customers. Anybody who understands the math behind the cost of broadband knows that a $10 extra charge for 50 GB of usage is almost 100% profit. It doesn’t cost the ISP anything close to $10 to deliver the first terabyte, let alone the incrementally tiny cost of another 50 GB. And there certainly is no cost at all if the Comcast meters are billing for phantom usage.
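
The revenue math is straightforward. Treating "a few percent" as 3% purely for illustration:

```python
subscribers = 23_000_000    # the Comcast customer count cited above
share_over_cap = 0.03       # "a few percent" -- treated as 3% for illustration
fee_per_customer = 10.00    # a single 50 GB overage block

monthly_revenue = subscribers * share_over_cap * fee_per_customer
print(monthly_revenue, monthly_revenue * 12)   # ~$6.9M per month, ~$83M per year
```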

I don’t know that there is any fix for this. However, it’s clear that every customer being charged for exceeding data caps will switch to a new ISP at the first opportunity. The big ISPs wonder why many of their customers loathe them, and this is just one more way for a big ISP to antagonize its customers. It’s why every ISP that builds a fiber network to compete against a big cable company understands that they will almost automatically get 30% of the market from customers who have come to hate their cable ISP.

Fiber Electronics and International Politics

In February six US intelligence agencies warned Americans against using cellphones made by Huawei, a Chinese manufacturer. They warned that the company is “beholden” to the Chinese government and that we shouldn’t trust their electronics.

Recently Rep. Liz Cheney introduced a bill in Congress that would prohibit the US government or any contractors working for it from using electronics from Huawei or from another Chinese company, ZTE Corp. Additionally, any US military base would be prohibited from using any telecom provider that has equipment from these two vendors anywhere in its network.

For anybody who doesn’t know these two companies, they manufacture a wide array of telecom gear. ZTE is one of the five largest cellphone makers in the world. They also make electronics for cellular networks, FTTP networks and long-haul fiber routes. The company sells under its own name, but also OEMs equipment for a number of other vendors, which might make it hard for a carrier to know if they have gear originally manufactured by the company.

Huawei is even larger and is the largest maker of telecom electronics in the world, having passed Ericsson a decade ago. The company’s founder has close ties to the Chinese government, and its electronics have been used to build much of the huge wireless and FTTP networks in China. The company makes cellphones and FTTP equipment and is also an innovator in equipment that can be used to upgrade cable HFC networks.

This is not the first time that there have been questions about the security of electronics. In 2014 Edward Snowden released documents showing that the NSA had been planting backdoor software into Cisco routers being exported overseas from the US and that these backdoors could be used to monitor internet usage and emails passing through the routers. Cisco said it had no idea that this practice was occurring and that the software was being added to its equipment after it left the company’s control.

Huawei and ZTE Corp also say that they are not monitoring users of their equipment. I would assume that the NSA and FBI have some evidence that at least the cellphones from these companies can be used to somehow monitor customers.

It must be hard to be a telecom company somewhere outside of the US and China because our two countries make much of the telecom gear in wide use. I have to wonder what a carrier in South America or Africa thinks about these accusations.

I have clients who have purchased electronics from these two Chinese companies. In the FTTP arena the two companies have highly competitive pricing, which is attractive to smaller ISPs updating their networks to fiber. Huawei also offers several upgrade solutions for HFC cable networks that are far less expensive than the handful of other vendors offering solutions.

The announcements by the US government create a quandary for anybody who has already put this gear into their network. At least for now the potential problems from using this equipment have not been specifically identified. So a network owner has no way of knowing if the problem is only with cellphones, if it applies to everything made by these companies, or even if these warnings are political in nature rather than technical.

Any small carrier using this equipment likely cannot afford to remove and replace the electronics in their networks. The folks I know using ZTE FTTP gear speak highly of the ease of using the electronics – which makes sense since these two companies have far more installed fiber customers worldwide than any other manufacturer.

Somebody with this equipment in their network faces several quandaries. Do they continue to complete networks that already use this gear, or should they introduce a second vendor into their network – an expensive undertaking? Do they owe any warnings to their own customers (at the risk of losing them)? Or do they do anything at all?

For now all that is in place is a warning from US intelligence agencies not to use the gear, but there is no prohibition against doing so. And even should the bill pass, it would only prohibit ISPs using the gear from providing telecom services to military bases – a business line that is largely handled by the big telcos with nationwide government contracts.

I have no advice to give clients on this other than to strongly consider not choosing these vendors for future projects. If the gear is as bad as it’s being made to sound then it’s hard to understand why the US government wouldn’t ban it rather than just warn about it. I can’t help but wonder how much of this is international wrangling over trade rather than any specific threat or risk.

The Seasonality Dilemma

One issue that I often see neglected in looking at financial projections for potential fiber projects is seasonality. Seasonality is the term used among utilities to describe groups of customers who are not full-time residents.

There are a lot more kinds of seasonal customers than many people realize. Consider the following:

  • Tourist areas are the ones most used to this idea. While most tourist areas get busy in the summer, there are also ski towns that are busy only in the winter. These communities are now finding that those who visit or have seasonal homes there expect to have broadband.
  • College students. College towns face the unusual challenge that students not only leave for the summer, but because there is a big annual turnover of students each year, much student housing sits vacant during that time.
  • Snowbirds are those who go south for the winter, but they come from somewhere in the north, and I have clients in farming communities that see a big outflow of residents every winter.
  • While it’s not purely a seasonality issue, communities near military bases often face a similar issue. They experience high churn among customers and requests to put service on hold during deployments.

ISPs face some interesting challenges with seasonality. Consider college towns. They lose significant numbers of customers every summer, and not just graduating students, but also those who will be moving to a new apartment or home in the fall. Then all of the students come back at once at the end of August and expect to be connected immediately.

Students create several challenges for an ISP. First, a fiber overbuilder might not be well known and so has to market hard during that period so that new students know there is an alternative. There is also the issue of making many connections in a short period of time. Students are also a billing challenge and it’s not unusual for students to run out of money before the end of a school year. I have one client that offers a special discounted rate for the school year to students who will prepay.

Tourist areas are a challenge because a lot of customers will strongly resist having to pay for broadband and other triple play products for the months they are gone. And unlike with schools, it’s not unusual in tourist areas for customers to be gone for more of the year than they are present. This creates a financial challenge for an ISP. It’s hard enough to justify the cost of adding a new customer to a fiber network. It’s even harder to justify making that same investment to get only a half year or less of revenue from each seasonal customer.

I’ve seen ISPs deal with this in several different ways, none of which are totally satisfactory. Some ISPs let seasonal customers disconnect and then charge a reconnect fee when they want service again. I know ISPs who charge a small monthly ‘maintenance’ fee that keeps service alive in the offseason at a greatly reduced rate. These plans don’t usually include cable TV, which relieves the ISP from paying for programming that nobody is watching. I also know a few ISPs that try to make seasonal customers pay for the whole year.

Communities that lose resident snowbirds are starting to see the same requests to suspend charges for service while residents leave for the winter.

Most communities don’t have a major seasonal issue. But for those that do, it’s important to anticipate this issue when predicting possible costs to build the network versus the revenues that will be used to pay for it. It’s a lot harder to justify building a new network if a significant percentage of the customers don’t want to pay for a whole year of service.

The Migration to an All-IP Network

Last month the FCC recommended that carriers adopt a number of security measures to help block hacking in SS7 (Signaling System 7). Anybody with telephone network experience is familiar with the SS7 network. It has provided a second communication path that is used to improve call routing and to implement the various calling features such as caller ID.

Last year it became public that the SS7 network has some serious vulnerabilities. In Germany hackers were able to use the SS7 network to connect to and empty bank accounts. Those specific flaws have been addressed, but security experts look at the old technology and realize that it’s open to attack in numerous ways.

It’s interesting to see the FCC make this recommendation because there was a time when it looked like SS7 would be retired and replaced. I remember reading articles over a decade ago that forecast the pending end of SS7. At that time everybody thought that our legacy telephone network was going to be quickly migrated to an all-IP network and that older technologies like SS7 and TDM would be retired from the telecom network.

This big push to convert to an IP voice network was referred to by the FCC as the IP transition. The original goal of the transition was to replace the nationwide networks that connect voice providers. This nationwide network is referred to as the interconnection network, and every telco, CLEC and cable company that is in the voice business is connected to it.

But somewhere along the line AT&T and Verizon hijacked the IP transition. All of a sudden the transition became about converting last-mile TDM networks, with Verizon and AT&T wanting to tear down rural copper and largely replace it with cellular. This was not the intention of the original FCC plans. The agency wanted to require an orderly transition of the interconnection network, not the last-mile customer network. The idea was to design a new network that would better support an all-digital world while still connecting to older legacy copper networks until they reached the natural end of their economic lives. As an interesting side note, the same FCC has poured billions into extending the life of copper networks through the CAF II program.

Discussions about upgrading connections between carriers to IP fizzled out. The original FCC vision was to take a few years to study the best path to an all-IP interconnection network and then require telcos to move from the old TDM networks.

I recently had a client who wanted to establish an IP connection with one of the big legacy telcos, something I know is being done in some places. The telco told my client that it still requires interconnection using TDM, which surprised my client. This particular big telco was not yet ready to accept IP trunking connections.

I’ve also noticed that the costs for my clients to buy connections into the SS7 network have climbed over the past few years. That’s really odd when you consider that these are old networks and the core technology is decades old. These networks have been fully depreciated for many years and the idea that the cost to use SS7 is climbing is absurd. This harkens back to paying $700 per month for a T1, something that sadly still exists in a few markets.

When the FCC first mentioned the IP transition I would have fully expected that TDM between carriers would have been long gone by now. And with that would have gone SS7. SS7 will still be around in the last-mile network and at the enterprise level since it’s built into the features used by telcos and in the older telephone systems owned by many businesses. The expectation from those articles a decade ago was that SS7 and other TDM-based technologies would slowly fizzle as older products were removed from the market. An IP-based telecom network is far more efficient and cost effective and eventually all telecom will be IP-based.

So I am a bit puzzled about what happened to the IP transition. I’m sure it’s still being talked about by policy-makers at the FCC, but the topic has publicly disappeared. Is this ever going to happen or will the FCC be happy to let the current interconnection network limp along in an IP world?

The Trajectory of Cord Cutting

2017 was the year that cord cutting became a real phenomenon. The industry has been talking about cord cutting for around 5 years. In the beginning the phenomenon manifested as a slowing and then stalling of growth in cable subscribers. Many industry pundits opined a few years ago that cord cutting was a minor phenomenon because they believed that people couldn’t walk away from their favorite programming.

But in 2016 the industry as a whole lost a million customers. That sounds like a lot, but in an industry with roughly 90 million customers, the hope of the industry was that cord cutting would take decades to have any major bottom line impact. 2017 then saw a loss of 2.4 million customers and the whole industry now agrees that cord cutting is real and that it is accelerating.

The big question now is the future trajectory for cord cutting – how is this going to affect the industry over the next five years? We have past experience from watching another major telecom product take a nose dive. Back in the mid-1990s almost 99% of US homes had landlines. Today that number is down to just under 44% according to surveys done annually by the Centers for Disease Control and Prevention (CDC). The agency has been asking about landline penetration as part of a much broader survey for several decades.

I would venture to say that hardly anybody in the industry can easily tell you how fast we have been losing landlines. It’s something we all know about, but I know I had no idea about the rate of decline of landlines since the 1990s.

Just like with cable TV, in the early years the rate of landline loss was relatively slow. I remember being asked about landline losses in 1997, the year I started CCG Consulting. At that time the industry was losing around 1 million customers per year. But a lot of prognosticators predicted that landlines would collapse since everybody was going to switch to cellphones.

But the CDC statistics tell a different story. Those statistics show that by 2004 the industry still had a 93% market penetration. Since then there has been a steady decline of landlines that makes an almost straight-line graph ending at today’s penetration rate of 44%. I doubt that there were any industry experts in 2004 who would have predicted that there would still be a 44% penetration of landlines in 2017. During the 13-year period from 2004 to 2017 roughly 4.5 million households dropped landlines each year. The rate of loss is neither accelerating nor declining.
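
As a quick sanity check on that arithmetic (using roughly 120 million US households, which is my assumption for illustration):

```python
# Back-of-envelope check on the CDC trend: 93% penetration in 2004 down
# to 44% in 2017, spread over 13 years.
households = 120_000_000
pen_2004, pen_2017, years = 0.93, 0.44, 13

lost_per_year = households * (pen_2004 - pen_2017) / years
print(round(lost_per_year))   # ~4.5 million landlines dropped per year
```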

There is no reason to think that the decline of cable TV will happen in the identical fashion. But for the first five years of customer losses the two industries have nearly the same story. Losses started slowly, and even after five years the rate of loss of cable customers is still half of the annual loss of landlines.

The industries are also different. In telecom the two biggest phone companies at the beginning of the decline of landlines were Verizon and AT&T, and they have also been the biggest beneficiaries of the growth of the cellphones that replaced landlines. Both companies are larger and far more profitable now than they were in the mid-90s. We are unlikely to see the same thing happen in cable. It appears that cord cutters are fleeing to a wide array of programming alternatives – and most of those alternatives are not owned by the same companies that have been profiting from cable TV.

The cable companies are clearly losing customers and revenues. The two satellite TV companies alone lost 1.7 million customers just in 2017. Continued losses of that magnitude are going to quickly affect some of the biggest cable providers. The programmers are also losing paying customers at a rapid clip. When households flee to online video providers they replace traditional 200-channel lineups with much smaller ones, meaning that a lot of individual cable networks are bleeding customers.

What might make the difference between cable and landline industries is the way the industry is reacting to the losses. In the landline world we saw the emergence of lower-cost alternatives to telco landlines as the cable companies got into the business. Even today cable landlines mostly cost less than telco landlines. I would have to think that the ability for customers to cut costs helped to stave off landline losses.

But the cable industry seems to be reacting by raising rates even faster than it has historically. It looks like the programmers want to get as much money as possible out of the industry before it disappears. That mentality is pushing up cable rates faster than ever, and high prices seem to be the major motivation behind cord cutting. My guess is that if the cable industry stays on the same trajectory as today, it’s going to lose customers far faster than the historic drop in landlines. But my crystal ball is no better than anybody else’s, so like everybody else I’ll keep watching the statistics.

Is the FCC Disguising the Rural Broadband Problem?

Buried within the FCC’s February Broadband Deployment Report are some tables that imply that over 95% of American homes can now get broadband at speeds of at least 25/3 Mbps. That is drastically higher than the report just a year earlier. The big change is that the FCC is now counting fixed wireless and satellite broadband when compiling the numbers. This leads me to ask whether the FCC is purposefully disguising the miserable condition of rural broadband.

I want to start with some examples from this FCC map that derives from the data supporting the FCC’s annual report. I started with some counties in Minnesota that I’m familiar with. The FCC database and map claims that Chippewa, Lyon, Mille Lacs and Pope Counties in Minnesota all have 100% coverage of 25/3 broadband. They also claim that Yellow Medicine County has 99.59% coverage of 25/3 Mbps broadband and the folks there must be wondering who is in that tiny percentage without broadband.

The facts on the ground tell a different story. In real life, the areas of these counties served by the incumbent telcos CenturyLink and Frontier have little or no broadband outside of towns. Within a short distance from each town and throughout the rural areas of the county there is no good broadband to speak of – certainly not anything that approaches 25/3 Mbps. I’d love to hear from others who look at this map to see if it tells the truth about where you live.

Let me start with the FCC’s decision to include satellite broadband in the numbers. When you go to the rural areas in these counties practically nobody buys satellite broadband. Many tried it years ago and found that using it is a miserable experience. There are a few satellite plans that offer speeds as fast as 25/3 Mbps. But satellite broadband today has terrible latency, as high as 900 milliseconds, and anything over 100 milliseconds makes it hard or impossible to do any real-time computing. That means that on satellite broadband you can’t stream video. You can’t have a Skype call. You can’t connect to a corporate WAN and work from home or connect to online classes. You will have problems staying on many web shopping sites. You can’t even make a VoIP call.

Satellite broadband also has stingy data caps that make it impossible to use as a home broadband connection. Most of the plans come with a monthly data cap of 10 GB to 20 GB, and unlike cellular plans where you can buy additional data, the satellite plans cut you off for the rest of the month when you hit your cap. And even with all of these problems, it’s also expensive and is priced higher than landline broadband. Rural customers have voted with their pocketbooks: satellite broadband is not something many people are willing to tolerate.

Fixed wireless is a more mixed bag. There are high-quality fixed wireless providers who are delivering speeds as fast as 100 Mbps. But as I’ve written about, most rural fixed wireless delivers speeds far below this, and the more typical connection is somewhere between 2 Mbps and 6 Mbps.

There are a number of factors needed to make a quality fixed wireless connection. First, the technology must be only a few years old, because older radios were not capable of reaching 25/3 speeds. Customers also need a clear line-of-sight back to the transmitter and must be within a reasonable distance from a tower. This means that there are usually a significant number of homes in a wireless service area that can’t get any coverage due to trees or being behind a hill. Finally, and probably most importantly, the wireless provider needs a properly designed network and a solid backhaul data pipe. Many WISPs pack too many customers onto a tower and dilute the broadband. Many wireless towers are fed by multi-hop wireless backhaul, meaning the tower doesn’t have enough raw bandwidth to deliver a robust customer product.
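
Here is a minimal oversubscription sketch. Every number in it is an illustrative assumption rather than data from a real WISP, but it shows how quickly a loaded tower dilutes down to a few Mbps per user:

```python
# Illustrative numbers only -- not measurements from any particular WISP.
tower_backhaul_mbps = 100     # a multi-hop wireless feed to the tower
radio_capacity_mbps = 180     # combined sector capacity on the tower
subscribers = 150
busy_hour_share = 0.20        # share of subscribers active at the same time

usable_mbps = min(tower_backhaul_mbps, radio_capacity_mbps)
per_active_user = usable_mbps / (subscribers * busy_hour_share)
print(round(per_active_user, 1))   # ~3.3 Mbps per active user at peak
```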

In the FCC’s defense, most of the data about fixed wireless that feeds the database and map is self-reported by the WISPs. I am personally a big fan of fixed wireless when it’s done right and I was a WISP customer for nine years. But there are a lot of WISPs who exaggerate in their marketing literature and tell customers they sell broadband up to 25/3 Mbps when their actual product might only be a tiny fraction of those speeds. I have no doubt that these WISPs also report those marketing speeds to the FCC, which leads to the errors in the maps.

The FCC should know better. In those counties listed above I would venture to say that there are practically no households who can get a 25/3 fixed wireless connection, but there are undoubtedly a few. I know people in these counties gave up on satellite broadband many years ago. My conclusion from the new FCC data is that this FCC has elected to disguise the facts by claiming that households have broadband when they don’t. This is how the FCC is letting themselves off the hook for trying to fix the rural broadband shortages that exist in most of rural America. We can’t fix a problem that we won’t even officially acknowledge, and this FCC, for some reason, is masking the truth.

SDN Finally Comes to Telecom

For years we’ve heard that Software Defined Networking (SDN) is coming to telecom. There has been some movement in that direction in routing on long-haul fiber routes, but mostly this network concept is not being used in telecom networks.

AT&T just announced the first major deployment of SDN. They will be introducing more than 60,000 ‘white box’ routers into their cellular networks. White box means that the routers are essentially blank generic hardware that comes with no software or operating systems. This differs from the normal routers from companies like Cisco that come with a full suite of software that defines how the box will function. In fact, from a cost perspective the software in a traditional router costs a lot more than the hardware.

AT&T will now be buying low-cost hardware and loading their own software onto the boxes. This is not a new concept, and big data center companies like Facebook and Google have been doing it for several years. SDN lets a provider load only the software needed to support the functions they actually use. The data center providers say that simplifying the software saves them a fortune in power and air conditioning costs since the routers are far more efficient.

AT&T is a little late to the game compared to the big web companies, and it’s probably taken them a lot longer to develop their own proprietary suite of cell site software since it’s a lot more complicated than switches in a big data center. They wouldn’t want to hand their cell sites over to new software until it’s been tested hard in a variety of environments.

This move will save AT&T a lot of money over time. There’s the obvious savings on the white box routers. But the real savings is in efficiency. AT&T has a fleet of employees and contractors whose sole function is to upgrade cell sites. If you’ve followed the company you’ve seen that it takes them a while to introduce upgrades into their networks, as technicians often have to visit every cell site, each with different generations of operating hardware and software.

The company will still need to visit cell sites to make hardware changes, but the promise of SDN is that software changes can be implemented across the whole network in a short period of time. This means they can fix security flaws or introduce new features quickly. They will have a far more homogeneous network where cell sites use the same generations of hardware and software, which should reduce glitches and local problems. The company will save a lot on labor and contractor costs.

This isn’t good news for the rest of the industry. It means that Cisco and other router makers are going to sell far fewer telecom-specific routers. The smaller companies in the country have always ridden the coattails of AT&T and Verizon, whose purchases of switches and routers pulled down the cost of these boxes for everybody else. These big companies also pushed the switch manufacturers to constantly improve their equipment, and the volume of boxes sold justified the R&D the router manufacturers needed to do.

You might think that smaller carriers could also buy their own white box routers to save money. This looks particularly attractive since AT&T is developing some of the software collaboratively with other carriers and making the generic software available to everybody. But the generic base software is not the same software that will run AT&T’s new boxes. AT&T has undoubtedly sunk tens of millions into customizing the software further, and smaller carriers won’t have the resources to customize this software to make it fully functional.

This change will ripple through the industry in other ways. For years companies often hired technicians who had Cisco certification on various types of equipment, knowing that they understood the basics of how the software could be operated. But as Cisco and other routers are edged out of the industry there are going to be far fewer jobs for those who are Cisco certified. I saw an article a few years ago that predicted that SDN would decimate the technician work force by eliminating a huge percentage of jobs over time. AT&T will need surprisingly few engineers and techs at a central hub now to update their whole network.

We’ve known this change has been coming for five years, but now the first wave of it is here. SDN will be one of the biggest transformational technologies we’ve seen in years – it will make the big carriers nimble, something they have never been. And they are going to make it harder over time for all of the smaller carriers that compete with them – something AT&T doesn’t mind in the least.

The Demand for Upload Speeds

I was recently at a public meeting about broadband in Davis, California and got a good reminder of why upload speeds are as important to a community as download speeds. One of the people making public comments talked about how uploading was essential to his household and how the current broadband products on the market were not sufficient for his family.

This man needed good upload speeds for several reasons. First, he works as a photographer and takes pictures and shoots videos. He says that it takes hours to upload and send raw, uncompressed video to one of his customers and that the experience still feels like the dial-up days. Second, his full-time job is working as a network security consultant for a company that specializes in big data. As such he needs to send and receive large files, and his home upload bandwidth is also inadequate for that – forcing him to go to an office for work that could otherwise be done from home. Finally, his daughter creates YouTube content and has the same problem uploading it – which is a particular problem when her content deals with time-sensitive current events and waiting four hours to get the content to YouTube kills its timeliness.
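
Some simple arithmetic shows why those uploads feel like the dial-up days; the 50 GB file size is my assumption, purely for illustration:

```python
def upload_hours(file_gb, upload_mbps):
    """Hours to move a file at a sustained upload rate (protocol overhead
    ignored); the file sizes used here are assumptions for illustration."""
    bits = file_gb * 8 * 1000**3            # decimal GB to bits
    return bits / (upload_mbps * 1_000_000) / 3600

for mbps in (5, 20, 100):
    print(mbps, round(upload_hours(50, mbps), 1))
# A 50 GB batch of raw video: ~22.2 hours at 5 Mbps upload, ~5.6 hours at
# 20 Mbps, and ~1.1 hours at 100 Mbps.
```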

This family is not unusual any more. A decade ago, a photographer led the community effort to get faster broadband in a city I was working with. But he was the only one asking for faster upload speeds and most homes didn’t care about it.

Today a lot of homes need faster upload speeds. This particular family had numerous reasons including working from home, sending large data files and posting original content to the web. But these aren’t the only uses for faster upload speeds. Gamers now need faster upload speeds. Anybody who wants to remotely check their home security cameras cares about upload speeds. And more and more people are migrating to 2-way video communications, which requires those at both ends to have decent uploading. We are just now seeing the early trials of virtual presence, where communications will be by big-bandwidth virtual holograms at each end of the connection.

Davis is like many urban areas in that the broadband products available have slow upload speeds. Comcast is the cable incumbent, and while they recently introduced a gigabit download product, their upload speeds are still paltry. DSL is offered by AT&T and has even slower upload speeds.

Technologies differ in their ability to offer upload speeds. For instance, DSL is technically capable of sending data at the same speeds for upload and download. But DSL providers have elected to stress download speed, which is what most people value, so DSL products are set with small upload and a lot of download. It would be possible to let a customer vary the mix between upload and download speeds, but I’ve never heard of an ISP that offers this as an option.

Cable modems are a different story. Historically the small upload speeds were baked directly into the DOCSIS standard. When Cable Labs created DOCSIS they made upload speeds small in response to what cable companies asked of them. Until recently, cable companies had no option to increase upload speeds beyond the DOCSIS constraints. But Cable Labs recently amended the DOCSIS 3.1 standard to allow for much faster upload speeds of nearly a gigabit. The first release of DOCSIS 3.1 didn’t include this, but it’s now available.

However, a cable company has to make sacrifices in its network if it wants to offer faster uploads. It takes about 24 empty channels (meaning no TV signal) on a cable system to provide gigabit download speeds. A cable company would need to vacate many more channels of programming to also offer faster uploads, and I don’t think many of them will elect to do so. Programming is still king, and cable owners need to balance the demand for more channels against the demand for faster uploads.
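
The arithmetic behind the "about 24 channels" figure is roughly this, using the common approximation of about 38 Mbps of usable payload per 6 MHz channel at 256-QAM:

```python
# Rough channel arithmetic: each 6 MHz channel at 256-QAM carries roughly
# 38 Mbps of usable payload (an approximation).
per_channel_mbps = 38
download_channels = 24
print(per_channel_mbps * download_channels)   # ~912 Mbps -- the gigabit tier

# Faster uploads would require vacating still more channels that today carry
# TV programming, which is the tradeoff described above.
```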

Fiber has no real constraints on upload speeds up to the capability of the lasers. The common technologies used for residential fiber all allow for gigabit upload speeds. Many fiber providers set symmetrical speeds, but others have elected to limit upload speeds. The reason I’ve heard for that is to limit the attractiveness of their network to spammers and others who would abuse fast uploading. But even these networks offer upload speeds that are far faster than the cable company products.

As more households want to use uploading we are going to hear more demands for a faster upload option. But for now, if you want super-fast upload speeds you have to be lucky enough to live in a neighborhood with fiber-to-the-home.

The Looming Backhaul Crisis

Looking forward a few years, I think we are headed towards a backhaul crisis. Demand for bandwidth is exploding and we are developing last-mile technologies to deliver the needed bandwidth, but we are largely ignoring the backhaul network needed to feed customer demand. I foresee two kinds of backhaul becoming a big issue in the next few years.

First is intercity backhaul. I’ve read several predictions that we are already using most of the available bandwidth on the fibers that connect major cities and the major internet POPs. It’s not hard to understand why. Most of the fiber between major cities was built in the late 1990s or even earlier, and much of that construction was funded by the telecom craze of the 90s where huge money was dumped into the sector.

But there has been very little new fiber construction on major routes since then, and I don’t see any carriers with business plans to build more. You’d think that we could get a lot more bandwidth out of the existing routes by upgrading the electronics on those fibers, but that’s not how the long-haul fiber network operates. Almost all of the fiber pairs on existing routes have been leased out to various entities for their own private uses. The reality is that nobody really ‘owns’ these fiber routes since the routes are full of carriers that each have a long-term contract to use a few of the fibers. As long as any of these entities has enough bandwidth for their own network purposes they are not going to sink big money into upgrading to terabit lasers, which are still very expensive.

Underlying that is a problem that nobody wants to talk about. Many of those fibers are aging and deteriorating. Over time fiber runs into problems and gets opaque. This can come from having too many splices in the fiber, from accumulated microscopic damage from stress during fiber construction, or from temperature fluctuations. Fiber technology has improved tremendously since the 1990s – contractors are more aware of how to handle fiber during construction and the glass itself has been significantly improved by the manufacturers.

But older fiber routes are slowly getting into physical trouble. Fibers go bad or lose capacity over time. This is readily apparent when looking at smaller markets. I was helping a client look at fibers going into Harrisburg, PA, and the fiber routes into the city were all built in the early 90s and are experiencing regular outages. I’m not pointing out Harrisburg as a unique case, because the same is true for a huge number of secondary communities.

We are going to see a second backhaul shortage that is related to the intercity bandwidth shortage. All of the big carriers are talking about building fiber-to-the-home and 5G networks that are capable of delivering gigabit speeds to customers. But nobody is talking about how to get the bandwidth to these neighborhoods. You are not going to be able to feed hundreds of 5G fixed wireless transmitters using the existing bandwidth that is available in most places.

Today the cellular companies are paying a lot of money to get gigabit pipes to the big cell towers. Most recent contracts include the ability for these connections to burst to 5 or 10 gigabits. Getting these connections is already a challenge. Picture multiplying that demand by hundreds and thousands of new cell sites. To use the earlier example of Harrisburg, PA – picture somebody trying to build a 100-node 5G network there, each node with gigabit connections to customers. This kind of network might initially work with a 10 gigabit backhaul connection, but as bandwidth demand keeps growing (doubling every three years), it won’t take long until these 5G networks need multiple 10 gigabit connections, perhaps up to 100 gigabits.
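
The growth math is simple but sobering. Here is a sketch assuming demand that doubles every three years from a 10 gigabit starting point:

```python
import math

# Demand that doubles every three years, starting from a 10 Gbps feed.
start_gbps, doubling_years = 10, 3

years_to_100g = doubling_years * math.log2(100 / start_gbps)
print(round(years_to_100g, 1))   # ~10 years until the node needs ~100 Gbps

for year in range(0, 13, 3):
    print(year, round(start_gbps * 2 ** (year / doubling_years)))
# year 0: 10 Gbps, year 3: 20, year 6: 40, year 9: 80, year 12: 160
```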

Today’s backhaul network is not ready to supply this kind of bandwidth. You could build all of the fiber you want locally in Harrisburg to feed the 5G nodes, but that won’t make any difference if you can’t feed that whole network with sufficient bandwidth to get back to an Internet POP.

Perhaps a few carriers will step up and build the needed backhaul network. But I don’t see that multi-billion dollar per year investment listed in anybody’s business plans today – all I hear about are plans to rush to capture the residential market with 5G. Even if carriers step up and bolster the major intercity routes (and somebody probably will), that is only a tiny portion of the backhaul network that stretches to all of the Harrisburg markets in the country.

The whole backhaul network is already getting swamped due to the continued geometric growth of broadband demand. Local networks and backhaul networks that were robust just a few years ago can get overwhelmed by a continuous doubling of traffic volume. If you look at any one portion of our existing backhaul network you can already see the stress today, and that stress will turn into backhaul bottlenecks in the near future.