The Status of the CAF II Deployments

The Benton Foundation noted last month that neither CenturyLink nor Frontier has met all of its milestones for deployment of CAF II. This funding from the FCC is supposed to be used to improve rural broadband to speeds of at least 10/1 Mbps. By the end of 2018, the CAF II recipients were to have completed upgrades to at least 60% of the customers in each state covered by the funding.

CenturyLink took funding to improve broadband in 33 states covering over 1 million homes and businesses. CenturyLink claims to have met the 60% milestone in twenty-two states but didn't make the goal in the remaining eleven: Colorado, Idaho, Kansas, Michigan, Minnesota, Missouri, Montana, Ohio, Oregon, Washington, and Wisconsin.

Frontier received CAF II funding to improve broadband to over 774,000 locations in 29 states. Frontier says it has met the milestone in 27 states but hasn't reached the 60% deployment milestone in Nebraska and New Mexico. A number of other large telcos took CAF II funding, including AT&T, Windstream, and Consolidated, and I have to assume that they've reported meeting the 60% milestone.

Back in 2014, when it looked like the CAF II program might be awarded by reverse auction, we helped a number of clients take a look at the CAF II service areas. In many cases, these are large rural areas that cover 50% or more of most of the rural counties in the country. Most of my clients were interested in the CAF II money as a funding mechanism to help pay for rural fiber, but all of the big telcos other than AT&T announced originally that they planned to upgrade existing DSL. AT&T announced a strategy early on to use fixed cellular wireless to satisfy its CAF II requirements. Since then, a few big telcos like Frontier and Windstream have said that they are also using fixed wireless to meet their obligations.

To us, the announcement that the telcos were going to upgrade DSL was an immediate red flag. In a lot of rural counties there are only a small number of towns, and those towns are the only places where the big telcos have DSLAMs (the DSL hubs). Rural telephone exchanges tend to be large, and the vast majority of rural customers have always been far out of range of DSL that originates in the small towns. One only has to go a few miles – barely outside the towns – to see DSL speeds fall off to nothing.
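That distance falloff is steep enough to sketch with a simple lookup. The breakpoints below are ballpark figures for ADSL2+ over copper, assumed for illustration rather than taken from any carrier's data:

```python
# Rough illustration of how ADSL2+ downstream speed falls off with copper
# loop length. These tiers are ballpark assumptions, not carrier measurements.
def estimated_dsl_mbps(loop_feet: int) -> float:
    """Approximate downstream speed for a given copper loop length."""
    tiers = [
        (3000, 24.0),   # close to the DSLAM: near full rate
        (6000, 15.0),
        (9000, 8.0),
        (12000, 4.0),
        (15000, 1.5),
    ]
    for max_feet, mbps in tiers:
        if loop_feet <= max_feet:
            return mbps
    return 0.0  # beyond roughly 15,000 feet the line effectively won't sync

print(estimated_dsl_mbps(2000))   # 24.0 - a customer in town
print(estimated_dsl_mbps(16000))  # 0.0 - a customer a few miles out
```

Under these assumptions, a customer even three miles from a town DSLAM is effectively out of reach of the 10/1 Mbps CAF II target.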

The only way to make DSL work in the CAF II areas would be to build fiber to rural locations and establish new DSL hub sites. As any independent telco that deployed DSL the right way can tell you, this is expensive because it takes a lot of rural DSLAMs to get within range of every customer. By electing DSL upgrades, the big telcos like CenturyLink and Frontier had essentially agreed to build a dozen or more fiber-fed DSLAMs in each of the rural counties covered by CAF II. My back-of-the-envelope math showed that was going to cost a lot more than what the companies were receiving from the CAF fund. Since I knew these telcos didn't want to spend their own money in rural America, I predicted execution failures for many of the planned DSL deployments.
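Here is a hedged version of that back-of-the-envelope math. Every unit cost below is an assumption chosen for illustration, not a figure from CenturyLink or Frontier:

```python
# Back-of-the-envelope cost to DSL-enable one rural county.
# All unit costs are illustrative assumptions.
fiber_cost_per_mile = 30_000    # assumed rural fiber construction cost per mile
dslam_cost = 15_000             # assumed cabinet + electronics per new hub site
dslams_per_county = 12          # "a dozen or more" new rural DSLAMs per county
fiber_miles_per_dslam = 8       # assumed fiber backhaul run to reach each hub

cost_per_county = dslams_per_county * (
    dslam_cost + fiber_miles_per_dslam * fiber_cost_per_mile
)
print(cost_per_county)  # 3060000 - roughly $3 million per county
```

Multiply a figure like that across the dozens of counties in a CAF II award and it's easy to see how the upgrade cost could swamp the subsidy.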

I believe the big telcos are now facing a huge dilemma. They've reached 60% of customers in many places (but not all). However, it is going to cost two to three times more per home to reach the remaining 40% of homes. The remaining customers are the ones on extremely long copper loops, and DSL is an expensive technology to use for reaching these last customers. A DSLAM built to serve the customers at the ends of these loops might only serve a few customers – and it's hard to justify the cost of the fiber and electronics needed to reach them.

I've believed from the beginning that the big telcos building DSL for the CAF II program would take the approach of covering the low-hanging fruit – those customers that can be reached by the deployment of a few DSLAMs in a given rural area. If that's true, then the big telcos aren't going to spend the money to reach the most remote customers, meaning a huge number of CAF II customers are going to see zero improvement in broadband. The telcos mostly met their 60% targets by serving the low-hanging fruit, and they are going to have a huge challenge meeting the next milestones of 80% and 100%.

Probably because I write this blog, I hear from folks at all levels of the industry about rural broadband. I’ve heard a lot of stories from technicians telling me that some of the big telcos have only tackled the low-hanging fruit in the CAF builds. I’ve heard from others that some telcos aren’t spending more than a fraction of the CAF II money they got from the FCC and are pocketing much of it. I’ve heard from rural customers who supposedly already got a CAF II upgrade and aren’t seeing speeds improved to the 10/1 threshold.

The CAF II program will be finished soon and I’m already wondering how the telcos are going to report the results to the FCC if they took shortcuts and didn’t make all of the CAF II upgrades. Will they say they’ve covered everybody when some homes saw no improvement? Will they claim 10/1 Mbps speeds when many households were upgraded to something slower? If they come clean, how will the FCC react? Will the FCC try to find the truth or sweep it under the rug?

Telecom R&D

In January AT&T announced the creation of the WarnerMedia Innovation Lab, a research group that will try to combine AT&T's technology advances with the company's huge new library of media content. The lab, based in New York City, will consider how 5G, the Internet of Things, artificial intelligence, machine learning, and virtual reality can work together to create new viewer entertainment experiences.

This is an example of a highly directed R&D effort aimed at specific results – in this case, the lab will be working on next-generation technologies for entertainment. This contrasts with labs that engage in basic research, which allows scientists to explore scientific theories. The closest we've ever come to basic research from a commercial company was Bell Labs, operated by the old Ma Bell monopoly.

Bell Labs was partially funded by the government and also got research funds from ratepayers of the nationwide monopoly telco. Bell Labs research was cutting edge and resulted in breakthroughs like the transistor, the charge-coupled device, Unix, fiber optics, lasers, data networking, and the discovery of the cosmic microwave background radiation that confirmed the big bang theory. The Lab created over 33,000 patents and its scientists won eight Nobel Prizes. I was lucky enough to have a tour of Bell Labs in the 80s, and I was a bit sad today when I had to look on the Internet to see if it still exists; it does – it's now called Nokia Bell Labs and operates at a much smaller scale than the original lab.

Another successor to Bell Labs is AT&T Labs, the research division of AT&T. The lab engages in a lot of directed research, but also in basic research – AT&T Labs is investigating topics such as the physics of optical transmission and the physics of computing. Since its creation in 1996, AT&T Labs has been issued over 2,000 US patents. The lab's directed research concentrates on the technical challenges of operating large networks and working with huge datasets. The Lab was the first to transmit 100 gigabits per second over fiber.

Verizon has also been doing directed research since the spin-off of Nynex with the divestiture of the Bell system. Rather than operate one big public laboratory the company has research groups engaged in topics of specific interest to the company. Recently the company chose a more public profile and announced the creation of its 5G Lab in various locations. The Manhattan 5G Lab will focus on media and finance tech; the Los Angeles lab will work with augmented reality (AR) and holograms; the Washington DC lab will work on public safety, first responders, cybersecurity, and hospitality tech; the Palo Alto lab will look at emerging technologies, education, and big data; and its Waltham, Massachusetts, lab will focus on robotics, healthcare, and real-time enterprise services.

Our industry has other labs engaged in directed research. The best known of these is CableLabs, the research lab outside Denver that was founded in 1988 and is jointly funded by the world’s major cable companies. This lab is largely responsible for the cable industry’s success in broadband since the lab created the various generations of DOCSIS technology that have been used to operate hybrid-fiber coaxial networks. CableLabs also explores other areas of wireless and wired communications.

While Comcast relies on CableLabs for its underlying technology, the company has also created Comcast Labs. This lab is highly focused on the customer experience; it developed Comcast's X1 settop box and created the integrated smart home product being sold by Comcast. Comcast Labs doesn't only develop consumer devices – it's also involved in software innovation efforts like OpenStack and GitHub development. The lab most recently announced a breakthrough that allows cable networks to deliver data speeds up to 10 Gbps.

Shrinking Competition for Transport

Bloomberg reported that CenturyLink and Alphabet are interested in buying Zayo. Zayo has been anticipated as the next fiber acquisition target ever since Level 3 merged with CenturyLink, because it is the largest remaining independent owner of fiber.

As you might expect, the biggest owners of fiber are the big telcos and cable companies. Consider the miles of fiber owned by the ten biggest fiber owners – I note these miles of fiber are from the end of 2017 and a few of these companies like Verizon have been building a lot of fiber since then.

AT&T – 1,100,000 miles
Verizon – 520,000 miles
CenturyLink / Level 3 – 450,000 miles
Charter – 233,000 miles
Windstream – 147,000 miles
Comcast – 145,000 miles
Frontier – 140,000 miles
Zayo – 113,000 miles
Cogent – 57,000 miles
Consolidated – 36,000 miles

You might wonder why this matters. First, Zayo is the largest company on the list whose only business is to sell transport. All of Zayo's fiber is revenue-producing. While the companies above it on the list have a lot more fiber, much of that fiber is last-mile fiber in neighborhoods, where there is not a lot of opportunity to sell access to others. The biggest independent fiber owner used to be Level 3, with 200,000 miles of revenue-producing fiber before they merged with CenturyLink.

The numbers on this chart don’t tell the whole story. Companies like Zayo also swap fiber with other networks. They may trade a pair of fibers on a route they own for a route elsewhere that they want to reach. These swapping arrangements mean the transport providers like Zayo, Cogent and Level 3 control a lot more fiber than is indicated by these numbers.

It matters because as soon as you get outside of the metropolitan areas there are not many options for fiber transport. A few years ago I helped a city look for fiber transport, and the three options they found that were reasonably priced were CenturyLink, Level 3, and Zayo. If CenturyLink buys Zayo, it will have purchased both competitors in this region and will have effectively eliminated fiber transport competition for this community. Without that competition, it's inevitable that transport prices will rise.

I think back to the early days of competition after the Telecommunications Act of 1996. I remember working with clients in the 1990s looking for fiber transport, and there were many cases where there was only one provider willing to sell transport to a community. If the sole provider was the local telco or cable company, it was likely that the cost of transport was four or five times more expensive than prices in nearby communities with more choices. When I worked with rural providers in the early 2000s, one of the first questions I always asked was about the availability of transport – because lack of transport sometimes killed business plans.

Since then there has been a lot of rural fiber built by companies like statewide fiber networks and others who saw a market for rural transport. Much of the rural construction was egged on by the need to get to cellular towers.

My fear is that we’ll slide back to the bad-old-days when rural fiber was a roadblock for providing broadband. I don’t so much fear for the most rural places because those fiber networks are owned by smaller companies and they aren’t going away. I fear more for places like county seats. I worked with a city in Pennsylvania a few years ago where there was a decent number of competitors for transport – Verizon, Zayo, Level 3 and XO. Since then Verizon bought XO and CenturyLink might own the other two. That city is not going to lose transport options, but the reduction from four providers to two giant ones almost surely means higher transport costs over time.

I am intrigued that Alphabet (the parent of Google Fiber) would look at buying an extensive fiber network like Zayo. Google is one of the biggest users of bandwidth in the country due to the web traffic to Google and YouTube. Its desire for fiber might be as simple as wanting to control the fiber supply chain it uses. If so, that's almost as disconcerting as CenturyLink buying Zayo, unless Google were to remain a fierce transport competitor.

ISPs Are Violating the Old Net Neutrality Rules

It's been just over a year since the FCC repealed net neutrality. The FCC's decision is being appealed, and oral arguments are underway as I write this blog. One would have to assume that until that appeal is finished the big ISPs will be on their best behavior. Even so, the press has covered a number of ISP actions during the last year that would have violated net neutrality if the old rules were still in place.

It’s not surprising that the cellular carriers were the first ones to violate the old net neutrality rules. This is the most competitive part of the industry and the cellular carriers are not going to miss any opportunity to gain a marketing edge.

AT&T is openly advertising that cellular customers can stream the company's DirecTV Now product without it counting against monthly data caps. Meanwhile, all of the competing video services like Sling TV, PlayStation Vue, YouTube TV, Netflix, or Amazon Prime count against AT&T data caps – and video can quickly kill a monthly data plan download allotment. AT&T's behavior is almost a pure textbook example of why net neutrality rules were put into place – to stop ISPs from putting competitors' products at an automatic disadvantage. AT&T is the biggest cellular provider in the country, and this creates a huge advantage for DirecTV Now. All of the major cellular carriers allow some video to not count against the monthly data cap, but AT&T is the only one pushing its own video product.
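A little arithmetic shows how fast video eats a data allotment. The bitrates and cap below are typical assumed values, not AT&T's published numbers:

```python
# How quickly streaming video exhausts a cellular data cap.
# Bitrate and cap figures are illustrative assumptions.
HD_MBPS = 5.0   # assumed bitrate of a high-definition stream

def gb_per_hour(mbps: float) -> float:
    # megabits/sec -> gigabytes/hour: x 3600 seconds, / 8 bits per byte,
    # / 1000 megabytes per gigabyte
    return mbps * 3600 / 8 / 1000

cap_gb = 22  # a common "unlimited before throttling" threshold at the time
print(round(gb_per_hour(HD_MBPS), 2))      # 2.25 GB per hour of HD video
print(round(cap_gb / gb_per_hour(HD_MBPS), 1))  # 9.8 hours of HD to hit the cap
```

Under these assumptions, a household streaming HD video hits the cap in well under ten hours, which is why exempting one service from the cap is such a powerful marketing lever.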

In November a large study of 100,000 cellphone users by Northeastern University and the University of Massachusetts showed that Sprint was throttling Skype. This is not something that the carrier announced, but it’s a clear case of pushing web traffic to the ‘Internet slow lane’. We can only speculate why Sprint would do this, but regardless of their motivation this is clearly a violation of net neutrality.

This same study showed numerous incidents where all of the major cellular carriers throttled video services at times. YouTube was the number one target of throttling, followed by Netflix, Amazon Prime, and the NBC Sports app. This throttling wasn't as widespread as Sprint's throttling of Skype, but the carriers must have algorithms in their networks that throttle specific video traffic when cell sites get busy. In contrast to the big carriers, the smaller independent cellular carrier C Spire had almost no instances of differentiation among video streams.

Practices that might violate net neutrality were not limited to cellular carriers. For example, Verizon FiOS recently began giving new broadband customers free Netflix for a year, and AT&T started giving free HBO to new customers last year. This practice is more subtle than the cellular carriers' blocking or throttling of content. One of the purposes of net neutrality was to stop ISPs from discriminating among web traffic, and by giving away free video services the landline broadband companies are promoting specific web services over competitors.

This doesn’t sound harmful, but the discussions in the net neutrality order warned about a future where the biggest ISPs would partner with a handful of big web services like Facebook or Netflix to the detriment of all smaller and start-up web services. A new video service will have a much harder time gaining customers if the biggest ISPs are giving away their competitors for free.

There are probably more bad practices going on that we don’t know about. We wouldn’t have known about the cellular throttling of services without the big study. A lot of discrimination can be done through the network routing practices of the ISPs, which are hard to prove. For example, I’ve been seeing a growing number of complaints from consumers recently who are having trouble with streaming video services. If you recall, net neutrality first gained traction when it became known that the big ISPs like Comcast were blatantly interfering with Netflix streaming. There is nothing today to stop the big ISPs from implementing network practices that degrade certain kinds of traffic. There is also nothing stopping them from demanding payments from web services like Netflix so that their product is delivered cleanly.

Interestingly, most of the big ISPs made a public pledge to not violate the spirit of net neutrality even if the rules were abolished. That seems to be a hollow promise meant to soothe a public worried about the end of net neutrality. The FCC implemented net neutrality to protect the open Internet. The biggest ISPs have virtual monopolies in most markets, and public opinion is rarely going to change an ISP's behavior if the ISP decides that the monetary gain is worth the public unhappiness. Broadband customers don't have a lot of options to change providers, and cable broadband is becoming a near-monopoly in urban areas. There is no way for a consumer to avoid the bad practices of the cellular companies if they all engage in the same bad practices.

There is at least some chance that the courts will overturn the FCC repeal of net neutrality, but that seems unlikely to me. If the ISPs win in court and start blocking and discriminating against web traffic, it seems likely that some future FCC or Congress will reinstitute net neutrality and start the fight all over again. Regardless of the court's decision, I think we are a long way from hearing the last about net neutrality.

Why Big ISPs Screw Up

I was recently joking with a colleague about some of the really dumb things that some of the big ISPs do – those things that get negative press or that make customers permanently dislike them. But after thinking about it a bit, it struck me that bad behavior by the big companies is almost inevitable – it’s a challenge for a big company to not behave badly. I can think of a number of reasons for the poor decisions that big ISPs seem to repeatedly make.

Good Intentions but Bad Policies. Some of the ugliest stories in the press from our industry have come from Comcast customer service. Customers have recorded customer service representatives saying some of the most awful things. Comcast executives have often been quoted as saying that they want to do a better job of customer service and the company has thrown big bucks at the issue over the last decade to try to improve.

But Comcast has corporate policies that undo all of their good intentions. Some of the most memorable press stories came from customer service reps who are compensated for stopping customers from disconnecting service or for upselling additional services to customers. Win-back programs and upselling are good for the Comcast bottom line, but they tempt poorly paid customer service reps into saying anything to stop a customer from disconnecting or entice a customer service rep to sneak unwanted products onto a customer’s bill. The bottom line is that policies that promote good behavior go out the window when employees are compensated for bad behavior.

Decentralized Management. I remember reading last year about the big push at Verizon to bring all of its fiber assets under one regime. The company built fiber over the years under a lot of different business units, and there has been no centralized fiber inventory. This has to have cost Verizon a fortune over the years in lost revenue opportunities on fiber that already exists. An outsider like me looks at this and wonders why something this common-sense wasn't done fifteen years ago. Unfortunately, poor communication inside a company is a natural consequence of operating different business units, each in its own silo. The FiOS folks never knew what the enterprise or the cellular folks were doing, and so the company frittered away the huge synergies that could have been gained by making all fiber available to all business units. We've seen attempts at the big ISPs to make the kind of consolidation Verizon is doing, but if they aren't careful, in time they'll slip back to the old bad practices.

No Emphasis on Being Good Corporate Citizens. I worked at Southwestern Bell pre-divestiture. There were some negative sides to being a giant monopoly, but the company also put a lot of effort into instilling the message internally that the company had a nationwide mandate to do a good job. The company constantly extolled its accomplishments to employees and effectively indoctrinated them into being good citizens. I happened to sit close to the person who took 'executive' complaints – complaints from customers that had escalated to upper management. The company made a legitimate effort to deal with every problem that made it that high in the company. Employees were rewarded for loyalty and good behavior with lifetime jobs – the joke was that phone company people had bell-shaped heads.

Big ISPs no longer promise jobs for life and working at a big ISP today is just a job. I know a mountain of people who currently work for the big ISPs and none of them have that same esprit de corps that was normal at Ma Bell.

Quarterly Profit-Driven. A lot of the problems I see from the big ISPs come from the modern emphasis on quarterly earnings. This emphasis permeates down into the ranks of management at an ISP. For example, a department head might decide to not make a major repair or upgrade if it causes a blip in the department's budget. The constant drive for quarterly earnings improvements drives ISPs to lay off needed technicians to meet an earnings goal. It drives companies to raise rates even when costs haven't increased. It makes companies chase shiny new ideas like 5G even if the technology is half-baked and premature. Unfortunately, Wall Street matters more than both employees and customers – and it shows.

We Need a Challenge Process for Broadband Maps

We all know that the broadband maps maintained by the FCC are terrible. Some of the inaccuracy is due to the fact that the data in the maps come from ISPs. For example, there are still obvious examples where carriers are reporting their marketing speeds rather than actual speeds, which they might not know. Some of the inaccuracy is due to the mapping rules, such as showing broadband by census block – when a few customers in a block have decent broadband it’s assumed that the whole census block has it. Some of the inaccuracy is due to the vagaries of technology – DSL can vary significantly from one house to the next due to the condition of local copper; wireless broadband can vary according to interference and impediments in the line-of-sight. The maps can be wrong due to bad behavior of an ISP who has a reason to either overstate or understate their actual speeds (I’ve seen both cases).
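The census-block problem is easy to sketch. This toy example, with made-up data, mimics how one well-served location can mark an entire block as served:

```python
# Sketch of the census-block rule: if an ISP reports serving any one location
# in a block at a given speed, the whole block is mapped at that speed.
# All records below are invented for illustration.
from collections import defaultdict

# (census_block, location_id, reported_downstream_mbps)
reported_locations = [
    ("block_A", "loc_1", 25),  # one house near the DSLAM
    ("block_A", "loc_2", 3),   # a neighbor on a long loop
    ("block_A", "loc_3", 0),   # no service at all
    ("block_B", "loc_4", 4),
]

block_speeds = defaultdict(int)
for block, _loc, mbps in reported_locations:
    # The map effectively keeps only the best reported speed per block.
    block_speeds[block] = max(block_speeds[block], mbps)

# block_A shows as served at 25 Mbps even though two of its three homes are not.
print(dict(block_speeds))  # {'block_A': 25, 'block_B': 4}
```

That single `max` per block is the heart of the overstatement: the map records the best-case house and erases everyone else in the block.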

None of this would matter if the maps were just our best guess at the state of broadband in the country. Unfortunately, the maps are used for real-life purposes. First, the maps are used by the FCC and state legislatures to develop and support various policies related to broadband. It's been my contention for a long time that the FCC has been hiding behind the bad maps because those maps grossly overstate the availability of rural broadband. The FCC has a good reason to do so because they are tasked by Congress with fixing inadequate broadband.

Recently the maps have been used in a more concrete way, to define where grants can or cannot be awarded. Used in this manner, the maps identify groups of homes that don't already have adequate broadband. The maps were the basis for determining eligible areas for the CAF II reverse auction and now for the e-Connectivity grants.

This is where bad mapping really hurts. Every rural county in the country knows where broadband is terrible or non-existent. When I show the FCC maps to local politicians they are aghast at how inaccurate the maps are for their areas. The maps often show large swaths of phantom broadband that doesn’t exist. The maps will show towns that supposedly have universal 25/3 Mbps broadband or better when the real speeds in the town are 10 Mbps or less. The bad maps hurt every one of these places because if these maps were accurate these places would be eligible for grants to help fix the poor broadband. A lot of rural America is being royally screwed by the bad maps.

Of even more dismay, the maps seem to be getting worse instead of better. For example, in the CAF II program, the big telcos were supposed to bring broadband of at least 10/1 Mbps to huge swaths of rural America. A lot of the areas covered by the CAF II program are not going to see any improvement in broadband speeds. In some cases, the technology used, such as AT&T's fixed cellular, can't deliver the desired speeds to customers who live too far from a tower. I also believe we're going to find that in many cases the big carriers are electing to only upgrade the low-hanging fruit and are ignoring homes where the CAF upgrade costs too much. These carriers are likely to claim on the maps that they've made the upgrades rather than admit to the FCC that they pocketed the subsidy money instead of spending it to improve broadband.

There have been a few suggested fixes for the problem. A few states have tried to build their own, more accurate broadband maps, but they can't get access to any better data from the ISPs. A few states are now asking citizens to run speed tests to try to map the real broadband situation, but unless the speed tests are run under specific and rigorous conditions they won't, by themselves, serve as proof of poor broadband.
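Here is a sketch of what 'specific and rigorous conditions' might mean in practice: require multiple wired tests per location and take the median rather than the best result. The record format, field names, and thresholds are all hypothetical:

```python
# Filtering crowdsourced speed tests before treating them as mapping evidence.
# Field names and thresholds are hypothetical, and the data is made up.
from statistics import median

tests = [
    {"loc": "farm_1", "wired": True,  "down_mbps": 4.1},
    {"loc": "farm_1", "wired": True,  "down_mbps": 3.8},
    {"loc": "farm_1", "wired": False, "down_mbps": 1.2},  # Wi-Fi test: excluded
    {"loc": "farm_2", "wired": True,  "down_mbps": 9.0},  # only one sample
]

MIN_SAMPLES = 2  # require repeat tests so one bad run can't define a location
by_loc = {}
for t in tests:
    if t["wired"]:  # only count tests run on a wired connection
        by_loc.setdefault(t["loc"], []).append(t["down_mbps"])

evidence = {loc: median(v) for loc, v in by_loc.items() if len(v) >= MIN_SAMPLES}
print(round(evidence["farm_1"], 2))  # 3.95 - median of the wired tests
print("farm_2" in evidence)          # False - not enough samples yet
```

A process like this is what would let a county's speed-test drive stand up as evidence rather than anecdote.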

The easiest fix for the problem is staring us right in the face. Last year the FCC got a lot of complaints about the soon-to-be-awarded Mobility Fund Phase II grants. This money was to go to cellular carriers to bring cell coverage to areas that don’t have it. The FCC maps used for those efforts were even worse than the broadband maps and the biggest cellular companies were accused of fudging their coverage data to try to stop smaller rival cell providers from getting the federal money. The outcry was so loud that the FCC created a challenge process where state and local governments could challenge the cellular coverage maps. I know a lot of governments that took part in these challenges. The remapping isn’t yet complete, but it’s clear that local input improved the maps.

We need the same thing for the FCC broadband maps. There needs to be a permanent challenge process where a state or local government can challenge the maps and can supply what they believe to be a more accurate map of coverage. Once counties understand that they are getting bypassed for federal grant money due to crappy maps they will jump all over a challenge process. I know places that will go door-to-door if the effort can help bring funds to get better broadband.

Unfortunately, only the FCC can order a challenge process, and I don't think they will even consider it unless they get the same kind of outcry that came with the Mobility Fund Phase II. It's sad to say, but the FCC has a vested interest in burying its head in the sand and pretending that rural broadband is okay – otherwise it has to try to fix it.

I think states ought to consider this. If a state undertakes a program to allow challenges to the map, then governors and federal legislators can use the evidence gathered to pressure the USDA to accept alternate maps for areas with poor broadband. These challenges have to come from the local level where people know the broadband story. This can’t come from a state broadband mapping process that starts with carrier data. If local people are allowed to challenge the maps then the maps will get better and will better define areas that deserve federal grants. I believe a lot of county governments and small towns would leap at the opportunity to tell their broadband story.

Comcast’s Quiet Expansion

It’s been conventional wisdom in the industry that cable companies stick to their historic cable system boundaries and don’t really expand much. In much of the country, this is well understood and everybody can point to customers that have lived for decades just a house or two past the end of the coaxial cable network.

However, not all cable companies have stuck with this historic entrenchment. A good case in point is Comcast, which passed 53.8 million homes in 2013 but had grown that to 57.5 million passings by the end of 2017. A few of the 3.7 million new passings came from the purchase of small cable systems, but most came through the growth of the Comcast network.

Many of the new passings came about as the result of the continued growth of urban America. As a country we’re still seeing rural residents migrate to urban centers – which are growing while rural America is mostly stagnant or even shrinking. Recent years have seen some of the largest ever growth in new housing construction – 1.5 million new living units over the last year – and Comcast gets its share of these opportunities in its franchise areas.

But the company is also expanding outward from its core cable franchise areas where that makes sense. This has mostly been done quietly with a street added here, a small neighborhood added there, and new subdivisions always pursued; Comcast is obviously looking around for growth when it can be done affordably.

The most surprising source of Comcast growth comes from expansion into areas served by other cable companies. Historically there was a gentleman’s agreement in the cable industry to not poach on neighboring franchises, but Comcast is no longer sticking to that industry norm. Over the last few years, Comcast has gotten franchises to operate in communities already served by other cable companies.

In 2017 Comcast got a franchise in Rochester, New Hampshire in an area already served by Atlantic Broadband. In 2018 Comcast got franchises in Waterford and New London, Connecticut in areas also served by Atlantic Broadband. Last year Comcast also got franchises to operate in five communities in Pennsylvania served by Blue Ridge Cable – Warwick Township, Warwick Borough, Ephrata Township, Ephrata Borough, and Lititz.

Incumbent cable companies have rarely competed with each other. One of the few exceptions was Midcontinent, which overbuilt CableONE in Fargo, North Dakota in 2013. There are also two overbuilders that have built competing cable networks – RCN and WideOpenWest – but those companies started as overbuilders and were never incumbent providers.

For now, it looks like Comcast might be going after these markets to get lucrative business customers. For instance, New London, Connecticut is the home to two colleges and the Coast Guard Academy. There are some large businesses and medical centers in some of the towns in Pennsylvania. Even if Comcast only goes after large businesses that can be a big blow to the smaller cable companies already serving these markets. When I create business plans I always refer to the revenues from the few biggest customers in a community as ‘home-run’ revenues because just a few customers can make or break a business plan. Comcast will do great harm to its neighbors if they pick off their home-run customers.

Comcast has gotten so large that they probably don’t care any longer about the historic gentleman’s agreements that put a fence around a franchise area. Comcast is under constant pressure to grow revenues and profits, and it’s almost inevitable that they’ll chase anything they view as low-hanging fruit. This is one of the characteristics of companies that become virtual monopolies – they almost can’t stop themselves from engaging in business practices that make money. A company as big as Comcast doesn’t make all of its decisions at the corporate level – rather, it gives revenue and earnings targets to different parts of the company, and those business units often decide to chase revenues in ways the parent might not have dictated. In many big corporations it is the bonus structure, rather than corporate policy, that drives local decisions.

It will be interesting to see how this might change the nature of cable company cooperation. The cable companies have been incredibly effective in presenting a unified message across the country when lobbying at the federal, state, and local levels – what was good for one was good for all. But that’s no longer the case if Comcast starts competing with smaller neighboring cable companies. We saw this same phenomenon a few decades ago in the telephone industry when the small telcos started lobbying separately from the big companies. The small and large telcos still sometimes agree on issues, but often they do not. It’s almost inevitable that the unified voice of the cable industry can’t survive competition between cable companies – but I also suspect Comcast doesn’t care about that.

What’s the Future for Big Towers?

Late last year AT&T announced that it had contracted for the construction of hundreds of new big cellular towers through Tillman Infrastructure. AT&T and Verizon jointly struck a deal to build with Tillman in 2017, and by late last year some of the new towers came online. This doesn’t sound like big news because towers are built every year – but these new towers were built to directly compete with and replace existing big towers. AT&T’s announcement was a warning to existing tower owners – lower your prices or we’ll bypass you.

You can’t blame AT&T and Verizon for this because they pay some of the highest prices of any telecom product to hang radios on and bring bandwidth to big towers. To a large degree, this is a problem of their own making, and the history of big towers is a great example of economics gone awry.

When the two companies first got into the cellular business they mostly built their own towers. There were some tall towers in existence – some built to support public safety radio networks and many more that were part of the AT&T, MCI, and Verizon microwave backbone networks. You might remember the towers with the big horn antennas. When AT&T Long Lines started to replace microwave backhaul with fiber in the 1980s, it sold the whole tower network to a newly formed company, American Tower. American Tower went on to remove the big horn antennas and leased space on these towers back to AT&T and Verizon for cellular use.

Within a few years, both big cellular carriers agreed to lease towers almost everywhere from American Tower and a few other big tower companies. At the time, both AT&T and Verizon were spinning off huge cash from the rapidly growing cellular business and they both decided to avoid the capital costs of building towers and allowed others to invest in the key infrastructure component of cellular networks. Both carriers also made similar choices about allowing others to construct the fiber needed to connect to their cell sites. Their decision to avoid capital costs turns out to have been a giant mistake in the long run.

Today, cellular companies are feeling huge pressure from competition as the prices of cellular plans have tumbled. Had the big carriers decided years ago to own their key infrastructure – towers and fiber – they would have minimal costs for operating these assets today. Instead, they are paying ever-escalating prices for tower space and fiber transport.

AT&T is now demanding big reductions in tower space rental prices. Building the new towers is an obvious threat that the company is willing to bypass anybody who won’t cut prices. A few hundred new towers is barely a blip in the tower market, but the AT&T message is clear. Last year Verizon used the same tactic to put pressure on fiber providers to lower transport costs – at the risk of Verizon building fiber to their towers and bypassing existing fiber.

All of this is happening at a time when we’re also seeing the proliferation of small cell sites. When I look at the architecture of cellular networks, I see that a significant number of tall towers could be replaced with a network of small cell sites. The cellular network today is really two separate networks. There is the network built to carry cellular traffic along major highways – you see these towers every few exits along every interstate. These towers are not likely to go away; in fact, the tall towers are needed to provide coverage across large stretches of highway.

But there are a lot of cellular towers that have been built to serve where people live and work. There has been a long-standing unease in many communities about having the big towers in somebody’s back yard. Over time the cellular companies can make many of these towers obsolete as the smaller cell sites take over. (Of course, there is also now unease about having a lot of smaller towers in neighborhoods).

The big tower companies understand this transition. American Tower is leading the way in acquiring pole rights and is building electronics vaults along city streets for small cell sites to support 5G. Like other parts of the telecom market, the cell tower market segment is facing big changes. Just five years ago the big cellular carriers, the tower companies, and the fiber transport companies were all making big money from the cellular market. Today, all are feeling the pinch due to the advent of cellular price competition. It’s going to be interesting to see if AT&T and Verizon make the same choice all over again and lease small cell sites rather than building themselves.

Looking Back at the Net Neutrality Order

Chairman Ajit Pai used three arguments to justify ending net neutrality. First, he claimed that the net neutrality rules then in effect were a disincentive for big ISPs to make investments and that ending net neutrality would lead to a boom in broadband investment. He also argued that ending net neutrality would free the big ISPs to make broadband investments in rural parts of the US that were underserved. Finally, he argued that the end of net neutrality would spark the growth of telecom jobs. It’s been two years since he used those arguments to justify the repeal of net neutrality, and it’s easy to see that none of those things have come to pass.

The investment claim is easy to check. The big ISPs are starting to release their 2018 financial results, and it looks like capital spending in 2018 – the first year after the end of net neutrality – was lower than in 2017. We’ve already heard from Comcast and Charter that capital spending was down in 2018 compared to 2017. The industry analyst firm MoffettNathanson has already predicted that capital spending for the four biggest cable companies – Comcast, Charter, Altice, and CableONE – will drop by another 5.8% in 2019. Anybody who watches the cable companies understands that they all just made big investments in upgrading to DOCSIS 3.1 and that capital spending ought to drop significantly for the next several years.

MoffettNathanson also predicts that wireline capital spending for Verizon and AT&T will drop from $20.3 billion in 2018 to $19.6 billion in 2019. The press is also full of articles lamenting that investments in 5G by these companies are far smaller than industry vendors had hoped for. It seems that net neutrality had no impact on telecom spending (as anybody who has spent time at an ISP could have told you). It’s virtually unheard of for regulation to drive capital spending.

The jobs claim was a ludicrous one because the big companies have been downsizing for years and have continued to do so after net neutrality was repealed. The biggest layoff came from Verizon in October 2018 when the company announced that it was eliminating 44,000 jobs and transferring another 2,500 to India. This layoff is an astronomical 30% of its workforce. AT&T just announced on January 25 that it would eliminate 4,600 jobs, the first part of a 3-year plan to eliminate 10,000 positions. While the numbers are smaller for Comcast, they laid off 500 employees on January 4 and also announced the close of a facility with 405 employees in Atlanta.

Pai’s claim that net neutrality was stopping the big ISPs from investing in underserved areas might be the most blatantly false claim the Chairman has made since he took the Chairman position. The big ISPs haven’t made investments in rural America in the last decade. They have been spending money in rural America in the last few years – but only funds handed to them by the FCC through the CAF II program to expand rural broadband and the FCC’s Mobility Fund to expand rural cellular coverage. I’ve been hearing rumors all over the industry that most of the big ISPs aren’t even spending a lot of the money from those two programs – something I think will soon surface as a scandal. There is no regulatory policy that is going to get the big ISPs to invest in rural America and it was incredibly unfair to rural America for the Chairman to imply they ever would.

Chairman Pai’s arguments for repealing net neutrality were all false and industry insiders knew it at the time. I probably wrote a dozen blog posts about the obvious falsehoods being peddled. The Chairman took over the FCC with the goal of eliminating net neutrality at the top of his wish list and he adopted these three talking points because they were the same ones being suggested by big ISP lobbyists.

What bothers me is that this is not how regulation is supposed to work. Federal and state regulatory agencies are supposed to gather the facts on both sides of a regulatory issue, and once they choose a direction they are expected to explain why. The orders published by the FCC and other regulatory bodies act much like court orders in that the language in these orders is then part of the ongoing record used later to understand the ‘why’ behind an order. In later years, courts rely on the discussion in regulatory orders to evaluate disputes arising under the new rules. The order that repealed net neutrality sadly repeats the same falsehoods that were used to justify the repeal.

There are always two sides to every regulatory issue, and there are arguments that could be made against net neutrality. However, the Chairman and the big ISPs didn’t want to publicly make the logical arguments against net neutrality because they knew those arguments would be unpopular. For example, there is a legitimate argument to be made for allowing ISPs to discriminate against certain kinds of web traffic – any network engineer will tell you that it’s nearly mandatory to give priority to some bits over others. But the ISPs know that making that argument makes it sound like they want the right to shuttle customers into the ‘slow lane’, and that’s a PR battle they didn’t want to fight. Instead, telecom lobbyists cooked up the false narrative peddled by Chairman Pai. They hoped the public would swallow these false arguments rather than argue against net neutrality on its merits.

Broadband Usage Continues to Grow

The firm OpenVault, a provider of software that measures data consumption for ISPs, reported that the average monthly data use by households grew from 201.6 gigabytes in 2017 to 268.7 gigabytes in 2018 – a growth rate of 33%. The company also reported that the median use per household grew from 103.6 gigabytes in 2017 to 145.2 gigabytes in 2018 – a growth rate of 40%. The median represents the midpoint of users, with half of all households above it and half below.

To some degree, these statistics are not news because we’ve known for a long time that household broadband usage – both total downloads and desired speeds – has been doubling roughly every three years since the early 1980s. The growth in 2018 was actually a little faster than that historical average, and if the 2018 growth rate were sustained, in three years usage would grow to roughly 235% of today’s level. What I find most impressive about these new statistics is the magnitude of the annual change – the average home used 67 more gigabytes of data per month in 2018 than the year before, a number that would have seemed unbelievable only a decade ago when the average household used a total of only 25 gigabytes per month.
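The arithmetic above is easy to verify. Here is a minimal Python check using the OpenVault averages quoted in this post; the 235% figure comes from compounding the rounded 33% annual rate for three years, so the exact factor shifts slightly depending on rounding:

```python
# Figures reported by OpenVault (average monthly household usage, in GB).
avg_2017 = 201.6
avg_2018 = 268.7

# Year-over-year growth rate (roughly 33%).
growth = avg_2018 / avg_2017 - 1

# Absolute increase: roughly 67 more GB per month than the year before.
increase_gb = avg_2018 - avg_2017

# If that growth rate were sustained for three years, usage would compound
# to roughly 235% of today's level.
three_year_factor = (1 + growth) ** 3

print(f"annual growth: {growth:.1%}")
print(f"monthly increase: {increase_gb:.1f} GB")
print(f"three-year factor: {three_year_factor:.2f}x")
```

Note that the 33% observed in 2018 outpaces the long-term doubling-every-three-years trend, which implies an annual factor of only about 26% (2^(1/3) ≈ 1.26).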

There are still many in the industry who are surprised by these numbers. I’ve heard people claim that, now that homes are watching all the video they want, the rate of growth is bound to slow down – but if anything, the rate of growth seems to be accelerating. We also know that cellular data consumption is now doubling every two years.

This kind of growth has huge implications for the industry. From a network perspective, this kind of bandwidth usage puts a big strain on networks. Typically the most strained part of a network is the backbones that connect to neighborhood nodes. That’s the primary stress point in many networks, including FTTH networks, and when there isn’t enough bandwidth to a neighborhood then everybody’s bandwidth suffers. Somebody that designed a network ten years ago would never have believed the numbers that OpenVault is reporting and would likely not have designed a network that would still be sufficient today.

One consequence of the bandwidth growth is that it’s got to be driving homes to change to faster service providers when they have the option. A household that might have been happy with a 5 Mbps or 10 Mbps connection a few years ago is likely no longer happy with it. This has to be one of the reasons we are seeing millions of homes each year upgrade from DSL to cable modems in metropolitan areas. The kind of usage growth we are seeing today has to be accelerating the death of DSL.

This growth should also be affecting policy. The FCC set the definition of broadband at 25/3 Mbps in January of 2015. If that was a good definition in 2015, then the definition should have been increased to 63 Mbps by 2019. At the time the FCC set that threshold, I thought it was a little generous. In 2014, as the FCC was having this debate, the average home downloaded around 100 gigabytes per month, and the right definition of broadband was probably more realistically 15 – 20 Mbps – the FCC was obviously being a little forward-looking in setting the definition. Even so, the definition should still be increased – if the right definition of broadband in 2014 was 20 Mbps, then today it ought to be 50 Mbps.
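The 63 Mbps figure falls directly out of the doubling-every-three-years rule. A quick sketch of that arithmetic, assuming the rule holds exactly over the four years from January 2015 to 2019:

```python
# The FCC set the broadband definition at 25 Mbps download in January 2015.
# If demand doubles every three years, the implied 2019 definition is
# 25 * 2^(4/3), or about 63 Mbps.
base_mbps = 25
years = 2019 - 2015

implied_definition = base_mbps * 2 ** (years / 3)

print(f"implied 2019 definition: {implied_definition:.0f} Mbps")
# → implied 2019 definition: 63 Mbps
```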

The current FCC is ignoring these statistics for policy purposes – if they raise the definition of broadband, then huge numbers of homes will be classified as not having broadband. The FCC does not want to do that since they are required by Congressional edict to make sure that all homes have broadband. When the FCC set a realistic definition of broadband in 2015 they created a dilemma for themselves. That 2015 definition is already obsolete, and if they don’t change it, in a few years it is going to be absurd. One only has to look forward three years from now, when the definition of broadband ought to be 100 Mbps.

These statistics also remind us of the stupidity of handing out federal subsidies to build technologies that deliver less than 100 Mbps. We still have two more years of CAF II construction to upgrade speeds to an anemic 10 Mbps. We are still handing out new subsidies to build networks that can deliver 25/3 Mbps – networks that are obsolete before they are completed.

Network designers will tell you that they try to design networks to satisfy demands at least seven years into the future (which is the average life of many kinds of fiber electronics). If broadband usage keeps doubling every three years, then looking forward seven years to 2026, the average home is going to download 1.7 terabytes per month and will expect download speeds of 318 Mbps. I wonder how many network planners are using that target?

The final implication of this growth is for data caps. Two years ago when Comcast set a terabyte monthly data cap they said that it affected only a few homes – and I’m sure they were right at the time. However, the OpenVault statistics show that 4.12% of homes used at least a terabyte per month in 2018, almost double the 2.11% in 2017. We’ve now reached the point where the terabyte data cap is going to have teeth, and over the next few years a lot of homes are going to pass that threshold and have to pay a lot more for their broadband. While much of the industry has a hard time believing the growth statistics, I think Comcast knew exactly what they were doing when they established the terabyte cap that seemed so high just a few years ago.