AT&T’s Fiber Strategy

On the most recent earnings call with investors, AT&T's EVP and CFO John Stephens reported that AT&T has only 800,000 customers nationwide remaining on traditional DSL. That's down from 4.5 million DSL customers just four years ago. The company has clearly been working hard to migrate customers off the older technology.

The company overall has 15.8 million total broadband customers, including a net gain of 82,000 customers in the first quarter. This compares to overall net growth of only 114,000 customers for all of 2017. The company has obviously turned the corner and after years of flat results is adding broadband customers again. The overall number of AT&T broadband customers has been stagnant for many years – go back nearly a decade and the company had 15 million broadband customers, with 14 million of them on traditional DSL.

The 15 million customers not served by traditional DSL are served directly by fiber-to-the-premises (FTTP) or fiber-to-the-node (FTTN) – the company doesn't disclose the number on each technology. The FTTN customers are served with newer DSL technologies that bond two copper pairs. This technology generally relies on relatively short copper drops of less than 3,000 feet and can deliver download speeds above 40 Mbps. AT&T still has a goal to pass 12.5 million possible customers with fiber by the end of 2019, with an eventual goal to pass around 14 million customers.

The AT&T fiber buildout differs drastically from Verizon's FiOS build. Verizon built to serve large contiguous neighborhoods to enable mass marketing. AT&T instead is concentrating on three customer segments to reach its desired passings: business corridors, apartment complexes and, finally, homes and businesses that are close to its many existing fiber nodes. Homes close enough to one of these nodes can get fiber while those only a block away probably can't. It's an interesting strategy that doesn't lend itself to mass marketing, which is probably why the press has not been flooded with stories of the company's fiber expansion. With this buildout strategy I assume the company has a highly targeted marketing effort that reaches out only to locations it can easily reach with fiber.

To a large degree AT&T's entire fiber strategy is one of cherry picking. They are staying disciplined and are extending fiber to locations that are near their huge existing fiber networks that were built to reach large businesses, cell sites, schools, etc. I work across the country and I've encountered small pockets of AT&T fiber customers in towns of all sizes. The cherry picking strategy makes it impossible to map their fiber footprint since it consists of an apartment complex here and a small cluster of homes there. Interestingly, when AT&T reports these various pockets they end up distorting the FCC's broadband maps, since those maps count a whole census block as having gigabit fiber speeds if even one customer can actually get fiber.

Another part of AT&T’s strategy for eliminating traditional DSL is to tear down rural copper and replace DSL with cellular broadband. That effort is being funded to a large extent by the FCC’s CAF II program. The company took $427 million in federal funding to bring broadband to over 1.1 million rural homes and businesses. The CAF II program only requires AT&T and the other telcos to deliver speeds of 10/1 Mbps. Many of these 1.1 million customers had slow DSL with typical speeds in the range of 1 Mbps or even less.

AT&T recently said that they are not pursuing 5G wireless local loops. They’ve looked at the technology that uses 5G wireless links to reach from poles to nearby homes and said that they can’t make a reasonable business case for the technology. They say that it’s just as affordable in their expansion model to build fiber directly to customers. They also know that fiber provides a quality connection but are unsure of the quality of a 5G wireless connection. That announcement takes some of the wind out of the sails for the FCC and legislators who are pressing hard to mandate cheap pole connections for 5G. There are only a few companies that have the capital dollars and footprint to pursue widespread 5G, and if AT&T isn’t pursuing this technology then the whole argument that 5G is the future of residential broadband is suspect.

This is one of the first times that AT&T has clearly described their fiber strategy. Over the last few years I wrote blogs wondering where AT&T was building fiber, because outside of a few markets where they are competing with companies like Google Fiber it was hard to find any evidence of fiber construction. Instead of large fiber roll-outs across whole markets it turns out that the company has been quietly building a fiber network that adds pockets of fiber customers across their whole footprint. One interesting aspect of this strategy is that those who don't live close to an AT&T fiber node are not likely to ever get fiber from the company.

Comcast Broadband Bundles

Comcast recently announced unilateral broadband speed increases for some customers. Customers with current 60 Mbps service today are being increased to 150 Mbps, those with 150 Mbps are moving up to 250 Mbps, and those with 250 Mbps are being bumped up to 400 Mbps or 1 Gbps depending upon their cable package.

The Houston Chronicle reported that the speed upgrades are only available to customers who have a cable package and an X1 settop box. This article has spawned a number of outraged reactions from customers and industry journalists.

This is not news – in my experience it has been a long-term practice of the company. When there is an event like this speed increase, the practice just percolates up to the surface again. The company has been reserving their fastest broadband speeds for customers who buy cable TV for years. When I moved to Florida five years ago Comcast would not sell me standalone broadband any faster than 20 Mbps unless I purchased a cable package.

That speed was not adequate for my family and home office and so I was corralled into buying their basic TV package in order to get 100 Mbps broadband. They wouldn’t let me buy the faster standalone broadband at any price. The cable settop box went immediately into my closet and was never plugged in. The $20 basic TV package ended up costing me over $40 per month after layering on the settop box and local programming fees. I felt like I was being extorted every time I paid my Comcast bill. I called periodically to try to drop the cable package but was always told that would mean reducing my broadband speed.

The articles I’ve read assume that this pricing structure is intended to hurt cord cutters. But when this happened to me five years ago there were very few cord cutters. I’ve always assumed that Comcast wanted to maintain cable customer counts to please Wall Street and were willing to strongarm customers to do so. I was a cable customer in terms of counting, but I never watched any of the TV I was forced to buy. I always wondered how many other people were in the same position. For the last few years Comcast has lost fewer cable customers than the other big cable companies and perhaps this one policy is a big part of the reason for that.

Today it’s easier to make the argument that this is to punish cord cutters. This policy clearly harms those who refuse to buy the company’s cable products by forcing them into the company’s smallest bandwidth data products. Last year Comcast declared that they are now a broadband company and not just a traditional cable company – but this policy challenges that assertion.

Comcast is further punishing cord cutters by enforcing their data caps. Due to public outcry a few years ago they raised the monthly data limit to one terabyte. While that sounds generous, it's a number that is not that hard to hit for a house full of cord cutters. Over time more households will hit that limit and have to pay even more money for their broadband.
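To show how reachable a terabyte really is, here is a minimal sketch that assumes roughly 3 GB per hour for an HD stream (the actual rate varies by service and resolution, and 4K uses far more):

    # How much HD streaming fits under a 1 TB monthly data cap?
    cap_gb = 1000          # 1 terabyte cap, in GB
    gb_per_hour_hd = 3     # assumed HD streaming rate; 4K can run 7 GB/hour or more

    hours_per_month = cap_gb / gb_per_hour_hd
    hours_per_day = hours_per_month / 30
    print(f"{hours_per_month:.0f} hours of HD video per month, about {hours_per_day:.1f} hours per day")
    # Roughly 333 hours, or 11 hours a day - not hard for a cord-cutting family streaming
    # on several screens, before counting gaming, backups and video calls.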

This policy is a clear example of monopolist behavior. I’m positive that this policy is not invoked in those markets where Comcast is competing with a fiber overbuilder. There is no better way to identify the monopolist policies than by seeing what gets waived in competitive markets.

Unfortunately for the public there is no recourse against monopolistic behavior. The FCC has largely washed its hands of broadband regulation and is going to turn a deaf ear to issues like this. Comcast and the other big ISPs are now emboldened to implement any policies that will maximize their revenues at the expense of customers.

It's not hard to understand some of the ramifications of this policy. My 100 Mbps connection from Comcast was costing me over $100 per month – both a ridiculous price and one that is unaffordable for many homes. The scariest thing about these kinds of policies is that the cable company monopoly is strengthening as they chase out the last remnants of DSL. There will be huge numbers of markets where Comcast and the other large cable companies will be the only realistic broadband option.

I've noted in a few blogs that there seems to be a consensus on Wall Street that the big ISPs are going to significantly increase broadband prices over the next few years. They also continue to bill outrageous rates for cable modems and slap on hidden fees to further jack up prices. When you layer in policies like this one and data caps it's clear that Comcast cares about profits a whole lot more than they care whether households can afford broadband. I know that's inevitable monopoly behavior, and in an ideal world the federal government would step in to stop the worst monopoly abuses.

CenturyLink and Residential Broadband

CenturyLink is in the midst of a corporate reorganization that is going to result in a major shift in the focus of the company. The company merged with Level 3 in 2016 and the management team from Level 3 will soon be in charge of the combined business. Long-time CEO Glen Post is being pushed out of day-to-day management of the company and Jeff Storey, the former CEO of Level 3, will become the new CEO of CenturyLink. Storey was originally slated to take the top spot in 2019, but the transition has been accelerated and will happen this month.

It’s a shift that makes good financial sense for the company. Mr. Storey had huge success at Level 3 and dramatically boosted earnings and stock prices over the last four years. Mr. Storey and CenturyLink CFO Sunit Patel have both made it clear that they are going to focus on the more profitable enterprise business opportunities and that they will judge any investments in last-mile broadband in terms of the expected returns. This differs drastically from Mr. Post who comes from a background as an independent telephone company owner. As recently as a year ago Mr. Post publicly pledged to make the capital investments needed to improve CenturyLink’s last-mile broadband networks.

This is going to mean a drastic shift in the way that CenturyLink views residential broadband. The company lost 283,000 broadband customers in the year ending in December 2017, dropping them to 5.7 million broadband customers. The company blames the losses on the continued success of the cable companies in wooing away DSL customers.

The size of the customer losses is a bit surprising. CenturyLink said at the end of 2017 that they were roughly 60% through their CAF II upgrades, which are bringing better broadband to over 1.1 million rural households. Additionally, the company built FTTP past 900,000 potential business and residential customers in 2017. If the company was having even a modest amount of success with those two new ventures it's hard to understand how they lost so many broadband customers.

What might all of this mean for CenturyLink broadband customers? For rural customers it means that any upgrades being made using CAF II funding are likely the last upgrades they will ever see. Customers in these rural areas are already used to being ignored, and their copper networks are in lousy condition after decades of neglect by former owner Qwest.

CenturyLink is required by the CAF II program to upgrade broadband speeds in the rural areas to at least 10/1 Mbps. The company says that over half of the upgraded customers are seeing speeds of at least twice that. I’ve always had a concern about any of the big telcos reaching the whole CAF II footprint, and I suspect that when the CAF II money is gone, anybody that was not upgraded as promised will never see upgrades. I’ve also always felt that the CAF II money was a waste of money –  if CenturyLink walks away from the cost of maintaining these newly upgraded DSL networks they will quickly slide back into poor condition.

There is already speculation on Wall Street that CenturyLink might try to find a buyer for their rural networks. After looking at the problems experienced by Frontier and Fairpoint after buying rural copper networks, one has to wonder if there is a buyer for these properties. But in today's world of big-deal corporate finance it's not impossible to imagine some group of investors willing to tackle this. The company could also take a shot at selling rural exchanges to independent telcos – something US West did over twenty years ago.

It’s also likely that the company’s foray into building widespread FTTP in urban areas is done. This effort is capital intensive and only earns infrastructure returns that are not going to be attractive to the new management. I wouldn’t even be surprised to see the company sell off these new FTTP assets to raise cash.

The company will continue to build fiber, but with the emphasis on enterprise opportunities. They are likely to adopt a philosophy similar to AT&T's, which has been building residential fiber only to large apartment complexes and to households that are within a short distance of existing fiber POPs. This might bring fiber broadband to a lucky few, but mostly the new management team has made it clear they are deemphasizing residential broadband.

This management transition probably closes the book on CenturyLink as a last-mile ISP. If they are unable to find a buyer for these properties it might take a decade or more for their broadband business to quietly die. This is bad news for existing broadband customers because the company is unlikely to invest in keeping the networks in operational shape. The only ones who might perceive this as good news are those who have been thinking about overbuilding the company – they are not going to see any resistance.

Fiber Electronics and International Politics

In February six US intelligence agencies warned Americans against using cellphones made by Huawei, a Chinese manufacturer. They warned that the company is “beholden” to the Chinese government and that we shouldn't trust their electronics.

Recently Rep. Liz Cheney introduced a bill in Congress that would prohibit the US government or any contractors working for it from using electronics from Huawei or from another Chinese company, ZTE Corp. Additionally, any US military base would be prohibited from using any telecom provider that has equipment from these two vendors anywhere in its network.

For anybody who doesn't know these two companies, they manufacture a wide array of telecom gear. ZTE is one of the five largest cellphone makers in the world. They also make electronics for cellular networks, FTTP networks and long-haul fiber transport. The company sells under its own name, but also OEMs equipment for a number of other vendors, which can make it hard for a carrier to know whether they have gear originally manufactured by the company.

Huawei is even larger and is the largest maker of telecom electronics in the world, having passed Ericsson several years ago. The company's founder has close ties to the Chinese government and their electronics have been used to build much of the huge wireless and FTTP networks in China. The company makes cellphones and FTTP equipment and is also an innovator in equipment that can be used to upgrade cable HFC networks.

This is not the first time that there have been questions about the security of telecom electronics. In 2014 Edward Snowden released documents showing that the NSA had been planting backdoor software into Cisco routers being exported overseas from the US and that these backdoors could be used to monitor internet usage and emails passing through the routers. Cisco said that they had no idea the practice was occurring and that the software was being added to their equipment after it left their control.

Huawei and ZTE Corp also say that they are not monitoring users of their equipment. I would assume that the NSA and FBI have some evidence that at least the cellphones from these companies can be used to somehow monitor customers.

It must be hard to be a telecom company somewhere outside of the US and China because our two countries make much of the telecom gear in wide use. I have to wonder what a carrier in South America or Africa thinks about these accusations.

I have clients who have purchased electronics from these two Chinese companies. In the FTTP arena the two companies have highly competitive pricing, which is attractive to smaller ISPs updating their networks to fiber. Huawei also offers several upgrade solutions for HFC cable networks that are far less expensive than the handful of other vendors offering solutions.

The announcements by the US government create a quandary for anybody who has already put this gear into their network. At least for now the potential problems from using this equipment have not been specifically identified. So a network owner has no way of knowing if the problem is only with cellphones, if it applies to everything made by these companies, or even whether these warnings are political in nature rather than technical.

Any small carrier using this equipment likely cannot afford to remove and replace electronics from these companies in their networks. The folks I know using ZTE FTTP gear speak highly of the ease of using the electronics – which makes sense since these two companies have far more installed fiber customers worldwide than any other manufacturer.

Somebody with this equipment in their network faces several questions. Do they continue to complete networks that already use this gear, or should they somehow introduce a second vendor into their network – an expensive undertaking? Do they owe any warnings to their own customers (at the risk of losing those customers)? Do they do anything at all?

For now all that is in place is a warning from US intelligence agencies not to use the gear, but there is no prohibition from doing so. And even should the bill pass, it would only prohibit ISPs using the gear from providing telecom services to military bases – a business line that is largely handled by the big telcos with nationwide government contracts.

I have no advice to give clients on this other than to strongly consider not choosing these vendors for future projects. If the gear is as bad as it’s being made to sound then it’s hard to understand why the US government wouldn’t ban it rather than just warn about it. I can’t help but wonder how much of this is international wrangling over trade rather than any specific threat or risk.

Should We Regulate Google and Facebook?

I started to write a blog a few weeks ago asking the question of whether we should be regulating big web companies like Google and Facebook. I put that blog on hold due to the furor about Cambridge Analytica and Facebook. The original genesis for the blog was comments made by Michael Powell, the President and CEO of NCTA, the lobbying arm for the big cable companies.

At a speech given at the Cable Congress in Dublin, Ireland, Powell said that edge providers like Facebook, Google, Amazon and Apple “have the size, power and influence of a nation state”. He said that there is a need for antitrust rules to rein in the power of the big web companies. Powell put these comments into a framework of arguing that net neutrality is a weak attempt to regulate web issues and that regulation ought to instead focus on the real problems with the web, in areas like data privacy, technology addiction and fake news.

It was fairly obvious that Powell was trying to deflect attention away from the lawsuits and state legislation that are trying to bring back net neutrality and Title II regulation. Powell did make some good points about the need to regulate big web companies. But in doing so I think he also focuses attention back on the ISPs for some of the same behavior he sees at the big web providers.

I believe that Powell is right that there needs to be some regulation of the big edge providers. The US has adopted almost no regulations concerning these companies. It's easy to contrast our lack of laws here with the way these companies are regulated in the European Union. While the EU hasn't tackled everything, they have regulations in place in a number of areas.

The EU has tackled the monopoly power of Google as a search engine and advertiser. I think many people don't understand the power of Google ads. I recently stayed at a bed and breakfast and the owner told me that his Google ranking had become the most important factor in his ability to function as a business. Any time Google changes its algorithms and his ranking drops in searches, he sees an immediate drop-off in business.

The EU also recently introduced strong privacy regulations for web companies. Under the new rules consumers must opt in to having their data collected and used. In the US web companies are free to use customer information in any manner they choose – and we just saw from the example of Cambridge Analytica how big web companies like Facebook monetize consumer data.

But even the EU regulations are going to have little impact if people grant the big companies the ability to use their data. One thing that these companies know about us is that we willingly give them access to our lives. People take Facebook personality tests without realizing that they are providing a detailed portrait of themselves to marketeers. People grant permissions to apps to gather all sorts of information about them, such as a log of every call made from their cellphone. Recent revelations show that people even unknowingly grant the right for some apps to read their personal messages.

So I think Powell is right that there needs to be some regulation of the big web companies. Probably the most needed regulation is one of total transparency, where people are told in a clear manner how their data will be used. I suspect people might be less willing to sign up for a game or app if they understood that the app provider is going to glean all of the call records from their cellphone.

But Powell is off base when he thinks that the actions of the edge providers somehow let ISPs off the hook for similar regulation. There is one big difference between all of the edge providers and the ISPs. Regardless of how much market power the web companies have, people are not required to use them. I dropped off Facebook over a year ago because of my discomfort with their data gathering.

But you can't avoid having an ISP. For most of us the only ISP options are one or two of the big ISPs. Most people are in the same boat as me – my choice for ISP is either Charter or AT&T. There is some small percentage of consumers in the US who can instead use a municipal ISP, an independent telco or a small fiber overbuilder that promises not to use their data. But everybody else has little option but to use one of the big ISPs and is then at the mercy of their data gathering practices. We have even fewer choices in the cellular world since four providers serve almost every customer in the country.

I was never convinced that Title II regulation went far enough – but it was better than nothing as a tool to put some constraints on the big ISPs. When the current FCC killed Title II regulation they essentially set the ISPs free to do anything they want – broadband is nearly totally unregulated. I find it ironic that Powell wants to see rules that curb market abuse by Google and Facebook while saying at the same time that the ISPs ought to be off the hook. The fact is that they all need to be regulated unless we are willing to live with the current state of affairs where ISPs and edge providers are able to use customer data in any manner they choose.

$600M Grants Only for Telcos?

The Omnibus Budget bill that was passed by Congress last Thursday and signed by the President on Friday includes $600 million of grant funding for rural broadband. This is hopefully a small down payment towards the billions of funding needed to improve rural broadband everywhere. As you might imagine, as a consultant I got a lot of inquiries about this program right away on Friday.

The program will be administered by the Rural Utilities Service (RUS). Awards can consist of grants and loans, although it's not clear at this early point if loan funding would be included as part of the $600 million or made in addition to it.

The grants only require a 15% match from applicants, although past federal grant programs suggest that applicants willing to contribute more matching funds will get more favorable consideration.

When I look at the first details of the new program I have a hard time seeing this money being used by anybody other than telcos. One of the provisions of the grant money is that it cannot be used to fund projects except in areas where at least 90% of households don’t already have access to 10/1 Mbps broadband. One could argue that there are no longer any such places in the US.

The FCC previously awarded billions to the large telcos to upgrade broadband throughout rural America to at least 10/1 Mbps. The FCC also has been providing money from the A-CAM program to fund broadband upgrades in areas served by the smaller independent telephone companies. Except for a few places where the incumbents elected not to take the previous money – such as in some Verizon areas – these programs effectively cover any sizable pocket of households without access to 10/1 broadband.

Obviously, many of the areas that got the earlier federal funding have not yet been upgraded, and I had a recent blog that noted the progress of the CAF II program. But I have a hard time thinking that the RUS is going to provide grants to bring faster broadband to areas that are already slated to get CAF II upgrades within the next 2 ½ years. Once upgraded, all of these areas will theoretically have enough homes with broadband to fail the new 90% test.

If we look at past federal grant programs, the large incumbent telcos have been allowed a chance to intervene and block any grant requests for their service areas that don’t meet all of the grant rules. I can foresee AT&T, CenturyLink and Frontier intervening in any grant request that seeks to build in areas that are slated for near-term CAF II upgrades. I would envision the same if somebody tried to get grant money to build in an area served by smaller telcos who will be using A-CAM money to upgrade broadband.

To make matters even more complicated, the upcoming CAF II reverse auction will be providing funds to fill in the service gaps left from the CAF II program. But for the most part the homes covered by the reverse auction are not in any coherent geographic pockets but are widely scattered within existing large telco service areas. In my investigation of the reverse auction maps I don't see many pockets where fewer than 10% of homes already have access to 10/1 broadband – that is, few areas that would still qualify under the new grant rules.

Almost everybody I know in the industry doesn’t think the large telcos are actually going to give everybody in the CAF II areas 10/1 Mbps broadband. But it’s likely that they will tell the FCC that they’ve made the needed upgrades. Since these companies are also the ones that update the national broadband map, it’s likely that CAF II areas will all be shown as having 10/1 Mbps broadband, even if they don’t.

There may be some instances where little pockets of homes might qualify for these grants, and where somebody other than a telco could ask for the funding. But if the RUS strictly follows the mandates of the funding and won't provide funds for places where more than 10% of homes already have 10/1 Mbps, then this money almost has to go to telcos, by definition. Telcos will be able to ask for this money to help pay for the remaining CAF II and A-CAM upgrades. There is nothing wrong with that, and that's obviously what the lobbyists who authored this grant language intended – but the public announcement of the grant program is not likely to make that clear to the many other entities who might want to seek this funding. It will be shameful if most of this money goes to AT&T, CenturyLink and Frontier, who were already handed billions to make these same upgrades.

I also foresee one other effect of this program. Anybody who is in the process of seeking new RUS funding should expect their request to go on hold for a year since the RUS will now be swamped with administering this new crash grant program. It took years for the RUS to recover from the crush of the Stimulus broadband grants and they are about to get buried in grant requests again.

Data Caps Again?

My prediction is that we are going to see more stringent data caps in our future. Some of the bigger ISPs have data caps today, but for the most part the caps are not onerous. I foresee stricter data caps being reintroduced as another way for big ISPs to improve revenues.

You might recall that Comcast tried to introduce a monthly 300 GB data cap in 2015. When customers hit that mark Comcast was going to charge $10 for every additional 50 GB of download, or $30 extra for unlimited downloading.
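A quick sketch of that proposed pricing shows how little overage it took before the flat $30 unlimited option became the cheaper choice:

    # Comcast's proposed 2015 overage pricing: $10 per extra 50 GB block, or $30 flat for unlimited.
    import math

    def overage_charge(usage_gb, cap_gb=300, block_gb=50, block_price=10):
        """Metered overage charge, billed in whole 50 GB blocks."""
        over = max(0, usage_gb - cap_gb)
        return math.ceil(over / block_gb) * block_price

    for usage in (300, 350, 450, 500):
        print(f"{usage} GB used -> metered charge ${overage_charge(usage)}, unlimited option $30")
    # At 450 GB (150 GB over the cap) the metered charge hits $30, so any heavier user
    # was effectively pushed onto the unlimited add-on.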

There was a lot of public outcry about those data caps. Comcast backed down from the plan due to pressure from the Tom Wheeler FCC. At the time the FCC probably didn’t have the authority to force Comcast to kill the data caps, but the nature of regulation is that big companies don’t go out of their way to antagonize regulators who can instead cause them trouble in other areas.

To put that Comcast data cap into perspective, in September of 2017 Cisco predicted that home downloading of video would increase 31% per year through 2021. They estimated the average household data download in 2017 was already around 130 GB per month. You might think that means most people wouldn't be worried about the data caps. But it's easy to underestimate the impact of compound growth, and at a 31% growth rate the average household download of 130 GB would grow to 383 GB by 2021 – considerably over Comcast's proposed data cap.
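The compound growth math is easy to verify; a minimal sketch using Cisco's estimate:

    # Cisco's 2017 estimate: ~130 GB per household per month, growing ~31% per year.
    usage_gb = 130
    annual_growth = 0.31

    for year in range(2017, 2022):
        print(f"{year}: {usage_gb:.0f} GB per month")
        usage_gb *= 1 + annual_growth
    # By 2021 the average household lands near 383 GB - well past the 300 GB cap Comcast
    # proposed in 2015 and more than a third of the way to today's 1 TB caps.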

Even now there are a lot of households that would be over that cap. It's likely that most cord cutters use more than 300 GB per month – and it can be argued that Comcast's data caps would punish those who drop their video. My daughter is off to college now and our usage has dropped, but we got a report from Comcast when she was a senior that said we used over 600 GB per month.

So what are the data caps for the largest ISPs today?

  • Charter, Altice, Verizon and Frontier have no data caps.
  • Comcast moved their data cap to 1 terabyte, with $10 for each extra 50 GB and $50 monthly for unlimited download.
  • AT&T has some of the stingiest data caps. The cap on DSL is 150 GB, on U-verse it's 250 GB, on 300 Mbps FTTH it's 1 TB, and gigabit service is unlimited. They charge $10 per extra 50 GB.
  • CenturyLink has a 1 TB cap on DSL and no cap on fiber.
  • Cox has a 1 TB cap with $30 for an extra 500 GB or $50 unlimited.
  • Cable One has no overage charge but largely forces customers who go over the caps to upgrade to more expensive data plans. Their caps are stingy – the cap on a 15 Mbps connection is just 50 GB.
  • Mediacom has perhaps the most expensive data caps – 60 Mbps cap is 150 GB, 100 Mbps is 1 TB. But the charge for violating the cap is $10 per GB or $50 for unlimited.

Other than those from AT&T, Mediacom and Cable One, none of these caps sound too restrictive.

Why do I think we'll see data caps again? All of the ISPs are looking forward just a few years and wondering where they will find the revenues to meet Wall Street's demand for ever-increasing earnings. The biggest cable companies are still growing broadband customers, mostly by taking customers from DSL. But they understand that the US broadband market is approaching saturation – much like what has happened with cellphones. Once every home that wants broadband has it, these companies are in trouble because bottom-line growth for the last decade has been fueled by the growth of broadband customers and revenues.

A few big ISPs are hoping for new revenues from other sources. For instance, Comcast has already launched a cellular product and is also seeing good success with security and smart home services. But even they will be impacted when broadband sales inevitably stall – other ISPs will feel the pinch before Comcast.

ISPs only have a few ways to make more money once customer growth has stalled, with the primary one being higher rates. We saw some modest increases in broadband rates earlier this year – something that was noticeable because rates had been the same for many years. I fully expect we'll start seeing sizable annual increases in broadband rates – which go straight to the bottom line for ISPs. The impact from broadband rate increases is major for these companies – Comcast and Charter, for example, each make roughly an extra $250 million per year from a $1 increase in broadband rates.
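The arithmetic behind that kind of figure is simple; a sketch using a round, illustrative subscriber count rather than either company's reported numbers:

    # Annual revenue impact of a $1/month broadband rate increase.
    def annual_gain(subscribers, monthly_increase=1.00):
        return subscribers * monthly_increase * 12

    # ~21 million broadband subscribers is an illustrative round number in the general
    # range of the biggest cable companies' subscriber counts.
    print(f"${annual_gain(21_000_000) / 1e6:,.0f} million per year")   # about $252 million
    # Nearly all of that drops straight to the bottom line since the cost to serve doesn't change.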

Imposing stricter data caps can be as good as a rate increase for an ISP. They can justify it by saying that they are only charging more to those who use the network the most. As we see earnings pressure on these companies I can't see them passing up such an easy way to increase earnings. In most markets the big cable companies are a near monopoly, and consumers who need decent speeds have fewer alternatives as each year passes. Since the FCC has now walked away from broadband regulation there will be no regulatory hindrance to the return of stricter data caps.

Charter Upgrading Broadband

We are now starting to see the results of cable companies upgrading to DOCSIS 3.1. Charter, the second biggest ISP in the country, recently announced that it will be able to offer gigabit speeds to virtually its whole footprint of over 40 million passings.

DOCSIS 3.1 is the newest protocol from Cable Labs and allows bonding a virtually unlimited number of spare channel slots for broadband. A gigabit data path requires roughly 24 channels on a cable network using the new DOCSIS protocol. In bigger markets this replaces DOCSIS 3.0, which was generally limited to maximum download speeds in the range of 250 Mbps. I know there are Charter markets with even slower speeds that either operate under older DOCSIS standards or that are slow for some other reason.
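A rough way to see where the 24-channel figure comes from, assuming the ~42.9 Mbps of raw capacity a 6 MHz QAM-256 channel carries (this is DOCSIS 3.0-style math; DOCSIS 3.1's wider OFDM channels are more efficient, but the rule of thumb is similar):

    # Why roughly 24 bonded channels adds up to a gigabit.
    channels = 24
    mbps_per_channel = 42.9   # raw capacity of a 6 MHz QAM-256 downstream channel

    total_mbps = channels * mbps_per_channel
    print(f"{channels} channels x {mbps_per_channel} Mbps = {total_mbps:,.0f} Mbps")
    # About 1,030 Mbps of raw capacity - enough to market a 940 Mbps gigabit tier
    # after protocol overhead.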

Charter has already begun the upgrades and is now offering gigabit speeds to 9 million passings in major markets like Oahu, Hawaii; Austin, Texas; San Antonio, Texas; Charlotte, North Carolina; Cincinnati, Ohio; Kansas City, Missouri; New York City; and Raleigh-Durham, North Carolina. It's worth noting that those are all markets where there is fiber competition, so it's natural they would upgrade these first.

The new increased speed won’t actually be a gigabit and will be 940 Mbps download and 35 Mbps upload. (It’s hard to think there is anybody who is really going to care about that distinction). Cable Labs recently came out with a DOCSIS upgrade that can increase upload speeds, but there’s been no talk from Charter about making that upgrade. Like the other big cable companies, Charter serves businesses that want faster upload speeds with fiber.

Along with the introduction of gigabit broadband the company also says it's going to increase the speed of its minimum broadband product. In the competitive markets listed above Charter has already increased the speed of its base product to 200 Mbps download, up from 100 Mbps.

It's going to be interesting to find out what Charter means by the promise to cover “virtually” their whole footprint. Charter grew by purchasing systems in a wide range of conditions. I know of smaller Charter markets where customers don't get more than 20 Mbps. There is also a well-known lawsuit against Charter in New York State that claims that a lot of households in upstate New York are getting speeds far slower than advertised due to having outdated cable modems.

The upgrade to DOCSIS 3.1 can be expensive in markets that have not yet been upgraded to DOCSIS 3.0. An upgrade might mean replacing power taps and other portions of the network, and in some cases might even require a replacement of the coaxial cable. My guess is that the company won't rush to upgrade these markets to DOCSIS 3.1 this year. I'm sure the company will look at them on a case-by-case basis.

The company has set a target price for a gigabit at $124.95. But in competitive markets like Oahu the company has already been selling introductory packages for $104.99. There is also a bundling discount for cable subscribers.

The pricing list highlights that they still have markets with advertised speeds as low as 30 Mbps – and the company's price for the minimum speed product is the same everywhere, regardless of whether that product is 30 Mbps or 200 Mbps. And as always with cable networks, these are ‘up to’ speeds and, as I mentioned, there are markets that don't meet these advertised speeds today.

Overall this ought to result in a lot of homes and businesses getting faster broadband than today. We saw something similar back when the cable companies implemented DOCSIS 3.0 and the bigger companies unilaterally increased speeds to customers without increasing prices. Like other Charter customers, I will be curious about what they do in my market. I have the 60 Mbps product and I'll be interested to see if my speed is increased to 100 Mbps or 200 Mbps and if I'm offered a gigabit here. With the upgrade time frame they are promising I shouldn't have to wait long to find out.

Spectrum and 5G

All of the 5G press has been talking about how 5G is going to be bringing gigabit wireless speeds everywhere. But that is only going to be possible with millimeter wave spectrum, and even then it requires a reasonably short distance between sender and receiver as well as bonding together more than one signal using multiple MIMO antennae.

It's a shame that we've let the wireless marketeers equate 5G with gigabit speeds, because that's what the public is going to expect from every 5G deployment. As I look around the industry I see a lot of other uses for 5G that are going to produce speeds far slower than a gigabit. 5G is a standard that can be applied to any wireless spectrum and that brings some benefits over earlier standards. 5G makes it easier to bond multiple channels together for reaching one customer. It also can increase the number of connections that can be made from any given transmitter – with the biggest promise being that the technology will eventually allow connections to large quantities of IoT devices.

Anybody who follows the industry knows about the 5G gigabit trials. Verizon has been loudly touting its gigabit 5G connections using the 28 GHz frequency and plans to launch the product in up to 28 markets this year. They will likely use this as a short-haul fiber replacement to allow them to more quickly add a new customer to a fiber network or to provide a redundant data path to a big data customer. AT&T has been a little less loud about their plans and is going to launch a similar gigabit product using 39 GHz spectrum in three test markets soon.

But there are also a number of announcements for using 5G with other spectrum. For example, T-Mobile has promised to launch 5G nationwide using its 600 MHz spectrum. This is a traditional cellular spectrum that is great for carrying signals for several miles and for going around and through obstacles. T-Mobile has not announced the speeds it hopes to achieve with this spectrum. But the data capacity of 600 MHz is limited, and bonding numerous signals together for one customer will create something faster than LTE, but not spectacularly so. It will be interesting to see what speeds they can achieve in a busy cellular environment.

Sprint is taking a different approach and is deploying 5G using its 2.5 GHz spectrum. They have been testing the use of massive MIMO antennas that contain 64 transmit and 64 receive channels. This spectrum doesn't travel far when used for broadcast, so this technology is going to be used best with small cell deployments. The company claims to have achieved speeds as fast as 300 Mbps in trials in Seattle, but that would require bonding together a lot of channels, so a commercial deployment is going to be a lot slower in a congested cellular environment.

Outside of the US there seems to be a growing consensus to use 3.5 GHz – the Citizens Broadband Radio Service (CBRS) spectrum. That raises the interesting question of which frequencies will end up winning the 5G race. In every new wireless deployment the industry needs to reach an economy of scale in the manufacture of both the radio transmitters and the cellphones or other receivers. Only then can equipment prices drop to the point where a 5G-capable phone will be similar in price to a 4G LTE phone. So the industry at some point soon will need to reach a consensus on the frequencies to be used.

In the past we rarely saw a consensus; rather, some manufacturer and wireless company won the race to get customers and dragged the rest of the industry along. This has practical implications for early adopters of 5G. For instance, somebody buying a 600 MHz phone from T-Mobile is only going to be able to use that data function when near a T-Mobile tower or small cell. Until industry consensus is reached, phones that use a unique spectrum are not going to be able to roam on other networks like happens today with LTE.

Even phones that use the same spectrum might not be able to roam on other carriers if they are using the frequency differently. There are now 5G standards, but we know from practical experience with other wireless deployments in the past that true portability between networks often takes a few years as the industry works out bugs. This interoperability might be sped up a bit this time because it looks like Qualcomm has an early lead in the manufacture of 5G chip sets. But there are other chip manufacturers entering the game, so we’ll have to watch this race as well.

A word of warning to buyers of first-generation 5G smartphones: they are going to have issues. For now it's likely that the MIMO antennae are going to use a lot of power and will drain cellphone batteries quickly. And the ability to reach a 5G data signal is going to be severely limited for a number of years as the cellular providers extend their 5G networks. Unless you live and work in the heart of one of the trial 5G markets it's likely that these phones will be a bit of a novelty for a while – but will still give a user bragging rights for the ability to get a fast data connection on a cellphone.

Edging Closer to Satellite Broadband

A few weeks ago Elon Musk’s SpaceX launched two test satellites that are the first in a planned low-orbit satellite network that will blanket the earth with broadband. The eventual network, branded as Starlink, will consist of 4,425 satellites deployed at 700 miles above earth and another 7,518 deployed at around 210 miles of altitude.

Getting that many satellites into orbit is a daunting logistical task. To put this into perspective, the nearly 12,000 satellites needed are twice the number of satellites that have been launched in history. It's going to take a lot of launches to get these into the sky. SpaceX's workhorse rocket, the Falcon 9, can carry about ten satellites at a time. They have also tested a Falcon Heavy system that could carry 20 or so satellites at a time. Even with a weekly launch of the larger rocket that's still nearly 600 launches and would take 11.5 years. To put that number into perspective, the US led the world with 29 successful satellite launches last year, with Russia second at 21 and China at 16.
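The launch cadence math is easy to reproduce; a quick sketch using the satellite counts and the rough per-launch capacity mentioned above:

    # How long to loft the planned Starlink constellation at one Falcon Heavy launch per week?
    satellites = 4_425 + 7_518    # the two orbital shells from the FCC filing
    per_launch = 20               # rough Falcon Heavy capacity assumed above

    launches = satellites / per_launch
    years = launches / 52
    print(f"{launches:.0f} launches, about {years:.1f} years at one launch per week")
    # Nearly 600 launches and ~11.5 years - versus 29 successful US launches in all of last year.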

SpaceX is still touting this as a network that can make gigabit connections to customers. I've read the FCC filing for the proposed network several times and it looks to me like that kind of speed will require combining signals from multiple satellites to a single customer, and I have to wonder if that's practical when talking about deploying this network to tens of millions of simultaneous subscribers. It's likely that their standard bandwidth offering is going to be something significantly slower.

There is also a big question in my mind about the capacity of the backhaul network that carries signals to and from the satellites. It's going to take some major bandwidth to handle the volume of broadband users that SpaceX has in mind. We are seeing landline long-haul fiber networks today that are stressed and reaching capacity. The satellite network will face the same backhaul problems as everybody else and will have to find ways to cope in a world where broadband demand doubles every three years or so. If the satellite backhaul gets clogged or if the satellites get oversubscribed then the quality of broadband will degrade like with any other network.
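If demand really does double every three years, the backhaul problem compounds quickly; a minimal sketch:

    # Traffic growth if broadband demand doubles roughly every 3 years.
    for years in (3, 6, 9, 12):
        factor = 2 ** (years / 3)
        print(f"after {years} years: {factor:.0f}x today's traffic")
    # 2x, 4x, 8x, 16x - the satellite backhaul has to ride the same curve that is already
    # stressing terrestrial long-haul fiber.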

Interestingly, SpaceX is not the only one chasing this business plan. For instance, billionaire Richard Branson wants to build a similar network that would put 720 low-orbit satellites over North America. Telesat has launched two different test satellites and also wants to deploy a large satellite network. Boeing has also announced intentions to launch a 1,000-satellite network over North America. It's sounding like our skies are going to get pretty full!

SpaceX is still predicting that the network is going to cost roughly $10 billion to deploy. There's been no talk of consumer prices yet, but the company obviously has a business plan – Musk wants to use this business as the primary way to fund the colonization of Mars. But pricing is an issue for a number of reasons. The satellites will have some finite capacity for customer connections. In one of the many articles I read I saw that the goal for the network is 40 million customers (I don't know if that's the right number, but there is some number of simultaneous connections the network can handle). 40 million customers sounds huge, but with a current worldwide population of over 7.6 billion people it's minuscule for a worldwide market.
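For a sense of scale on that 40 million number (a sketch; the real constraint is per-satellite and per-region capacity, not one global total):

    # 40 million subscribers measured against the world's population.
    target_customers = 40_000_000
    world_population = 7_600_000_000

    print(f"{target_customers / world_population:.2%} of the world's population")   # about 0.5%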

There are those predicting that this will be the salvation for rural broadband. But I think that’s going to depend on pricing. If this is priced affordably then there will be millions in cities who would love to escape the cable company monopoly, and who could overwhelm the satellite network. There is also the issue of local demand. Only a limited number of satellites can see any given slice of geography. The network might easily accommodate everybody in Wyoming or Alaska, but won’t be able to do the same anywhere close to a big city.

Another issue is worldwide pricing. A price that makes sense in the US might be ten times higher than what will be affordable in Africa or Asia. So there are bound to be pricing differences based upon regional incomes.

One of the stickier issues will be the reaction of governments that don't want citizens using the network. There is no way China is going to let citizens bypass the Great Firewall of China by going through these satellites. Repressive regimes like North Korea will likely make it illegal to use the network. And even democratic countries like India might not like the idea – last year they turned down free Internet from Facebook because it wasn't an ‘Indian’ solution.

The bottom line is that this is an intriguing idea. If the technology works as promised, and if Musk can find the money and figure out the logistics to get this launched, it's going to be another new source of broadband. But satellite networks are not going to solve the world's broadband problems because they are only going to be able to help a small percentage of the world's population. With that said, a remote farm in the US or a village in Africa is going to love this when it's available.