The Fastest and Slowest Internet in the US

The web site calculated and ranked average Internet speeds by state. The site offers a speed test and then connects visitors to the web pages of the various ISPs in each zip code in the country. I have to imagine the site makes a commission on broadband customers who subscribe through their links.

Not surprisingly, the east coast states with Verizon FiOS ranked at the top of the list for Internet speeds since many customers in those states have the choice between a fiber network and a big cable company network.

For example, Maryland was at the top of the list with an average speed of 65 Mbps, as measured by the site’s speed tests. It was followed by New Jersey at 59.6 Mbps, Delaware at 59.1 Mbps, Rhode Island at 56.8 Mbps and Virginia at 56 Mbps.

Even though it tops the list, Maryland is like most states: there are still rural areas with slow or non-existent broadband. The average speed test result aggregates all of the various kinds of broadband customers in the state:

  • Customers with fast Verizon FiOS products
  • Customers with fast broadband from Comcast, the largest ISP in the state
  • Customers that have elected slower, but less expensive DSL options
  • Rural customers with inferior broadband connections

Considering all of the types of customers in the state, an average speed test result of 65 Mbps is impressive. It means that a lot of households in the state have speeds of 65 Mbps or faster. That’s not a surprise considering that both Verizon FiOS and Comcast have base product speeds considerably faster than 65 Mbps. If I were a Maryland politician, I’d be more interested in the distribution curve making up this average. I’d want to know how many speed tests were done by households getting speeds of only a few Mbps. I’d want to know how many gigabit homes were in the mix – gigabit is so much faster than the other broadband products that it pulls up the average speed.
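To illustrate why the distribution matters more than the average, here’s a minimal sketch in Python; the speed mix below is invented for illustration and is not actual Maryland data. A couple of gigabit homes are enough to drag the mean well above what the typical household experiences:

```python
# Invented, illustrative speed-test results in Mbps -- NOT actual Maryland data.
# 10 very slow rural links, 20 DSL, 55 mid-tier cable, 13 fast cable, 2 gigabit.
speeds = [3] * 10 + [12] * 20 + [50] * 55 + [110] * 13 + [940] * 2

mean = sum(speeds) / len(speeds)
median = sorted(speeds)[len(speeds) // 2]
slow_share = sum(1 for s in speeds if s < 25) / len(speeds)

print(f"mean:   {mean:.1f} Mbps")          # a healthy-looking average
print(f"median: {median} Mbps")            # the typical household sees much less
print(f"under 25 Mbps: {slow_share:.0%}")  # a big slice has no real broadband
```

In this made-up mix the mean lands in the low 60s Mbps while the median household gets 50 Mbps and nearly a third of households fall below the FCC’s 25 Mbps threshold, which is exactly the kind of detail a statewide average hides.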

I’d also be interested in speeds by zip code. I took a look at the FCC broadband data reported on the 477 forms just for the city of Baltimore and I see widely disparate neighborhoods in terms of broadband adoption. There are numerous neighborhoods just north of downtown Baltimore with broadband adoption rates as low as 30%, and numerous neighborhoods under 40%. Just south of downtown and in the northernmost extremes of the city, the broadband adoption rates are between 80% and 90%. I have to guess that the average broadband speeds are also quite different in these various neighborhoods.

I’ve always wondered about the accuracy of compiling the results of mass speed tests. Who takes these tests? Are people with broadband issues more likely to take the tests? I have a friend who has gigabit broadband and he tests his speed all of the time just to see that he’s still getting what he’s paying for (just FYI, he’s never measured a true gigabit, just readings in the high 900s Mbps). I take a speed test every time I read something about speeds. I took the speed test at this site from my office and got a download speed of 43 Mbps. My office happens to be in the most distant corner of the house from the incoming cable modem, and at the connection to the Charter modem we get 135 Mbps. My slower results on this test are due to WiFi, and yet this website will log me as an underperforming Charter connection.

There were five states at the bottom of the ranking. Last was Alaska at 17 Mbps, followed by Mississippi at 24.8 Mbps, Idaho at 25.3 Mbps, Montana at 25.7 Mbps and Maine at 26 Mbps. That’s five states where the average internet speed is near or below the FCC’s 25 Mbps definition of broadband.

The speeds in Alaska are understandable due to the remoteness of many of the communities. There are still numerous towns and villages that receive Internet backhaul through satellite links. I recently read that the first fiber connection between the US mainland and Alaska is just now being built. That might help speeds some, but there is a long way to go to string fiber backhaul to the remote parts of the state.

Mostly what the bottom of the scale shows is that states that are both rural and somewhat poor end up at the bottom of the list. Interestingly, the states with the lowest household densities such as Wyoming and South Dakota are not in the bottom five due to the widespread presence of rural fiber built by small telcos.

What most matters about this kind of headline is that even in the states with fast broadband there are still plenty of customers with lousy broadband. I would hope that Maryland politicians don’t look at this headline and think that their job is done – by square miles of geography the majority of the state still lacks good broadband.

Access to Low-Price Broadband

The consumer advocate BroadbandNow recently analyzed broadband prices across the US and came to several conclusions:

  • Broadband prices are higher in rural America.
  • 45% of households don’t have access to a ‘low-priced plan’ for a wired Internet connection.

They based their research on the published prices of over 2,000 ISPs. As somebody who does that same kind of research in individual markets, I can say that there is often a big difference between published rates and actual rates. Smaller ISPs tend to charge the prices they advertise, so the prices that BroadbandNow found in rural America are likely the prices most customers really pay.

However, the big ISPs in urban areas routinely negotiate rates with customers and a significant percentage of urban broadband customers pay something less than the advertised rates. But the reality is messier even than that since a majority of customers still participate in a bundle of services. It’s usually almost impossible to know the price of any one service inside a bundle and the ISP only reveals the actual rate when a customer tries to break the bundle to drop one of the bundled services. For example, a customer may think they are paying $50 for broadband in a bundle but find out their real rate is $70 if they try to drop cable TV. These issues make it hard to make any sense out of urban broadband rates.

I can affirm that rural broadband rates are generally higher. A lot of rural areas are served by smaller telcos and these companies realize that they need to charge higher rates in order to survive. As the federal subsidies to rural telcos have been reduced over the years these smaller companies have had to charge realistic rates that match their higher costs of doing business in rural America.

I think rural customers understand this. It’s a lot more expensive for an ISP to provide broadband in a place where there are only a few customers per road-mile of network than in urban areas where there might be hundreds of customers per mile. A lot of other commodities cost more in rural America for this same reason.

What this report is not highlighting is that the lower-price broadband in urban areas is DSL. The big telcos have purposefully priced DSL below the cost of cable modem broadband as their best strategy to keep customers. When you find an urban customer that’s paying $40 or $50 for broadband it’s almost always going to be somebody using DSL.

This raises the question of how much longer urban customers will continue to have the DSL option. We’ve already seen Verizon abandon copper-based products in hundreds of urban exchanges in the last few years. Customers in those exchanges can theoretically now buy FiOS on fiber – and pay more for the fiber broadband. This means for large swaths of the northeast urban centers that the DSL option will soon be gone forever. There are persistent industry rumors that CenturyLink would like to get out of the copper business, although I’ve heard no ideas of how they might do it. It’s also just a matter of time before AT&T starts walking away from copper. Will there even be any urban copper a decade from now? Realistically, as DSL disappears with the removal of copper the lowest prices in the market will disappear as well.

There is another trend that impacts the idea of affordable broadband. We know that the big cable companies now understand that their primary way to keep their bottom line growing is to raise broadband rates. We’ve already seen big broadband rate increases in the last year, such as the $5 rate increase from Charter for bundled broadband.

The expectation on Wall Street is that the cable companies will regularly increase broadband rates going into the future. One analyst a year ago advised Comcast that basic broadband ought to cost $90. The cable companies are raising broadband rates in other quieter ways. Several big cable companies have told their boards that they are going to cut back on offering sales incentives for new customers and they want to slow down on negotiating rates with existing customers. It would be a huge rate increase for most customers if they are forced to pay the ‘list’ prices for broadband.

We also see carriers like Comcast starting to collect significant revenues from customers who go over the monthly data caps. As household broadband volumes continue to grow, the percentage of people exceeding their monthly cap should grow rapidly. We’ve also seen ISPs jack up the cost of WiFi or other modems as a backdoor way to get more broadband revenue.

As the cable companies find ways to extract more revenue from broadband customers and as the big telcos migrate away from DSL, my bet is that a decade from now there will be very few customers with ‘affordable’ broadband. Every trend is moving in the opposite direction.

Gaming Migrates to the Cloud

We are about to see a new surge in demand for broadband as major players in the game industry have decided to move gaming to the cloud. At the recent Game Developers Conference in San Francisco both Google and Microsoft announced major new cloud-based gaming initiatives.

Google announced Stadia, a platform that they tout as being able to play games from anywhere with a broadband connection, on any device. During the announcement they showed a live streaming game being transferred from desktop to laptop to cellphone. Microsoft announced the new xCloud platform that lets Xbox gamers play a game from any connected device. Sony has been promoting online play between gamers for many years and now also offers some cloud gaming on the PlayStation Now platform.

OnLive tried this in 2011, offering a platform that was played in the cloud using OnLive controllers, but without needing a computer. The company failed due to the quality of broadband connections in 2011, but also due to limitations at the gaming data centers. Both Google and Microsoft now operate regional data centers around the country that house state-of-the-art whitebox routers and switches that are capable of handling large volumes of simultaneous gaming sessions. As those companies have moved large commercial users to the cloud they created the capability to also handle gaming.

The gaming world was ripe for this innovation. Current gaming ties gamers to gaming consoles or expensive gaming computers. Cloud gaming brings mobility to gamers and also eliminates the need to buy expensive gaming consoles. This move to the cloud probably signals the beginning of the end for the Xbox, PlayStation, and Nintendo consoles.

Google says it will support some games at the equivalent of an HD video stream – 1080p at 60 frames per second. That equates to about 3 GB of download per hour. But most of the Google platform is going to operate at 4K video speeds, requiring download speeds of at least 25 Mbps per gaming stream and using 7.2 GB of data per hour. Nvidia has been telling gamers that they need 50 Mbps per 4K gaming connection.
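The relationship between a stream’s average bitrate and hourly data usage is simple arithmetic. Note that the 25 Mbps connection requirement includes headroom above the stream’s average bitrate; working backward, 7.2 GB per hour implies a 4K stream averaging about 16 Mbps. A quick sketch of the conversion:

```python
def gb_per_hour(mbps: float) -> float:
    """Gigabytes consumed in one hour by a stream averaging `mbps` megabits/second.
    megabits/s * 3600 s = megabits; / 8 = megabytes; / 1000 = gigabytes."""
    return mbps * 3600 / 8 / 1000

# Working backward from the figures above:
hd_rate = 3.0 * 1000 * 8 / 3600  # ~6.7 Mbps average bitrate implied by 3 GB/hour
four_k = gb_per_hour(16)         # a 16 Mbps average bitrate yields 7.2 GB/hour
```

The gap between the 16 Mbps average and the 25 Mbps (or Nvidia’s 50 Mbps) connection requirement is the burst headroom a game needs to stay responsive.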

This shift has huge implications for broadband networks. First, streaming causes the most stress on local broadband networks since the usage is continuous over long periods of time. A lot of ISP networks are going to start showing data bottlenecks when significant numbers of additional users stream 4K connections for hours on end. Until ISPs react to this shift, we might return to those times when broadband networks bogged down in prime time.

This is also going to increase the need for faster download and upload speeds. Households won’t be happy with a connection that can’t stream 4K, so they aren’t going to be satisfied with the 25 Mbps connection that the FCC says is broadband. I have a friend with two teenage sons who both run two simultaneous game streams while watching a streaming gaming TV site. It’s good that he is able to buy a gigabit connection on Verizon FiOS, because his sons alone are using a continuous broadband connection of at least 110 Mbps, and probably more.

We are also going to see more people looking at the latency on networks. The conventional wisdom is that a gamer with the fastest connection has an edge. Gamers value fiber over cable modems and value cable modems over DSL.

This also is going to bring new discussion to the topic of data caps. Gaming industry statistics say that the average serious gamer plays 16 hours per week. Obviously, many play longer than the average. My friend with the two teenagers is probably looking at at least 30 GB per hour of broadband download usage, plus a decent chunk of upload usage. Luckily for my friend, Verizon FiOS has no data cap. Many other big ISPs like Comcast start charging for data usage over one terabyte per month – a number that won’t be hard to reach for a household with gamers.
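A rough sketch of the cap math, using the figures in this post (16 hours per week; 7.2 GB per hour for one 4K stream; roughly 30 GB per hour for the multi-stream household, which is an estimate rather than a measurement):

```python
def monthly_gaming_gb(hours_per_week: float, gb_per_hour: float) -> float:
    """Rough monthly data usage; an average month is ~4.35 weeks (365.25/12/7)."""
    return hours_per_week * 4.35 * gb_per_hour

# A lone 'average' serious gamer streaming a single 4K game at 7.2 GB/hour:
solo = monthly_gaming_gb(16, 7.2)      # roughly 500 GB, half of a 1 TB cap
# The hypothetical two-teenager household running ~30 GB/hour combined:
household = monthly_gaming_gb(16, 30)  # roughly 2,100 GB, double the cap
```

Even one average 4K gamer eats about half of a terabyte cap before any other household usage; a multi-gamer household blows past it.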

I think this also opens up the possibility for ISPs to sell gamer-only connections. These connections could be routed straight to peering arrangements with Google or Microsoft to guarantee the fastest connection through their network and wouldn’t mix gaming streams with other household broadband streams. Many gamers will pay extra to have a speed edge.

This is just another example of how the world finds ways to use broadband when it’s available. We’ve obviously reached a time when online gaming can be supported. When OnLive tried this there were not enough households with fast enough connections, there weren’t fast enough regional data centers, and there wasn’t a peering network in place where ISPs connect directly to big data companies like Google and bypass the open Internet.

The gaming industry is going to keep demanding faster broadband and I doubt they’ll be satisfied until we have a holodeck in every gamer’s home. But numerous other industries are finding ways to use our increasing household broadband capacity and the overall demand keeps growing at a torrid pace.


Verizon’s Case for 5G, Part 4

Ronan Dunne, an EVP and President of Verizon Wireless recently made Verizon’s case for aggressively pursuing 5G. This last blog in the series looks at Verizon’s claim that they are going to use 5G to offer residential broadband. The company has tested the technology over the last year and announced plans to soon introduce the technology into a number of cities.

I’ve been reading everything I can about Verizon and I think I finally figured out what they are up to. They have been saying that within a few years they will make fixed 5G broadband available to millions of homes. One of the first cities where they will be building is Sacramento. It’s clear that in order to offer fast speeds, each 5G transmitter will have to be fiber-fed. To cover all neighborhoods in Sacramento would require building a lot of new fiber. Building new fiber is both expensive and time-consuming. And it’s still a head-scratcher how this might work in neighborhoods without poles, where other utilities are underground.

Last week I read an announcement by Lee Hicks of Verizon of a new initiative called One Fiber. Like many large telecoms, Verizon has numerous divisions that own fiber assets – the FiOS group, the wireless group and the old MCI CLEC business. The new policy will consolidate all of this fiber under a centralized system, making existing and new fiber available to every part of the business. It might be hard for people to believe, but within Verizon each of these groups managed its own fiber separately. Anybody who has ever worked with the big telcos understands what a colossal undertaking it will be to consolidate this.

Sharing existing fiber and new fiber builds among its various business units is the change that will unleash the potential for 5G deployment. My guess is that Verizon has eyed AT&T’s fiber strategy and is copying the best parts of it. AT&T has quietly been extending its fiber-to-the-premise (FTTP) network by extending fiber for short distances around the numerous existing fiber nodes in the AT&T network. A node on an AT&T fiber built to get to a cell tower or to a school is now also a candidate to function as a network node for FTTP. Using existing fiber wisely has allowed AT&T to claim they will soon be reaching over 12 million premises with fiber – without having to build a huge amount of new fiber.

Verizon’s One Fiber policy will enable them to emulate AT&T. Where AT&T has elected to build GPON fiber-to-the-premise, Verizon is going to try 5G wireless. They’ll deploy 5G cell sites at their existing fiber nodes where it makes financial sense. Verizon doesn’t have as extensive a fiber network as AT&T, and I’ve seen a few speculations that they might pass as many as 7 million premises with 5G within five years.

Verizon has been making claims that 5G can deliver gigabit speeds out to 3,000 feet. It might be able to do that in ideal conditions, but their technology is proprietary and nobody knows the real capabilities. One thing we know about all wireless technologies is that they are temperamental and vary a lot by local conditions. The whole industry is waiting to see the speeds and distances Verizon will really achieve with the first-generation gear.

The company certainly has some work in front of it to pursue this philosophy. Not all fiber is the same and their existing fiber network probably has fibers of many sizes, ages and conditions using a wide range of electronics. After inventorying and consolidating control over the fiber they will have to upgrade electronics and backbone networks to enable the kind of bandwidth needed for 5G.

The Verizon 5G network is likely to consist of a series of cell sites serving small neighborhood circles – the size of the circle depending upon topography. This means the Verizon networks will not likely be ubiquitous in big cities – they will reach out to whatever is in range of 5G cell sites placed on existing Verizon fiber. After the initial deployment, which is likely to take a number of years, the company will have to assess if building additional fiber makes economic sense. That determination will consider all of the Verizon departments and not just 5G.

I expect the company to follow the same philosophy they did when they built FiOS. They were disciplined and only built in places that met certain cost criteria. This resulted in a network that, even today, brings fiber to one block but not the one next door. FiOS fiber was largely built where Verizon could overlash fiber onto their telephone wires or drag fiber through existing conduits – I expect their 5G expansion to be just as disciplined.

The whole industry is dying to see what Verizon can really deliver with 5G in the wild. Even if it’s 100 Mbps broadband they will be a competitive alternative to the cable companies. If they can really deliver gigabit speeds to entire neighborhoods then they will have shaken the industry. But in the end, if they stick to the One Fiber model and only deploy 5G where it’s affordable, they will be bringing a broadband alternative to those who happen to live near their fiber nodes – and that will mean passing millions of homes, but not tens of millions.

The Growing Dislike of Big ISPs

The annual ratings from the American Consumer Satisfaction Index came out recently, and they show that consumer dislike for the big ISPs is increasing. This survey looks at how consumers feel about a wide range of businesses, and the ISPs have been ranked as some of the most disliked corporations for a number of years.

The survey asks numerous questions and creates a satisfaction scale from 1 to 100. The survey looks at several different categories of telecom companies and has separate rankings for cable TV providers, broadband providers and a new category for streaming video providers.

Among the big ISPs that offer cable TV service, the rank of every provider except AT&T U-Verse sank compared to last year. AT&T was the highest rated company in this group with a rating of 70. At the bottom was Mediacom with a rating of 55, down from 56 a year ago. The two giant cable companies both saw a drop in consumer satisfaction: Charter had a huge drop from 63 down to 58, and Comcast dropped from 58 to 57.

The rankings for how consumers feel about their broadband provider were similar. The only big ISP that didn’t drop was Comcast, which stayed at a ranking of 60 for two years running. Every other big ISP dropped. At the top of the list was Verizon FiOS, which dropped from 71 to 70. At the bottom was Mediacom again, which had a big drop from 58 to 53. Charter also had a big drop, from 63 to 58. Rounding out the bottom rankings were Frontier (54), Windstream (56) and CenturyLink (58).

Streaming services got significantly higher rankings. Topping this first-time list were Netflix, PlayStation Vue and Twitch with a ranking of 78. At the bottom were Sony Crackle (68), Showtime Anywhere (70) and DirecTV Now (70), all still significantly better than the traditional cable companies.

It must be frustrating for the big ISPs to see their customer satisfaction drop year after year. The rankings of the ISPs are lower than other unpopular industries like airlines, banks, insurance companies and even the Internal Revenue Service.

If there is any upside to the low customer satisfaction rankings it’s that it creates opportunities for competitors. It’s been conventional wisdom for years that a new competitor will get up to 30% of a market just for showing up with an alternative network – assuming they know how to sell and have decent customer service.

The survey doesn’t dig into the reasons for the sinking satisfaction, but it’s easy to speculate on some of them. People are certainly unhappy with traditional cable TV due to the ever-rising prices. High prices are the number one factor cited by consumers who are cutting the cord, and the dropping satisfaction suggests there is another growing pile of future cord cutters.

It’s a little harder to understand the dissatisfaction with broadband. At least in major metropolitan areas the ISPs have continued to unilaterally increase download speeds with only modest rate hikes. One would expect satisfaction with the broadband product to be higher, and my guess is that the low rankings have more to do with the pain involved in ever having to call these big companies. Compared to other businesses we all deal with, the interaction with the cable company / ISP is often the one we dread the most. The other likely cause for dissatisfaction is that ISPs often don’t deliver the speeds they promise. This varies by market, but we’ve seen cities where consumers only get a fraction of the speed they are paying for.

It’s much easier to understand unhappiness with ISPs immediately outside of big cities. Broadband in smaller towns is often still generations behind and is inadequate for what households expect today in terms of download speeds and latency. Anybody who reads this blog will understand the near-hatred for the ISPs in rural areas. The cable companies don’t come to rural America and the big telcos have neglected maintenance of the copper networks for decades. Rural broadband is either poor or nonexistent, with practically everybody hating the companies that won’t bring them broadband.


Operating on a Leased Network

One of the comments posted on a recent blog mentioned that CenturyLink recently had agreed to operate on somebody else’s fiber network to serve residential customers – the first time that one of the big telcos or cable companies had agreed to do so. One of the major reasons cited for lack of competition in the US is the unwillingness of the major ISPs to operate outside their own networks. This certainly sounded newsworthy and I looked into the example cited.

CenturyLink has agreed to use the fiber network provided by Lumiere Fiber, an affiliate company of Sterling Ranch, a new planned community outside of Denver. CenturyLink won the ability to serve the community through an RFP competition with Comcast, the cable company serving the area. As the winner, CenturyLink will be the exclusive ISP on the network – which only has a few homes now but has plans to grow to 12,000 residences.

So is this really newsworthy? I think the answer is both yes and no – but mostly no. It is true that CenturyLink will be using somebody else’s fiber network, and a large one at that, when the community is ultimately built. But there are a number of reasons why this is not as groundbreaking as it sounds.

First, this is not really unique. While this is a large new subdivision, in many ways it is similar to the thousands of arrangements that ISPs routinely make to serve large apartment complexes. In the vast majority of apartments the wiring is owned by the landlord and not the ISP. There are large apartment complexes around the country numbering in the thousands of units, and this opportunity is unique only in being larger than most MDUs.

CenturyLink is already building a lot of fiber to residential neighborhoods, with nearly 1 million new units passed this year – so this isn’t going to present any technological challenges. I am sure that the company will use the identical electronics and provisioning software it uses everywhere else.

This also is not going to stretch the operational systems of CenturyLink. The only real difference between this and other CenturyLink fiber is that the company doesn’t own the fiber. But they are going to take orders and connect new customers using their normal processes. They will dispatch technicians for trouble calls in the usual manner. And if Lumiere hires CenturyLink to do the fiber maintenance then they would even make fiber repairs in almost the same manner (this detail was not specified in the press releases).

There seem to be two reasons why the big ISPs don’t generally use networks owned by others. In the case of the big cable companies there seems to be a gentleman’s agreement to never cross those lines. I can’t find one example of a big cable company crossing the line to compete for residential customers.

But the hardest barrier to the big ISPs using other networks is the fact that their systems are largely incapable of making operational exceptions. They have created operational systems and processes that work for them, on their own networks, with their own employees. These processes are often highly decentralized and it takes employees scattered across the country to accomplish normal daily tasks like adding a new customer or answering a trouble call. It’s extremely difficult for a decentralized company to make exceptions for customers who are treated differently than everybody else – that always results in chaos.

An example of this is Verizon FiOS. When the company decided to build fiber they realized that they could not reshape their existing copper work processes and people to accommodate the new technology. They solved this by creating a totally new company and FiOS was new from top to bottom – from technology, to people, to processes.

The real headline I want to see is where one of the big ISPs gets on somebody else’s network in a competitive environment. For example, there are a number of open access fiber networks in Washington state that are significantly larger than the Sterling Ranch opportunity. There are numerous smaller open access networks around the country, and no big ISP has ever served residents on these networks. If the big companies would jump on competitive networks then a lot more of these networks would get built.

San Francisco is talking about building an open access fiber network, and if it’s built it will really challenge the big ISPs. If that network comes to fruition, will one of the other big cable companies decide to take on Comcast? That would be the big news we’ve always wanted to hear.

Big Telcos and Broadband

A recent article in Telecompetitor reports that analysts at Moffett Nathanson expect the big telcos to start making inroads into the near-monopoly for broadband currently enjoyed by the cable companies. The article focused specifically on AT&T, but some other big telcos like CenturyLink are also aggressively expanding fiber networks.

I would have to assume that the analysts got the following goals directly from AT&T because I can’t find any other references to these specific goals. But each of these is in line with statements made by AT&T executives over the last year. According to the article, AT&T broadband goals over the next few years are as follows:

  • Offer broadband speeds below 50 Mbps to 30 million passings using DSL;
  • Offer broadband speeds between 50 – 100 Mbps to 20 million passings using paired copper VDSL;
  • Offer ‘near gigabit’ speeds to 10 million passings using 5G wireless;
  • Offer gigabit speeds using FTTH technology to 14 million residential passings and 8 million business passings.

The real news here is in the last two bullet points. AT&T accepted the goal from the FCC for passing 12.5 million customers with FTTH from the merger with DirecTV. It’s big news if they intend to extend that to 22 million passings. And the goal of using millimeter wave radios to reach another 10 million potential customers is something new.

If AT&T meets these goals they will be bringing serious competition to the cable companies. AT&T and the other telcos have been bleeding DSL customers for over a decade and handed the cable companies a near-monopoly on fast broadband in most urban and suburban markets. According to Moffett Nathanson the telco expansion will bring near-gigabit speeds on telco networks to 32% of the country.

It’s important to understand where the new AT&T broadband is being built. The majority of the new coverage is in three market niches – apartment buildings, new greenfield housing developments and business districts. AT&T’s expansion has largely focused on these specific market niches and is likely to continue to do so. AT&T is not proposing to duplicate what Verizon did with its FiOS network and bring broadband to older single family home neighborhoods. They are instead focusing on buildouts where the cost of construction per customer is the lowest – the ultimate cherry-picking network.

This means that the AT&T coverage will bring the opportunity for gigabit broadband to a much larger footprint, but that’s not always going to bring customer choice. In the MDU market many landlords still allow only one ISP into their apartment complexes. As telcos like AT&T compete with the cable companies for this market the broadband speeds in apartments and condos will get much faster, but many customers will still only have the option to buy from whatever ISP the landlord has allowed.

I have to admit that this market shift to bring broadband to MDUs caught me a bit by surprise. Many years ago Verizon showed that there is a successful business plan for building fiber to older residential neighborhoods. In the northeast Verizon still carries significant market share in its FiOS neighborhoods, and customers consistently rate them as having better customer service than the cable companies. Other telcos like CenturyLink are copying the Verizon model and are building swaths of fiber in residential neighborhoods.

The traditional wisdom was that it is too costly to bring fast broadband to apartments. A decade ago bringing fiber to an apartment meant rewiring the whole building with fiber – and for many apartments that is prohibitively expensive. But there have been technology advances that have made this more feasible. For example, much of the ‘near-gigabit’ speeds can be achieved by using G.Fast technology over existing coaxial or telco cable in older apartments. There have also been big improvements for indoor fiber deployments that include small flexible fibers and techniques for installing fiber inconspicuously in hallways. Many buildings that seemed too costly to serve years ago now make economic sense. Finally, the potential to deliver backhaul to an MDU using millimeter wave radios is going to eliminate the need to build as much fiber.

The biggest unknown is how successful any of these big companies will be with 5G. As I’ve been writing lately, there are still a lot of barriers that might make it difficult for AT&T to use the wireless technology to cover 10 million passings. We’re going to have to wait to see some real deployments over the next few years to see if the technology works as promised and if the cost of deployment is as low as anticipated. But the one thing these analysts have gotten right is that the big telcos are finally fighting back against the cable monopolies they helped to create by sticking with DSL too long. It’s going to be interesting to see how well they do in winning back the customers they lost over the last two decades.

Is 5G Really a Fiber Replacement?

I recently saw a short interview in FierceWireless with Balan Nair, CTO of Liberty Global. In case you haven’t heard of the company, they are the biggest cable company in the world with over 28 million customers.

One of the things he discussed was the practical widespread implementation of 5G gigabit technology. He voiced the same thing I have been thinking for the last year about the economics of deploying 5G. He was quoted as saying, “5G will be a ‘game-changer’ in its superior ability to transfer data, but the technology will not replace fixed-network broadband services anytime soon. The economics just aren’t there. You’re talking about buying hundreds of towers and all of that spectrum. And on the residential end, putting a device outside the window and wiring it back into the home. It’s a question of business model and if you plan on making any money. The economics benefit fixed.”

The big telcos are making a big deal out of 5G, mostly I think to appear cutting-edge to their investors. And I have no doubt that in certain places like dense urban downtowns that 5G might be the best way to speed up gigabit broadband deployment. But I look at what’s involved in deploying the technology anywhere else and I have a hard time seeing the economic case for using 5G to bring fast broadband to the masses.

5G will definitely make an impact in urban downtowns. You might assume that cities already have a great fiber infrastructure, but this often isn’t the case. Look at Verizon’s FiOS deployment strategy in the past – they deployed fiber where the construction was the most cost effective, and that meant suburban areas where they had existing pole lines or conduit. Verizon largely avoided much of the downtowns of eastern cities because the cost per mile of fiber construction was too expensive.

Now, 5G can be deployed from the top of high-rises to reach the many downtown buildings that never got fiber. New York City recently sued Verizon because the company reneged on its promise to build fiber everywhere, and there are still 1 million living units in the city that never got fiber broadband. Verizon, or somebody else, is going to be able to use 5G in densely populated cities to bring faster broadband, and as Nair said, this might be a game changer.

But as soon as you get out of downtowns and high-rises the math no longer favors 5G. There are three components of a 5G network that are not going to be cheap in suburbia. First, 5G needs fiber. You might be able to use a little wireless backhaul in a 5G network, but a significant portion of the network must be fiber fed. And in most of the country that fiber is not in place. Deloitte recently estimated that the cost for just the fiber to bring 5G everywhere is $130 billion. There is nobody rushing to make that investment.

5G then needs somewhere to place the transmitters. This is more easily achieved in a downtown where there are many tall rooftops and existing towers. But the short delivery distances for millimeter wave frequencies mean that transmitters need to be relatively close to the end-user. In suburban areas that’s going to mean somehow building a lot of new towers or else placing smaller transmitters on existing poles. We know suburbia hates tall towers and it’s always a struggle to build new ones. And the issues associated with getting access to suburban poles are well documented. An ISP needs to affordably get onto poles and also get fiber to those poles – two expensive and time-consuming challenges.

And then there is the economics of the electronics. Because millimeter wave spectrum is easily disrupted by foliage or any impediments it means that there won’t be too many homes served from any one pole-mounted transmitter. But the 5G revenue stream still has to cover both ends of the radios as well as wiring into the home.
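That per-transmitter economics argument can be sketched with back-of-the-envelope numbers. In the toy calculation below, every figure – homes in range of a transmitter, take rate, revenue, equipment costs – is a hypothetical assumption chosen only for illustration, and it leaves out the fiber backhaul cost entirely:

```python
# Rough payback sketch for a single pole-mounted millimeter wave transmitter.
# Every number here is a hypothetical assumption, not a vendor or market figure.

homes_in_range = 12        # homes with a clear line of sight to the pole
take_rate = 0.30           # share of those homes that subscribe
arpu = 70.0                # monthly revenue per subscriber ($)

transmitter_cost = 4000.0  # pole-mounted radio plus installation ($)
cpe_cost = 300.0           # receiver and in-home wiring per subscriber ($)

subscribers = homes_in_range * take_rate
capex = transmitter_cost + subscribers * cpe_cost
monthly_revenue = subscribers * arpu

payback_months = capex / monthly_revenue
print(f"{subscribers:.1f} subscribers, ${capex:,.0f} capex, "
      f"payback of roughly {payback_months:.0f} months")
```

Even before counting the fiber feed – the most expensive piece – a transmitter that can only reach a dozen homes carries all of its cost on a handful of subscribers.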

I build a lot of landline business plans and I can’t see this making any economic sense for widespread deployment. In many cases this 5G network might be more expensive and slower to deploy than an all-fiber network.

I instead envision companies using 5G technology to cherry-pick. There will be plenty of places where existing fiber and poles can be used to serve suburban apartment complexes or business districts. I can see strategic deployment in those areas, with the technology used the same way that Verizon deployed fiber – 5G will be deployed only where it makes sense. But as with FiOS, there are going to be huge areas with no 5G deployment, even in relatively dense suburbia. And the business case for rural America is even bleaker. 5G will find a market niche and will be one more technology tool for bringing faster broadband – where it makes economic sense.

5G Needs Fiber

I am finally starting to see an acknowledgement by the cellular industry that 5G implementation is going to require fiber – a lot of fiber. For the last year or so the industry press – prompted by misleading press releases from the wireless companies – made it sound like wireless was our future and that there would soon not be any need for building more wires.

As always, when there is talk about 5G we need to be clear about which 5G we mean, because there are two distinct 5G technologies on the horizon. One is high-speed wireless local loops delivered directly to homes and businesses as a replacement for a wired broadband connection. The other is 5G cellular providing bandwidth to our cellphones.

It’s interesting to see the term 5G being used for a wireless microwave connection to a home or business. For the past twenty years this same technology has been referred to as wireless local loop, but in the broadband world the term 5G has marketing cachet. Interestingly, a lot of these high-speed data connections won’t even be using the 5G standards and could just as easily be transmitting the signals using Ethernet or some other transmission protocol. But the marketing folks have declared that everything that uses the millimeter wave spectrum will be deemed 5G, and so it shall be.

These fixed broadband connections are going to require a lot of fiber close to customers. The current millimeter wave radios are capable of delivering speeds up to a gigabit on a point-to-point microwave basis. This means that every 5G millimeter wave transmitter needs to be fiber fed if there is any desire to offer gigabit-like speeds at the customer end. You can’t use a 1-gigabit wireless backhaul to feed multiple gigabit transmitters, and thus fiber is the only way to get the desired speeds to the end locations.
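The backhaul constraint in that last sentence is simple division. A toy example, with assumed round numbers:

```python
# Why a single 1 Gbps wireless backhaul link can't feed several gigabit
# transmitters: the shared feed becomes the ceiling for every radio behind it.
# The counts here are illustrative assumptions, not a real network design.

backhaul_gbps = 1.0
transmitters = 4           # gigabit radios sharing that one backhaul link

# Worst case, with all radios busy at once, each gets an equal share.
per_radio_gbps = backhaul_gbps / transmitters
print(f"Each 'gigabit' radio can sustain only {per_radio_gbps:.2f} Gbps")

# Delivering a full gigabit from every radio simultaneously requires a
# feed at least this large, which in practice means fiber.
needed_gbps = transmitters * 1.0
print(f"Required backhaul: {needed_gbps:.0f} Gbps")
```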

The amount of fiber needed for this application is going to depend upon the specific way the network is being deployed. Right now the predominant early use for this technology is to use the millimeter wave radios to serve an entire apartment building. That means putting one receiver on the apartment roof and somehow distributing the signal through the building. This kind of configuration requires fiber only to those tall towers or rooftops used to beam a signal to nearby apartment buildings. Most urban areas already have the fiber to tall structures to support this kind of network.

But for the millimeter technology to bring gigabit speeds everywhere it is going to mean bringing fiber much closer to the customer. For example, the original Starry business plan in Boston had customers receiving the wireless signal through a window, and that means having numerous transmitters around a neighborhood so that a given apartment or business can see one of them. This kind of network configuration will require more fiber than the rooftop-only network.

But Google, AT&T and Verizon are all talking about using millimeter wave radios to bring broadband directly into homes. That kind of network is going to require even more fiber, since a transmitter needs a clear shot, at near street level, to a given home. I look around my own downtown neighborhood and can see that one or two transmitters would reach only a fraction of homes and that it would take a pole-mounted transmitter in front of homes to do what these companies are promising. And those transmitters on poles are going to need to be fiber-fed if they want to deliver gigabit broadband.

Verizon seems to understand this and they have recently talked about needing a ‘fiber-rich’ environment to deploy 5G. The company has committed to building a lot of fiber to support this coming business plan.

But, as always, there is a flip side to this. These companies are only going to deploy these fast wireless loops in neighborhoods that already have fiber or in places where it makes economic sense to build it. And this is going to mean cherry-picking – the same as the big ISPs do today. They are not going to build the fiber in neighborhoods where they don’t foresee enough demand for the wireless broadband. They won’t build in neighborhoods where the fiber construction costs are too high. One only has to look at the hodgepodge Verizon FiOS fiber network to see what this is going to look like. There will be homes and businesses offered the new fast wireless loops while a block or two away there will be no use of the technology. Verizon has already created fiber haves and have-nots due to the way they built FiOS and 5G wireless loops are going to follow the same pattern.

I think the big ISPs have convinced politicians that they will be solving all future broadband problems with 5G, just as they made similar promises in the past with other broadband technologies. But let’s face it – money talks and these ISPs are only going to deploy 5G / fiber networks where they can make their desired returns.

And that means no 5G in poorer neighborhoods. It might mean little or limited 5G in neighborhoods with terrain or other similar issues. And it certainly means no 5G in rural America because the cost to build a 5G network is basically the same as building a landline fiber network – it’s not going to happen, at least not by the big ISPs.

Cellular Networks and Fiber

We’ve known for a while that the future 5G that the cellular companies are promising is going to need a lot of fiber. Recently Verizon CEO Lowell McAdam confirmed this when he said that the company will be building dense fiber networks for this purpose. The company has ordered fiber cables as large as 1,700 strands for its upcoming build in Boston in order to support the future fiber and wireless network there. That’s a huge contrast from Verizon’s initial FiOS builds, which used mostly 6-strand fibers across much of the Northeast.

McAdam believes that the future of urban broadband will be wireless and that Verizon intends to build the fiber infrastructure needed to support that future. Of course, with that much fiber in the environment the company will also be able to supply fiber-to-the-premise to those that need the largest amounts of bandwidth.

Boston is an interesting test case for Verizon. They announced in 2015 that they would be expanding their FiOS network to bring fiber to the city – one of many urban areas that they skipped during their first deployment of fiber-to-the-premise. The company also has engaged with the City government in Boston to develop a smart city – meaning using broadband to enhance the livability of the city and to improve the way the government delivers services to constituents. That effort means building fiber to control traffic systems, police surveillance systems and other similar uses.

And now it’s obvious that the company has decided that building for wireless deployment in Boston is part of that vision. It’s clear that Verizon and AT&T are both hoping for a world where most devices are wireless and that the wireless connections use their networks. They both picture a world where their wireless is not just used for cellphones like today, but will also be used to act as the last mile broadband connection for homes, for connected cars, and for the billions of devices used for the Internet of Things.

With the kind of money Verizon is talking about spending in Boston this might just become the test case for a connected urban area that is both fiber rich and wireless rich. To the extent that they can do it with today’s technology it sounds like Verizon is hoping to serve homes in the City with wireless connections of some sort.

I’ve discussed several times how millimeter wave radios have become cheap enough to be a viable alternative for bringing broadband to urban apartment buildings. That’s a business plan that is also being pursued by companies like Google. But I still am not aware of hardware that can reasonably be used with this same technology to serve large numbers of single family homes. At this point the electronics are still too expensive and there are other technological issues to overcome (such as having fiber deep in neighborhoods for backhaul).

So it will be interesting to watch how Verizon handles their promise to bring fiber to the homes in Boston. Will they continue with the promised FTTP deployment or will they wait to see if there is a wireless alternative on the horizon?

It’s also worth noting that Verizon is tackling this because of the density of Boston. The city has over 3,000 housing units per square mile, making it, and many other urban centers, a great place to consider wireless alternatives instead of fiber. But I have to contrast this with rural America. I’m working with several rural counties right now in Minnesota that have housing densities of between 10 and 15 homes per square mile.
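The cost implication of that density gap can be sketched directly. The calculation below uses the housing densities mentioned above, but the road-mile and construction-cost figures are hypothetical assumptions chosen only to show the shape of the math:

```python
# Back-of-the-envelope cost per home passed at different housing densities.
# The $/mile and road-mile figures are illustrative assumptions only.

cost_per_mile = 30_000.0   # assumed blended fiber construction cost ($/mile)

def cost_per_passing(homes_per_sq_mile, road_miles_per_sq_mile):
    """Cost to pass one home, assuming fiber follows the road grid."""
    homes_per_road_mile = homes_per_sq_mile / road_miles_per_sq_mile
    return cost_per_mile / homes_per_road_mile

urban = cost_per_passing(3000, road_miles_per_sq_mile=20)  # Boston-like
rural = cost_per_passing(12, road_miles_per_sq_mile=2)     # rural Minnesota

print(f"Urban: about ${urban:,.0f} per home passed")
print(f"Rural: about ${rural:,.0f} per home passed")
```

Under these assumptions the rural cost per home passed comes out 25 times higher than the urban one, which is the arithmetic behind the contrast drawn here.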

This contrast alone shows why I don’t think rural areas are ever going to see much of the advantage of 5G. Even though it’s expensive to build fiber in a place like Boston, the potential payback is commensurate with the cost of the construction. I’ve always thought that Verizon made a bad strategic decision years ago when it halted FiOS construction before finishing building in the metropolitan areas on the east coast – Verizon has fared well in its competition with Comcast and others in the markets where it did build.

But there is no compelling argument for the wireless companies or anybody else to build fiber in rural areas. The cost per subscriber is high and the paybacks on investment are painfully long. If somebody is going to invest in rural fiber they might as well use it to connect directly to customers rather than spend the money on fiber plus a wireless network layered on top of it.

We are going to continue to see headlines about how wireless is the future, and for some places like Boston it might be. Past experience has shown us that wireless technology often works a lot differently in the field than in the lab, so we need to see whether the wireless technologies being considered really work as promised. But even if they do, those same technologies are going to have no relevance to rural America. If anything, the explosion of urban wireless might further highlight the stark differences between urban and rural America.