Converged Networks

I’ve been reading and thinking about converged networks – networks built to serve multiple market segments. The best example of this is the largest cable companies, which are using their residential last-mile broadband networks to support the cellular business.

The cellular business is a perfect fit for a cable company. They already have fiber deep into every neighborhood, which makes it easy to strategically locate small cell sites without building additional fiber. The big cable companies have also put a lot of effort into WiFi, which can save money by offloading a lot of cellular traffic from customer phones onto the landline network.

Having the ability to leverage the existing network also gives cable companies a lot of flexibility. They can continue to buy wholesale cellular minutes in areas where the cell traffic volume is light and use their own cellular network where customer usage is high. This is a cost advantage over the cellular companies that must provide their networks everywhere.

It’s an interesting dynamic. I think the cable companies got into the cellular business as a way to increase customer stickiness – meaning making it harder for customers to leave them. The cable companies will only sell cellular to customers who buy their broadband, meaning that a customer who wants a new ISP must also change to a new cellular provider. But now that the cable companies have gathered a mass of customers, I have to think they are looking at cellular as a big profit opportunity.

To a lesser degree, large cellular companies are building a converged network when they are using excess capacity on the cellular network to provide FWA home broadband. This has obviously been a winning strategy in the last year when Verizon and T-Mobile were the only two ISPs with big growth.

But as I look at the long-term outlook for FWA, this doesn’t seem like as strong a converged strategy as what the cable companies are doing. To me, the difference is in the capability of the two networks. A cable company’s last-mile network can absorb cellular backhaul from customers with barely a blip in network performance. But the same can’t be said for cell sites. It’s far easier for cell sites to reach capacity, and cellular companies have made it clear that they will prioritize cellular data over FWA broadband performance. Maybe cellular carriers can solve this problem by eventually fully implementing the 5G specifications. But for now, cable company networks can handle convergence much more easily than cellular networks.

I have been wondering why fiber providers have not made the same push for convergence. The one exception might be Verizon, which has said in recent years that it now considers all arms of its business when building fiber assets. In the past, the company treated its fiber Fios business, the cellular business, and the CLEC business as arms-length businesses. From what I can tell, Verizon is still not as far along the convergence path as the cable companies – but there might be a lot more of that going on behind the scenes that we don’t know about.

I’m surprised that nobody has tried to integrate the cellular business for small fiber providers. There is a pretty decent list of fiber providers today that have between 100,000 and 1 million customers – and most of them are growing rapidly. It would be a major challenge for a single ISP with a few hundred thousand customers to launch the same kind of MVNO cellular operation that has been done by Comcast and Charter. But it seems like there ought to be a business plan for fiber ISPs to collectively tackle the cellular business. A last-mile fiber company can bring all of the same benefits to an integrated cellular business as the cable companies and is only lacking their economy of scale.

I can think of a few reasons nobody has made this work. Taking time to consider cellular is a major distraction for a fiber ISP that is building fiber passings as quickly as possible. There is also the challenge of getting the many mid-sized fiber providers to trust each other enough to be partners. But at some point in the future, it’s hard to think that somebody won’t figure this out.

If fiber ISPs enter the cellular business, broadband becomes a truly converged market where cable companies, cellular companies, and independent fiber providers compete with the same suite of products. I know that’s what the public wants because it breaks some of the monopolies and increases choice. My crystal ball says we will get there – I’m just fuzzy about how long it will take.

Only Twenty Years

I’ve written several blogs that make the argument that we should only award broadband grants based on future-looking broadband demand. I think it is bad policy to provide federal grant funding for any technology that delivers speeds slower than what is already available to most broadband customers in the country.

The current BEAD grants use a definition of 100/20 Mbps to identify households that aren’t considered to have broadband today. But inexplicably, the grants then allow grant winners to build technologies that deliver that same 100/20 Mbps speed. The policymakers who designed the grants would allow federal funding to go to a new network that, by definition, sits at the nexus between served and unserved today. That is a bad policy for so many reasons that I don’t even know where to begin lambasting it.

One way to demonstrate the shortsightedness of that decision is a history lesson. Almost everybody in the industry tosses out a statistic that a fiber network built today should be good for at least thirty years. I think that number is incredibly low and that modern fiber ought to easily last for twice that time. But for the sake of argument, let’s accept a thirty-year life of fiber.

Just over twenty years ago, I lived inside the D.C. Beltway, and I was able to buy 1 Mbps DSL from Verizon or a cable modem from Comcast. I remember a lot of discussion at the time that there wouldn’t be a need for upgrades in broadband speeds for a while. The 1 Mbps speed from the telco and cable company was an 18-fold increase in speed over dial-up, and that seemed to provide a future-proof cushion against homes needing more broadband. That conclusion was quickly shattered when AOL and other online content providers took advantage of the faster broadband speeds to flood the Internet with picture files that used all of the speed. It took only a few years for 1 Mbps to feel slow.

By 2004, I changed to a 6 Mbps download offering from Comcast – they never mentioned the upload speed. This was a great upgrade over the 1 Mbps DSL. Verizon made a huge leap forward in 2004 and introduced Verizon FiOS on fiber. That product didn’t make it to my neighborhood until 2006, at which time I bought a 30 Mbps symmetrical connection on fiber. In 2006 I was buying broadband that was thirty times faster than my DSL from 2000. Over time, the two ISPs got into a speed battle. Comcast had numerous upgrades that increased speeds to 12 Mbps, then 30 Mbps, 60 Mbps, 100 Mbps, 200 Mbps, and most recently 1.2 Gbps. Verizon always stayed a little ahead of cable download speeds and continued to offer much faster upload speeds.

The explosion of broadband demand after the introduction of new technology should be a lesson for us. An 18-fold speed increase from dial-up to DSL seemed like a huge technology leap, but public demand for faster broadband quickly swamped that technology upgrade, and 1 Mbps DSL felt obsolete almost as soon as it was deployed. Every time there has been a technology upgrade, the public has found a way to use the greater capacity.

In 2010, Google rocked the Internet world by announcing gigabit speeds. That was a 33-fold increase over the 30 Mbps download speeds offered at the time by the cable companies. The cable companies and telcos said at the time that nobody needed speeds that fast and that it was a marketing gimmick (but they all went furiously to work to match the faster fiber speeds).

I know homes and businesses today that are using most of the gigabit capacity. That is still a relatively small percentage of homes, but the number is growing. Over twenty years, the broadband use by the average home has skyrocketed, and the average U.S. home now uses almost 600 gigabytes of broadband per month – a number that would have been unthinkable in the early 2000s.
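For scale, the 600-gigabyte figure is easier to appreciate when converted into a sustained data rate. A quick back-of-the-envelope calculation (assuming a 30-day month; the numbers are illustrative, not measured) looks like this:

```python
# Convert monthly household usage into an average sustained data rate.
monthly_gb = 600                 # approximate average US household usage
bits = monthly_gb * 1e9 * 8      # gigabytes -> bits
seconds = 30 * 24 * 3600         # assume a 30-day month
avg_mbps = bits / seconds / 1e6  # average rate in Mbps

print(round(avg_mbps, 2))        # ~1.85 Mbps sustained around the clock
```

Of course, usage is bursty, so homes need far more peak capacity than the sustained average suggests – which is part of why demand for faster speeds keeps growing alongside total usage.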

I look at this history, and I marvel that anybody would think that it’s wise to use federal funds to build a 100/20 Mbps network today. Already, something like 80% of homes in the country can buy a gigabit broadband product. The latest OpenVault report says that over a quarter of homes are already subscribing to gigabit speeds. Why would we contemplate using federal grants to build a network with a tenth of the download capacity already available to most American homes?

The answer is obvious. Choosing the technologies that are eligible for grant funding is a political decision, not a technical or economic one. There are vocal constituencies that want some of the federal grant money, and they have obviously convinced the folks who wrote the grant rules that they should have that chance. The biggest constituency lobbying for 100/20 Mbps was the cable companies, which feared that grants could be used to compete against their slow upload speeds. But just as cable companies responded to Verizon FiOS and Google Fiber, the cable companies are now planning for a huge leap upward in upload speeds. WISPs and Starlink also lobbied for the 100/20 Mbps grant threshold, although most WISPs seeking grant funding are now also claiming much faster speed capabilities.

If we learn anything from looking back twenty years, it’s that broadband demand will continue to grow, and that homes in twenty years will use an immensely greater amount of broadband than today. I can only groan and moan that the federal rules allow grants to be awarded to technologies that can deliver only 100/20 Mbps. But I hope that state broadband grant offices will ignore that measly, obsolete, politically absurd option and only award grant funding to networks that might still be serving folks in twenty years.

Who Has the Fastest Broadband?

Ookla recently released a report for the second quarter that summarizes its findings on speed tests conducted throughout the US. The report was generated using the results from 85.1 million speed tests taken during the quarter at the speed test site operated by Ookla. This kind of summary is always interesting, but I’m not sure how useful the results are.

The report looks at both wireless and landline speeds. Ookla says that AT&T was the fastest of the four major wireless carriers for the quarter, with a ‘speed score’ of 41.23, while Verizon was the slowest at 30.77. The speed score is a metric unique to Ookla that weights the download speed at 90% and the upload speed at 10%. The reported speeds also toss out the slowest and fastest tests and concentrate on the median speed.
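Ookla’s full methodology isn’t spelled out beyond that description, but the weighted blend itself is simple to sketch (the input medians below are hypothetical, not Ookla’s actual figures):

```python
def speed_score(download_mbps: float, upload_mbps: float) -> float:
    """Blend median speeds as the report describes: 90% of the weight
    on download speed and 10% on upload speed."""
    return 0.9 * download_mbps + 0.1 * upload_mbps

# Hypothetical carrier medians of 43 Mbps down / 25 Mbps up:
print(round(speed_score(43.0, 25.0), 2))  # 41.2
```

Note how heavily the download side dominates: a carrier could double its upload speed and barely move its score.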

T-Mobile had the best average latency at 31 milliseconds, with Sprint the slowest at 39 milliseconds. The most interesting wireless statistic in the report is the ‘consistency score’. This is the percentage of the traffic from each wireless carrier that was delivered at speeds of at least 5 Mbps download and 1 Mbps upload. AT&T had the highest consistency score at 79.7%, with Sprint at the bottom at 66.1%. This score implies that between 20% and 34% of cellular data connections were at speeds under 5/1 Mbps.
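The consistency score is just the share of tests clearing both thresholds, which a few lines of code make concrete (the sample test results are made up for illustration):

```python
def consistency_score(tests, down_min=5.0, up_min=1.0):
    """Fraction of (download, upload) speed tests, in Mbps, that meet
    both minimums -- 5/1 Mbps for the wireless scores in the report."""
    passing = sum(1 for down, up in tests if down >= down_min and up >= up_min)
    return passing / len(tests)

# Four made-up test results: two clear the 5/1 Mbps bar, two do not.
sample = [(12.0, 2.0), (4.0, 1.5), (30.0, 5.0), (6.0, 0.5)]
print(consistency_score(sample))  # 0.5
```

A test can fail on either side of the bar, which is why a network with fast median downloads can still post a mediocre consistency score if uploads are weak.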

The landline speed results used the same criteria for summarizing the results of the many speed tests. For example, Ookla used the ‘speed score’ that uses 90% of the download speed and 10% of the upload speed – and the results also throw out the slowest and fastest speeds. Verizon had the highest speed score at 117.1, with Comcast and Cox being the only two other ISPs with speed scores over 100. Charter achieved a speed score of 95, AT&T at 82.8, and CenturyLink at 36.1. The AT&T and CenturyLink scores are lower due to customers still using DSL.

Verizon had the best latency at 9 milliseconds, which is a good indication that a large percentage of their customers are using Verizon FiOS on fiber. AT&T and Sprint had the highest latency of the big ISPs at 18 and 22 milliseconds, indicating that the two companies still have a lot of customers on DSL.

The consistency score is more of a headscratcher for the landline ISPs. For example, Spectrum and Comcast had the highest consistency ratings at over 84%, meaning that roughly 16% of the speed tests for these companies didn’t meet the 25/3 Mbps landline target speed. However, other than perhaps a few grandfathered customers that are still being sold slow products, these companies don’t sell products that should fail that test.

This raises the question of what speed test results really mean, since there are factors that likely influence the results. For example, I would guess that a lot of customers take a speed test when they are experiencing a problem. I know that’s what prompts me to take speed tests.

The other issue that might make a Comcast or Charter customer test at slower than 100 Mbps download is the WiFi connection. It’s hard to know how many people get slow readings due to poor WiFi. I understand this issue first-hand. I have a narrow, long, 3-story house. The broadband enters on the first floor at the front of the house and my office is at the top of the rear of the house, with some thick hundred-year-old walls in between. Even with an array of WiFi repeaters, the speed in my office varies between 35 and 45 Mbps download – about one-third of the speed delivered at the router.

How can Ookla understand the context of a given speed test result? Maybe it doesn’t matter, since all of the ISPs have customers with WiFi issues and perhaps it averages out. I would think situations like mine are what drive the consistency score. These kinds of questions make it hard to draw meaningful conclusions from the Ookla results in the report.

Ookla also uses median broadband speeds to rank the 100 cities with the fastest broadband, along with the states. As would be expected, the northeastern states with a lot of Verizon FiOS, like New Jersey, Massachusetts, and Rhode Island, top the list as having the fastest average broadband speeds. More interesting to me is the bottom of the list. Ookla says that the states with the slowest median broadband are Wyoming, Montana, Idaho, and Alaska. Several other entities that rank state broadband usually put West Virginia and New Mexico at the bottom, followed by Idaho and Arkansas. Those other rankings include an assessment of the many homes in some states with little or no broadband option, while a ranking based on speed tests only counts homes that have broadband.

Overall, this is an interesting way to look at broadband. The six states with median download speeds under 50 Mbps certainly have a different broadband environment than the eleven states with median speeds over 90 Mbps. But there are places in the highest-ranked states with no broadband options and places in the states with the poorest broadband that are served by fiber.

AT&T’s Fiber Play

AT&T has quietly become a major player in the fiber-to-the-home market. It’s reported that AT&T added 1.1 million customers on fiber in 2019, bringing its base of homes on fiber to 3.1 million. This puts the company in clear second place for residential fiber behind Verizon’s FiOS deployment.

AT&T got prompted to build fiber due to an agreement with the government as part of the approval for the merger with DirecTV. The company agreed in the summer of 2015 to build fiber to pass 12.5 million homes within four years.

AT&T has been in the fiber business for many years. Like all of the big telcos, AT&T built fiber to large businesses over the last couple of decades. AT&T got dragged into the FTTH business in a few markets when it reacted to the Google Fiber overbuild in markets like Atlanta and the North Carolina research triangle. AT&T has been selectively bringing fiber to large apartment complexes for much of the last decade.

In the first few years of the mandated buildout, AT&T seemed to be only halfheartedly going along with the mandated expansion. They claimed to have passed millions of homes with fiber builds, but there was no press or chatter from customers having received AT&T fiber service. For the first few years after the mandate, AT&T was meeting its mandate by counting passed apartment complexes – many of which were likely already within range of AT&T fiber.

But it looks like everything changed at AT&T a few years ago and fiber suddenly appeared in pockets of the many cities where AT&T is the incumbent telephone provider. There were several changes in the industry that likely prompted this turnaround at AT&T. First, they won the FirstNet contract to provide modern connectivity to all first responders nationwide. In many cases this requires building new fiber – financed by the federal government. Second, AT&T needs to connect to huge numbers of small cell sites – something that was not predicted in 2015.

It seems that AT&T management looked at those two opportunities and decided that they could best capitalize on the new fiber by adding residential and small businesses to the fiber network. That was a big change at AT&T. They had long refused to follow in the wake of Verizon and their FiOS network. They instead took the path of beefing up urban DSL with their U-verse business where they paired two copper wires to offer DSL speeds as fast as 48 Mbps. I think the company was likely surprised about how quickly that offering became obsolete as cable companies now routinely offer two to four times that speed.

For the past several years AT&T has been losing DSL customers in droves to the cable companies. For example, in the year ending in the third quarter of 2019, AT&T had lost a net of 123,000 broadband customers, even with the big gains during that period for fiber. The company will likely continue to lose DSL customers as copper networks age and the speeds fall further behind cable company offerings. AT&T has been petitioning the FCC to tear down copper wires, particularly in rural areas, further killing the DSL business.

AT&T’s new strategy for building fiber is interesting. They are only building FTTH in small pockets where they already have fiber. That fiber might be there to serve a large business, a school, or a cell tower. AT&T extends fiber for two to four blocks around these fiber hubs, only where construction costs look reasonable. AT&T has a big cost advantage in areas where the company already has copper wires on poles – the new fiber is overlashed to the existing copper wires.

Late last year, AT&T announced they had met their government mandate and were taking a pause in building new fiber in neighborhoods. The company is instead focused on selling where it has fiber and has a goal of a 50% market share in those areas. That’s an aggressive goal considering that Comcast and Charter are likely their most common competitors.

AT&T fiber must be considered by anybody building a new fiber network. If AT&T is already in the market, they will likely have sewn up small pockets of the community. It also wouldn’t be hard for AT&T to expand these small pockets to become larger, making them a real competitor to a fiber overbuilder. This will be an odd kind of competition where AT&T is on some blocks and not others – almost making an overbuilder have two marketing plans, for the neighborhoods with and without fiber.

Counting Gigabit Households

I ran across a website called the Gigabit Monitor that is tracking the population worldwide that has access to gigabit broadband. The website is sponsored by VIAVI Solutions, a manufacturer of network test equipment.

The website claims that in the US over 68.5 million people have access to gigabit broadband, or 21% of the population. That number gets sketchy when you look at the details. The claimed 68.5 million people includes 40.3 million served by fiber, 27.2 million served by cable company HFC networks, 822,000 served by cellular and 233,000 served by WiFi.

Each of those numbers is highly suspect. For example, the fiber numbers don’t include Verizon FiOS or the FiOS properties sold to Frontier. Technically that’s correct since most FiOS customers can buy maximum broadband speeds in the range of 800-900 Mbps. But there can’t be 40 million other people outside of FiOS who can buy gigabit broadband from fiber providers. I’m also puzzled by the cellular and WiFi categories and can’t imagine there is anybody who can buy gigabit products of either type.

VIAVI makes similar odd claims for the rest of the world. For example, they say that China has 61.5 million people that can get gigabit service. But that number includes 12.3 million on cellular and 6.2 million on WiFi.

Finally, the website lists the carriers that it believes offer gigabit speeds. I have numerous clients that own FTTH networks that are not listed – I stopped counting after finding 15 of my clients missing from the list.

It’s clear this website is flawed and doesn’t accurately count gigabit-capable people. However, it raises the question of how to count the number of people who have access to gigabit service. Unfortunately, the only way to do that today is by accepting claims by ISPs. We’ve already seen with the FCC broadband maps how unreliable the ISPs are when reporting broadband capabilities.

As I think about each broadband technology there are challenges in defining gigabit-capable customers. The Verizon situation is a great example. It’s not a gigabit product if an ISP caps broadband speeds at something lower than a gigabit – even if the technology can support a gigabit.

There are challenges in counting gigabit-capable customers on cable company networks as well. The cable companies are smart to market all of their products as ‘up to’ speeds because of the shared nature of their networks. The customers in a given neighborhood node share bandwidth and the speeds can drop when the network gets busy. Can you count a household as gigabit-capable if they can only get gigabit speeds at 4:00 AM but get something slower during the evening hours?

It’s going to get even harder to count gigabit capability when there are reliable cellular networks using millimeter wave spectrum. That spectrum is only going to be able to achieve gigabit speeds outdoors when in direct line-of-sight from a nearby cell site. Can you count a technology as gigabit-capable when the service only works outdoors and drops when walking into a building or walking a few hundred feet away from a cell site?

It’s also hard to know how to count apartment buildings. There are a few technologies being used today in the US that bring gigabit speeds to the front of an apartment building. However, by the time that the broadband suffers packet losses due to inside wiring and is diluted by sharing among multiple apartments, nobody gets a true gigabit product. But ISPs routinely count them as gigabit customers.

There is also the issue of how to not double-count households that can get gigabit speeds from multiple ISPs. There are urban markets with fiber providers like Google Fiber, Sonic, US Internet, EPB Chattanooga, and others where customers can buy gigabit broadband on fiber and also from the cable company. There are even a few lucky customers in places like Austin, Texas and the research triangle in North Carolina where some homes have three choices of gigabit networks after the telco (AT&T) also built fiber.

I’m not sure we need to put much energy into accurately counting gigabit-capable customers. I think everybody would agree an 850 to 950 Mbps connection on Verizon FiOS is blazingly fast. Certainly, a customer getting over 800 Mbps from a cable company has tremendous broadband capability. Technically such connections are not gigabit connections, but the difference between a gigabit connection and a near-gigabit connection for a household is so negligible as to not practically matter.

AT&T and Verizon Fiber

If you look at the annual reports or listen to the quarterly investor calls, you’d think that AT&T and Verizon’s entire future depends upon 5G. As I’ve written in several blogs, there doesn’t seem to be an immediate financial business case for 5G and the big carriers are going to have to figure out how to monetize 5G – something that’s going to take years. Meanwhile, both companies have been expanding their fiber footprints and aggressively adding fiber-based broadband customers.

According to the Leichtman Research Group, AT&T added only 34,000 net broadband customers in the first quarter of this year – not an impressive number considering that the company has 15.7 million broadband customers. But the underlying story is more compelling. On the 1Q investor call, the company said it added 297,000 fiber customers during the quarter, and the smaller net number reflects the continuing loss of DSL customers. The overall financial impact was a net gain of 8% in broadband revenues.

AT&T is starting to understand the dynamics of being a multimedia company in addition to being a wireless carrier and an ISP. According to John Stephens, the AT&T CFO, the company experiences little churn when they are able to sell fiber-based Internet, a video product and cellular service to a customer.

The company views its fiber business as a key part of its growth strategy. AT&T now passes over 20 million homes and businesses with fiber and is aggressively pushing fiber broadband. The company has also undergone an internal consolidation so that all fiber assets are available to every business unit. The company has been expanding its fiber footprint significantly for the last few years, but recently announced they are at the end of major fiber expansion. However, the company will continue to take advantage of the new fiber being built for the nationwide FirstNet network for first responders. In past years the company would have kept FirstNet fiber in its own silo and not gotten the full value out of the investment.

Verizon has a similar story. The company undertook an internal project they call One Fiber where every fiber asset of the company is made available to all Verizon business units. There were over a dozen Verizon business units with separate fiber networks in silos.

Verizon is currently taking advantage of the One Fiber plan for expanding its small cell site strategy. The company knows that small cell sites are vital for maintaining a quality cellular network and they are also still weighing how heavily to invest in 5G wireless loops that deliver wireless broadband in residential neighborhoods.

Verizon has also been quietly expanding its FiOS fiber footprint. The company has gotten regulatory approval to abandon the copper business in over 100 exchanges in the northeast where it operates FiOS. In those exchanges, the company will no longer connect customers to copper service and says they will eventually tear down the copper and become fully fiber-based. That strategy means filling in neighborhoods that were bypassed by FiOS when the network was first built 20 years ago.

Verizon is leading the pack in terms of new fiber construction. They say they are building over 1,000 route miles of fiber every month. This alone is having a big impact on the industry, as everybody else is having a harder time locating fiber construction crews.

Verizon’s wireline revenues were down 4% in the first quarter of this year compared to 2018. The company expects to start benefitting from the aggressive fiber construction program and turn that trend around over the next few years. One of the most promising opportunities for the company is to start driving revenues in markets where it owns fiber but has never fully monetized the opportunity.

The main competitors for all of the fiber construction by both companies are the big cable companies. The big telcos have been losing broadband customers for years as cable company broadband has been clobbering DSL. The two telcos are counting on their fiber products to be fierce competitors to cable company broadband, and the companies hope to start recapturing their lost market share. As an outsider I’ve wondered for years why they didn’t do this, and the easy answer was that both companies sunk most of their capital investments into wireless. Now they are seeing that 5G wireless needs fiber, and both companies have decided to capitalize on the new fiber by also selling landline broadband. It’s going to be an interesting battle to watch since both telcos still face the loss of huge numbers of DSL customers – but they are counting on fiber to position them well for the decades to come.

The Fastest and Slowest Internet in the US

The website HighSpeedInternet.com has calculated and ranked the average Internet speeds by state. The site offers a speed test and then connects visitors to the web pages for the various ISPs in each zip code in the country. I have to imagine the site makes a commission for broadband customers who subscribe through their links.

Not surprisingly, the east coast states with Verizon FiOS ranked at the top of the list for Internet speeds since many customers in those states have the choice between a fiber network and a big cable company network.

For example, Maryland was top on the list with an average speed of 65 Mbps, as measured by the site’s speed tests. This was followed by New Jersey at 59.6 Mbps, Delaware at 59.1 Mbps, Rhode Island at 56.8 Mbps and Virginia at 56 Mbps.

Even though it tops the list, Maryland is like most states in that there are still rural areas with slow or non-existent broadband. The average speed test results are the aggregation of all of the various kinds of broadband customers in the state:

  • Customers with fast Verizon FiOS products
  • Customers with fast broadband from Comcast, the largest ISP in the state
  • Customers that have elected slower, but less expensive DSL options
  • Rural customers with inferior broadband connections

Considering all of the types of customers in the state, an average speed test result of 65 Mbps is impressive. This means that a lot of households in the state have speeds of 65 Mbps or faster. That’s not a surprise considering that both Verizon FiOS and Comcast have base product speeds considerably faster than 65 Mbps. If I were a Maryland politician, I’d be more interested in the distribution curve making up this average. I’d want to know how many speed tests were done by households getting only a few Mbps. I’d want to know how many gigabit homes were in the mix – gigabit is so much faster than the other broadband products that it pulls up the average speed.
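The worry about gigabit subscribers skewing the average is easy to demonstrate. With a made-up sample of speed-test results (purely illustrative numbers, not Maryland data), the mean lands far above what the typical household actually sees, while the median does not:

```python
from statistics import mean, median

# Made-up speed-test results in Mbps: mostly mid-tier connections,
# a couple of slow DSL lines, and two gigabit subscribers.
speeds = [3, 5, 10, 25, 40, 50, 60, 75, 100, 940, 940]

print(round(mean(speeds), 1))  # ~204.4 -- pulled up by the gigabit tests
print(median(speeds))          # 50 -- what the middle household sees
```

This is why a distribution (or at least a median) is more informative than an average when summarizing a state’s broadband.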

I’d also be interested in speeds by zip code. I took a look at the FCC broadband data reported on the 477 forms just for the city of Baltimore and I see widely disparate neighborhoods in terms of broadband adoption. There are numerous neighborhoods just north of downtown Baltimore with broadband adoption rates as low as 30%, and numerous neighborhoods under 40%. Just south of downtown and in the northernmost extremes of the city, the broadband adoption rates are between 80% and 90%. I have to guess that the average broadband speeds are also quite different in these various neighborhoods.

I’ve always wondered about the accuracy of compiling the results of mass speed tests. Who takes these tests? Are people with broadband issues more likely to take the tests? I have a friend who has gigabit broadband, and he tests his speed all the time just to see that he’s still getting what he’s paying for (just FYI, he’s never measured a true gigabit, just readings in the high 900s Mbps). I take a speed test every time I read something about speeds. I took the speed test at this site from my office and got a download speed of 43 Mbps. My office happens to be in the most distant corner of the house from the incoming cable modem, and at the connection to the Charter modem we get 135 Mbps. My slower results on this test are due to WiFi, and yet this website will log me as an underperforming Charter connection.

There were five states at the bottom of the ranking. Alaska was last at 17 Mbps, followed by Mississippi at 24.8 Mbps, Idaho at 25.3 Mbps, Montana at 25.7 Mbps, and Maine at 26 Mbps. That’s five states where the average internet speed is at or near the FCC’s 25 Mbps definition of broadband.

The speeds in Alaska are understandable due to the remoteness of many of the communities. There are still numerous towns and villages that receive Internet backhaul through satellite links. I recently read that the first fiber connection between the US mainland and Alaska is just now being built. That might help speeds some, but there is a long way to go to string fiber backhaul to the remote parts of the state.

Mostly what the bottom of the scale shows is that states that are both rural and somewhat poor end up at the bottom of the list. Interestingly, the states with the lowest household densities such as Wyoming and South Dakota are not in the bottom five due to the widespread presence of rural fiber built by small telcos.

What most matters about this kind of headline is that even in the states with fast broadband there are still plenty of customers with lousy broadband. I would hope that Maryland politicians don’t look at this headline and think that their job is done – by square miles of geography the majority of the state still lacks good broadband.

Access to Low-Price Broadband

The consumer advocate BroadbandNow recently analyzed broadband prices across the US and came to several conclusions:

  • Broadband prices are higher in rural America.
  • 45% of households don’t have access to a ‘low-priced plan’ for a wired Internet connection.

They based their research on the published prices of over 2,000 ISPs. As somebody who does the same kind of research in individual markets, I can say that there is often a big difference between published rates and actual rates. Smaller ISPs tend to charge the prices they advertise, so the prices that BroadbandNow found in rural America are likely the prices most customers really pay.

However, the big ISPs in urban areas routinely negotiate rates with customers, and a significant percentage of urban broadband customers pay something less than the advertised rates. The reality is even messier than that, since a majority of customers still buy a bundle of services. It’s usually almost impossible to know the price of any one service inside a bundle, and the ISP only reveals the actual rate when a customer tries to break the bundle by dropping one of the bundled services. For example, a customer may think they are paying $50 for broadband in a bundle but find out their real rate is $70 if they try to drop cable TV. These issues make it hard to make any sense out of urban broadband rates.

I can affirm that rural broadband rates are generally higher. A lot of rural areas are served by smaller telcos and these companies realize that they need to charge higher rates in order to survive. As the federal subsidies to rural telcos have been reduced over the years these smaller companies have had to charge realistic rates that match their higher costs of doing business in rural America.

I think rural customers understand this. It’s a lot more expensive for an ISP to provide broadband in a place where there are only a few customers per road-mile of network than in urban areas where there might be hundreds of customers per mile. A lot of other commodities cost more in rural America for this same reason.

What this report is not highlighting is that the lower-price broadband in urban areas is DSL. The big telcos have purposefully priced DSL below the cost of cable modem broadband as their best strategy to keep customers. When you find an urban customer that’s paying $40 or $50 for broadband it’s almost always going to be somebody using DSL.

This raises the question of how much longer urban customers will continue to have the DSL option. We’ve already seen Verizon abandon copper-based products in hundreds of urban exchanges in the last few years. Customers in those exchanges can theoretically now buy FiOS on fiber – and pay more for the fiber broadband. This means for large swaths of the northeast urban centers that the DSL option will soon be gone forever. There are persistent industry rumors that CenturyLink would like to get out of the copper business, although I’ve heard no ideas of how they might do it. It’s also just a matter of time before AT&T starts walking away from copper. Will there even be any urban copper a decade from now? Realistically, as DSL disappears with the removal of copper the lowest prices in the market will disappear as well.

There is another trend that impacts the idea of affordable broadband. We know that the big cable companies now understand that their primary way to keep their bottom line growing is to raise broadband rates. We’ve already seen big broadband rate increases in the last year, such as the $5 rate increase from Charter for bundled broadband.

The expectation on Wall Street is that the cable companies will regularly increase broadband rates going into the future. One analyst a year ago advised Comcast that basic broadband ought to cost $90. The cable companies are raising broadband rates in other quieter ways. Several big cable companies have told their boards that they are going to cut back on offering sales incentives for new customers and they want to slow down on negotiating rates with existing customers. It would be a huge rate increase for most customers if they are forced to pay the ‘list’ prices for broadband.

We also see carriers like Comcast starting to collect some significant revenues from customers going over their monthly data caps. As household broadband volumes continue to grow, the percentage of people exceeding their monthly cap should grow rapidly. We’ve also seen ISPs jack up the cost of WiFi or other modems as a backdoor way to get more broadband revenue.

As the cable companies find ways to extract more revenue out of broadband customers and as the big telcos migrate away from DSL, my bet is that a decade from now there will be very few customers with ‘affordable’ broadband. Every trend is moving in the opposite direction.

Gaming Migrates to the Cloud

We are about to see a new surge in demand for broadband as major players in the game industry have decided to move gaming to the cloud. At the recent Game Developers Conference in San Francisco, both Google and Microsoft announced major new cloud-based gaming initiatives.

Google announced Stadia, a platform that they tout as being able to play games from anywhere with a broadband connection, on any device. During the announcement they showed transferring a live streaming game from desktop to laptop to cellphone. Microsoft announced the new xCloud platform that lets Xbox gamers play a game from any connected device. Sony has been promoting online play between gamers for many years and now also offers some cloud gaming on the PlayStation Now platform.

OnLive tried this in 2011, offering a platform that was played in the cloud using OnLive controllers, but without needing a computer. The company failed due to the quality of broadband connections in 2011, but also due to limitations at the gaming data centers. Both Google and Microsoft now operate regional data centers around the country that house state-of-the-art whitebox routers and switches that are capable of handling large volumes of simultaneous gaming sessions. As those companies have moved large commercial users to the cloud they created the capability to also handle gaming.

The gaming world was ripe for this innovation. Current gaming ties gamers to gaming consoles or expensive gaming computers. Cloud gaming brings mobility to gamers, but also eliminates the need to buy expensive gaming consoles. This move to the cloud probably signals the beginning of the end for the Xbox, PlayStation, and Nintendo consoles.

Google says it will support some games at the equivalent of an HD video stream, at 1080p and 60 frames per second. That equates to about 3 GB of downloaded data per hour. But most of the Google platform is going to operate at 4K video quality, requiring download speeds of at least 25 Mbps per gaming stream and using 7.2 GB of data per hour. Nvidia has been telling gamers that they need 50 Mbps per 4K gaming connection.
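The conversion between a sustained stream rate and data consumed is simple arithmetic, sketched below (my own back-of-the-envelope math, not figures from Google or Nvidia). Note that the cited 7.2 GB per hour corresponds to a sustained rate of about 16 Mbps; a stream that actually ran at 25 Mbps continuously would consume more:

```python
def gb_per_hour(mbps: float) -> float:
    """Convert a sustained stream rate in megabits/second to gigabytes/hour."""
    # 3600 seconds per hour, 8 bits per byte, 1000 MB per GB
    return mbps * 3600 / 8 / 1000

print(gb_per_hour(16))  # a ~16 Mbps stream yields the 7.2 GB/hour cited above
print(gb_per_hour(25))  # a full 25 Mbps stream would use 11.25 GB/hour
```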

This shift has huge implications for broadband networks. First, streaming causes the most stress on local broadband networks since the usage is continuous over long periods of time. A lot of ISP networks are going to start showing data bottlenecks when significant numbers of additional users stream 4K connections for hours on end. Until ISPs react to this shift, we might return to those times when broadband networks bogged down in prime time.

This is also going to increase the need for faster download and upload speeds. Households won’t be happy with a connection that can’t stream 4K, so they aren’t going to be satisfied with the 25 Mbps connection that the FCC says is broadband. I have a friend with two teenage sons who both run two simultaneous game streams while watching a streaming gaming TV site. It’s good that he is able to buy a gigabit connection on Verizon FiOS, because his sons alone are using a continuous broadband connection of at least 110 Mbps, and probably more.

We are also going to see more people looking at the latency on networks. The conventional wisdom is that the gamer with the fastest, lowest-latency connection has an edge. Gamers value fiber over cable modems, and cable modems over DSL.

This also is going to bring new discussion to the topic of data caps. Gaming industry statistics say that the serious gamer averages 16 hours per week of gaming. Obviously, many play longer than the average. My friend with the two teenagers is probably looking at 30 GB per hour of broadband download usage plus a decent chunk of upload usage. Luckily for my friend, Verizon FiOS has no data cap. Many other big ISPs like Comcast start charging for data usage over one terabyte per month – a number that won’t be hard to reach for a household with gamers.
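It’s easy to see how quickly gaming can exhaust a one-terabyte cap. A rough sketch, using the per-stream figure cited earlier and a hypothetical household like my friend’s:

```python
CAP_GB = 1000        # a 1-terabyte monthly cap, as Comcast imposes
GB_PER_HOUR = 7.2    # per 4K game stream, per the figures cited above
STREAMS = 4          # hypothetical: two gamers each running two streams

# Hours of play before the whole cap is consumed by gaming alone
hours_to_cap = CAP_GB / (GB_PER_HOUR * STREAMS)
print(round(hours_to_cap, 1))  # about 34.7 hours
```

That’s roughly two weeks of play for a household gaming at the industry-average 16 hours per week, before counting any other broadband usage.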

I think this also opens up the possibility for ISPs to sell gamer-only connections. These connections could be routed straight to peering arrangements with Google or Microsoft to guarantee the fastest path through the network and wouldn’t mix gaming streams with other household broadband streams. Many gamers will pay extra to have a speed edge.

This is just another example of how the world finds ways to use broadband when it’s available. We’ve obviously reached a time when online gaming can be supported. When OnLive tried this, there were not enough households with fast enough connections, there weren’t fast enough regional data centers, and there wasn’t a peering network in place where ISPs connect directly to big data companies like Google and bypass the open Internet.

The gaming industry is going to keep demanding faster broadband, and I doubt they’ll be satisfied until we have a holodeck in every gamer’s home. But numerous other industries are finding ways to use our increasing household broadband capacity, and the overall demand keeps growing at a torrid pace.


Verizon’s Case for 5G, Part 4

Ronan Dunne, an EVP and President of Verizon Wireless, recently made Verizon’s case for aggressively pursuing 5G. This last blog in the series looks at Verizon’s claim that they are going to use 5G to offer residential broadband. The company has tested the technology over the last year and announced plans to soon introduce it in a number of cities.

I’ve been reading everything I can about Verizon, and I think I finally figured out what they are up to. They have been saying that within a few years they will make fixed 5G broadband available to millions of homes. One of the first cities they will build in is Sacramento. It’s clear that in order to offer fast speeds, each 5G transmitter will have to be fiber-fed. To cover all neighborhoods in Sacramento would require building a lot of new fiber. Building new fiber is both expensive and time-consuming. And it’s still a head-scratcher how this might work in neighborhoods without poles, where other utilities are underground.

Last week I read of an announcement by Lee Hicks of Verizon of a new initiative called One Fiber. Like many large telecoms, Verizon has numerous divisions that own fiber assets – the FiOS group, the wireless group, and the old MCI CLEC business group. The new policy will consolidate all of this fiber under a centralized system, making existing and new fiber available to every part of the business. It might be hard for people to believe, but within Verizon each of these groups has managed its own fiber separately. Anybody who has ever worked with the big telcos understands what a colossal undertaking it will be to consolidate this.

Sharing existing fiber and new fiber builds among its various business units is the change that will unleash the potential for 5G deployment. My guess is that Verizon has eyed AT&T’s fiber strategy and is copying the best parts of it. AT&T has quietly been extending its fiber-to-the-premise (FTTP) network by building fiber for short distances around the numerous existing fiber nodes in the AT&T network. A node on an AT&T fiber built to reach a cell tower or a school is now also a candidate to function as a network node for FTTP. Using existing fiber wisely has allowed AT&T to claim they will soon be reaching over 12 million premises with fiber – without having to build a huge amount of new fiber.

Verizon’s One Fiber policy will enable them to emulate AT&T. Where AT&T has elected to build GPON fiber-to-the-premise, Verizon is going to try 5G wireless. They’ll deploy 5G cell sites at their existing fiber nodes where it makes financial sense. Verizon doesn’t have as extensive a fiber network as AT&T, and I’ve seen a few speculations that they might pass as many as 7 million premises with 5G within five years.

Verizon has been claiming that 5G can deliver gigabit speeds out to 3,000 feet. It might be able to do that in ideal conditions, but their technology is proprietary and nobody knows its real capabilities. One thing we know about all wireless technologies is that they are temperamental and vary a lot by local conditions. The whole industry is waiting to see the speeds and distances Verizon will really achieve with the first-generation gear.

The company certainly has some work in front of it to pursue this philosophy. Not all fiber is the same, and their existing network probably contains fibers of many sizes, ages, and conditions using a wide range of electronics. After inventorying and consolidating control over the fiber, they will have to upgrade electronics and backbone networks to enable the kind of bandwidth needed for 5G.

The Verizon 5G network is likely to consist of a series of cell sites serving small neighborhood circles – the size of each circle depending upon topography. This means the Verizon networks will likely not be ubiquitous in big cities – they will reach whatever is in range of 5G cell sites placed on existing Verizon fiber. After the initial deployment, which is likely to take a number of years, the company will have to assess whether building additional fiber makes economic sense. That determination will consider all of the Verizon departments and not just 5G.

I expect the company to follow the same philosophy they did when they built FiOS. They were disciplined and only built in places that met certain cost criteria. This resulted in a network that, even today, brings fiber to one block but not the one next door. FiOS fiber was largely built where Verizon could overlash fiber onto their telephone wires or drag fiber through existing conduits – I expect their 5G expansion to be just as disciplined.

The whole industry is dying to see what Verizon can really deliver with 5G in the wild. Even if it’s 100 Mbps broadband, they will be a competitive alternative to the cable companies. If they can really deliver gigabit speeds to entire neighborhoods, they will have shaken the industry. But in the end, if they stick to the One Fiber model and only deploy 5G where it’s affordable, they will be bringing a broadband alternative to those who happen to live near their fiber nodes – and that will mean passing millions of homes, but not tens of millions.