Cellular Broadband Speeds – 2019

Opensignal recently released their latest report on worldwide cellular data speeds. The company examined over 139 billion cellphone connections in 87 countries in creating the report.

South Korea continues to have the fastest cellular coverage in the world with an average download speed of 52.4 Mbps. Norway is second at 48.2 Mbps and Canada third at 42.5 Mbps. The US was far down the list in 30th place with an average download speed of 21.3 Mbps. Our other neighbor, Mexico, had an average download speed of 14.9 Mbps. At the bottom of the list are Iraq (1.6 Mbps), Algeria (2.1 Mbps) and Nepal (4.4 Mbps). Note that these average speeds represent all types of cellular data connections, including 2G and 3G.

Cellular broadband speeds have been improving rapidly in most countries. For instance, in the 2017 report, Opensignal showed South Korea at 37.5 Mbps and Norway at 34.8 Mbps. The US in 2017 was in 36th place at only 12.5 Mbps.

Earlier this year Opensignal released their detailed report about the state of mobile broadband in the United States. This report looks at speeds by carrier and also by major metropolitan area. The US cellular carriers have made big strides just since 2017. The following table compares download speeds and latency for 4G LTE by US carrier for 2017 and 2019.

Carrier     2019 Download   2019 Latency   2017 Download   2017 Latency
AT&T        17.8 Mbps       57.8 ms        12.9 Mbps       63.8 ms
Sprint      13.9 Mbps       70.0 ms        9.8 Mbps        70.1 ms
T-Mobile    21.1 Mbps       60.6 ms        17.5 Mbps       62.8 ms
Verizon     20.9 Mbps       62.6 ms        14.9 Mbps       67.3 ms

Speeds are up across the board. Sprint increased speeds over the two years by more than 40%. Latency for 4G is still relatively high. For comparison, fiber-to-the-home networks have latency in the range of 10 ms and coaxial cable networks have latency between 25 and 40 ms. The poor latency in cellular networks is one of the reasons why browsing the web on a cellphone seems so slow. (The other reason is that cellphone browsers focus on graphics rather than speed.)
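The percentage gains are easy to verify; this quick sketch is pure arithmetic using the figures from the table above:

```python
# Percent increase in 4G LTE download speed by carrier, 2017 -> 2019
# (figures taken from the Opensignal table above)
speeds = {
    "AT&T":     (12.9, 17.8),
    "Sprint":   (9.8, 13.9),
    "T-Mobile": (17.5, 21.1),
    "Verizon":  (14.9, 20.9),
}

for carrier, (y2017, y2019) in speeds.items():
    pct = (y2019 - y2017) / y2017 * 100
    print(f"{carrier}: {y2017} -> {y2019} Mbps ({pct:+.0f}%)")

# AT&T: +38%, Sprint: +42%, T-Mobile: +21%, Verizon: +40%
```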

Cellular upload speeds are still slow. In the 2019 tests, the average upload speeds were AT&T (4.6 Mbps), Sprint (2.4 Mbps), T-Mobile (6.7 Mbps) and Verizon (7.0 Mbps).

Speeds vary widely by carrier and city. The fastest cellular broadband market identified in the 2019 tests was T-Mobile in Grand Rapids, Michigan with an average 4G speed of 38.3 Mbps. The fastest upload speed was provided by Verizon in New York City at 12.5 Mbps. Speeds vary by market for several reasons. First, the carriers don’t deploy the same spectrum everywhere in the US, so some markets have less spectrum than others. Second, markets vary due to the state of upgrades – at any given time cell sites are at different levels of software and hardware upgrades. Finally, markets vary by cell tower density, and markets that serve more customers per tower are likely to be slower.

Many people routinely take speed tests for their home landline broadband connection. If you’ve not taken a cellular speed test, it’s an interesting experience. I’ve always found that speeds vary significantly with each speed test, even when run back-to-back. As I was writing this blog I took several speed tests that varied in download speeds between 12 Mbps and 23 Mbps (I use AT&T). My upload speeds also varied, with a top speed of 3 Mbps and one test that couldn’t maintain the upload connection and measured 0.1 Mbps. While landline broadband connections maintain a steady connection to an ISP, a cellphone establishes a new connection every time you try to download, and speeds can vary depending upon the cell site, the channel your phone connects to, and the overall traffic at the cell site at the time of connection. Cellular speeds can also be affected by temperature, precipitation and all of the other factors that make wireless coverage a bit squirrelly.
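If you want to quantify that run-to-run variability yourself, below is a minimal sketch. It assumes the third-party speedtest-cli package (pip install speedtest-cli), which is one common way to script speed tests; run it on a connection tethered through your phone and compare the spread:

```python
# Minimal sketch: run several back-to-back speed tests and show the spread.
# Assumes the third-party speedtest-cli package (pip install speedtest-cli).
import speedtest

results = []
for run in range(5):
    st = speedtest.Speedtest()
    st.get_best_server()                 # pick the nearest test server
    down = st.download() / 1_000_000     # bits/sec -> Mbps
    up = st.upload() / 1_000_000
    results.append((down, up))
    print(f"Run {run + 1}: {down:.1f} Mbps down / {up:.1f} Mbps up")

downs = [d for d, _ in results]
print(f"Download range across runs: {min(downs):.1f} - {max(downs):.1f} Mbps")
```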

It’s going to be a few years until we see any impact on the speed test results from 5G. As you can see by comparing to other countries, the US still has a long way to go to bring 4G networks up to snuff. One of the most interesting aspects of 5G is that speed tests might lose some of their importance. With frequency slicing, a cell site will size a data channel to meet a specific customer need. Somebody downloading a large software update should be assigned a bigger data channel with 5G than somebody who’s just keeping up with sports scores. It will be interesting to see how Opensignal accounts for frequency slicing.

Should Satellite Broadband be Subsidized?

I don’t get surprised very often in this industry, but I must admit that I was surprised by the amount of money awarded for satellite broadband in the reverse auction for CAF II earlier this year. Viasat, Inc., which markets as Exede, was the fourth largest winner, collecting $122.5 million in the auction.

I understand how Viasat won – it’s largely a function of the way that reverse auctions work. In a reverse auction, each bidder lowers the amount of their bid in successive rounds until only one bidder is left in any competitive situation. The whole pool of bids is then adjusted to meet the available funds, which could mean an additional reduction of what winning bidders finally receive.

Satellite providers, by definition, have a huge unfair advantage over every other broadband technology. Viasat was already in the process of launching new satellites – and they would have launched them with or without the FCC grant money. Because of that, there is no grant level too low for them to accept out of the grant process – they would gladly accept getting only 1% of what they initially requested. A satellite company can simply outlast any other bidder in the auction.

This is particularly galling since Viasat delivers what the market has already deemed to be inferior broadband. Viasat’s download speeds of at least 12 Mbps are fast enough to satisfy the reverse auction requirements. The other current satellite provider, HughesNet, offers speeds of at least 25 Mbps. The two issues that customers have with satellite broadband are latency and data caps.

By definition, the latency for a satellite in a 23,000-mile orbit is at least 476 ms (milliseconds) just to account for the distance traveled to and from the earth. Actual latency is often above 600 ms. The rule of thumb is that real-time applications like VoIP, gaming, or holding a connection to a corporate LAN start having problems when latency is greater than 100-150 ms.
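That minimum is simple physics: a request has to travel up to the satellite and down to a ground station, and the reply makes the same trip back, four legs in all. A quick back-of-the-envelope check, using the actual geostationary altitude of roughly 22,236 miles (which is where a figure like 476 ms comes from):

```python
# Back-of-the-envelope geostationary latency: a request travels user ->
# satellite -> ground station, and the reply makes the same trip back,
# so the signal covers four satellite-to-ground legs in total.
SPEED_OF_LIGHT = 186_282   # miles per second, in a vacuum
GEO_ALTITUDE = 22_236      # geostationary orbit altitude, miles

one_leg_seconds = GEO_ALTITUDE / SPEED_OF_LIGHT
round_trip_ms = 4 * one_leg_seconds * 1000
print(f"Minimum round-trip latency: {round_trip_ms:.0f} ms")  # ~477 ms
```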

Exede no longer cuts customers dead for the month once they reach the data cap; instead they reduce speeds when the network is busy for any customer over the cap. Customer reviews say this can be extremely slow during prime times. The monthly data caps are small, with plans ranging from $49.99 per month for a 10 GB cap to $99.95 per month for a 150 GB cap. To put those caps into perspective, OpenVault recently reported that the average landline broadband household used 273.5 GB of data per month in the first quarter of 2019.

Viasat has to be thrilled with the result of the reverse auction. They got $122.5 million for something they were already doing. The grant money isn’t bringing any new option to customers who were already free to buy these products before the auction. There is no better way to say it other than Viasat got free money due to a loophole in the grant process. I don’t think they should have been allowed into the auction since they aren’t bringing any broadband that is not already available.

The bigger future issue is whether the new low-earth orbit satellite companies will qualify for future FCC grants, such as the $20.4 billion grant program starting in 2021. The new grant programs are also likely to be reverse auctions. There is no doubt that Jeff Bezos or Elon Musk will gladly take government grant money, and there is no doubt that they can underbid any landline ISP in a reverse auction.

For now, we don’t know anything about the speeds that will be offered by the new satellites. The companies claim that latency will be about the same as cable TV networks, at about 25 ms. We don’t know about data plans and data caps, although Elon Musk has hinted at having unlimited data plans – we’ll have to wait to see what is actually offered.

It would be a tragedy for rural broadband if the new (and old) satellite companies were to win any substantial amount of the new grant money. To be fair, the new low-orbit satellite networks are expensive to launch, with price tags for each of the three providers estimated to be in the range of $10 billion. But these companies are using these satellites worldwide and will be launching them with or without help from an FCC subsidy. Rural customers are going to best be served in the long run by having somebody build a network in their neighborhood. It’s the icing on the cake if they are also able to buy satellite broadband.

Are You Ready for 10 Gbps?

Around the world, we’re seeing some migration to 10 Gbps residential broadband. During the last year the broadband providers in South Korea, Japan, and China began upgrading to next-generation PON and are offering blazingly fast broadband products to consumers. South Korea is leading the pack and expects to have 10 Gbps speeds available to about 50% of subscribers by the end of 2022.

In the US there are a handful of ISPs offering a 10 Gbps product, mostly for the publicity – but they stand ready to install the faster product. Notable are Fibrant in Salisbury, NC and EPB in Chattanooga; EPB was also among the first to offer a 1 Gbps residential product a few years ago.

I have a lot of clients who already offer 10 Gbps connections to large business and carrier customers like data centers and hospital complexes. However, except for the few pioneers, these larger bandwidth products are being delivered directly to a single customer using active Ethernet technology.

There are a few hurdles for offering speeds over a gigabit in the US. Perhaps foremost is that there are no off-the-shelf customer electronics that can handle speeds over a gigabit – the typical WiFi routers and computers work at slower speeds. The biggest hurdle for an ISP continues to be the cost of the electronics. Today the cost of next-generation PON equipment is high and will remain so until the volume of sales brings the per-unit prices down. The industry market research firm Ovum predicts that we’ll see widespread 10 Gbps consumer products starting in 2020 but not gaining traction until 2024.

In China, Huawei leads the pack. The company has a 10 Gbps PON system that is integrated with a 6 Gbps WiFi 6 router for the home. The system is an easy overlay on top of the company’s traditional GPON network gear. In South Korea the largest ISP, SK Broadband, has worked with Nokia to develop a proprietary PON technology only used today inside of South Korea. Like Huawei’s, this overlays onto the existing GPON network. In Japan the 10 Gbps PON network is powered by Sumitomo, a technology only being sold in Japan. None of these technologies has made a dent in the US market, with Huawei currently banned due to security concerns.

In the US there are two technologies being trialed. AT&T is experimenting with XGS-PON technology. They plan to offer 2 Gbps broadband, upgradable to 10 Gbps, in the new high-tech community of Walsh Ranch being built outside of Ft. Worth. AT&T is currently trialing the technology at several locations within its FTTP network that now covers over 12 million passings. Verizon is trialing NG-PON2 technology but is mostly planning to use it to power cell sites. It’s going to be hard for any ISP to justify deployment of the new technologies until somebody buys enough units to pull down the cost.

Interestingly, CableLabs is also working on a DOCSIS upgrade that will allow for faster speeds up to 10 Gbps. The problem most cable networks will have is in finding space on their network for the channels needed to support the faster speeds.

There are already vendors and labs exploring 25 Gbps and 50 Gbps PON. These products will likely be used for backhaul and transport at first. The Chinese vendors think the leap forward should be to 50 Gbps, while other vendors are all considering a 25 Gbps upgrade path.

The real question that needs to be answered is whether there is any market for 10 Gbps bandwidth outside the normally expected uses like cellular towers, data centers, and large business customers. This same question was asked when EPB in Chattanooga and LUS in Lafayette, Louisiana rolled out the earliest 1 Gbps residential bandwidth. Both companies were a bit surprised when they got a few instant takers for the faster products – in both markets from doctors who wanted to be able to analyze MRIs and other big files at home. There are likely a few customers who need speeds above 1 Gbps, with doctors again being good candidates. Just as broadband speeds have advanced, the medical imaging world has grown more sophisticated in the last decade and is creating huge data files. The ability to download these quickly away from the office will be tempting to doctors.

I think we are finally on the verge of seeing data use cases that can eat up most of a gigabit of bandwidth in the residential environment. For example, uncompressed virtual and augmented reality can require masses of downloaded data in nearly real-time. As we start seeing use cases for gigabit speeds, the history of broadband has shown that the need for faster speeds is probably not far behind.

The End of the Central Office?

One of the traditional costs for bringing fiber to a new market has always included the creation of some kind of central office space. This might mean modifying space in an existing building or constructing a new building or large hut. In years past a central office required a lot of physical space, but technology has finally reached the point where the need for a big central office is often disappearing.

A traditional central office started with the need to house the fiber-terminating electronics that connect the new market to the outside world. There is also the need to house and light the electronics facing the customers – although in some network design configurations some of the customer-facing electronics can be housed in remote huts in neighborhoods.

A traditional central office needs room for a lot of other equipment. First is significant space for batteries to provide short-term backup in case of power outages. For safety reasons the batteries are often placed in a separate room. Central offices also need space for the power plant used to convert AC power to DC power. They usually need significant air conditioning and room to house the cooling units. If the fiber network terminating at a central office is large enough, there is also a requirement for some kind of fiber management system to separate the individual fibers in a neat and sensible way. Finally, if the above needs meant building a large enough space, many ISPs also built working and office space for technicians.

Lately I’ve seen several fiber deployments that don’t require the large traditional central office space. This is largely due to the evolution of the electronics used for serving customers in a FTTP network. For example, OLT (optical line termination) electronics have been significantly compressed in size and density, and a shelf of equipment can now perform the same functions that would have required much of a full rack a decade ago. As the equipment has shrunk, the power requirements have also dropped, reducing the size of the power plant and the batteries.

I’ve seen several markets where a large cabinet provides enough room to replace what would have required a full central office a decade ago. These are not small towns, and two of the deployments are for towns with populations over 20,000.

As the footprint for the ‘central office’ has decreased there’s been a corresponding drop in costs. There are several supply houses that will now pre-install everything needed into the smaller cabinet/hut and deliver the whole unit complete and ready to go after connecting to power and splicing to fiber.

What I find interesting is that I still see some new markets built in the more traditional way. In that same market of 20,000 people it’s possible to still use a configuration that constructs several huts around the city to house the OLT electronics. For purposes of this blog I’ll refer to that as a distributed configuration.

There are pros and cons to both configurations. The biggest benefit of having one core hut or cabinet is lower cost. That means one pre-fab building instead of having to build huts or cabinets at several sites.

The distributed design also has advantages. A redundant fiber ring can be established with a network consisting of at least three huts, meaning that fewer parts of the market will lose service due to a fiber cut near the core hut. But the distributed network also means more electronics in the network, since there is now the need for electronics to light the fiber ring.

The other advantage of a distributed network is that there are fewer fibers terminating to each hut compared to having all customer fibers terminating to a single hut. The distributed network likely also uses smaller fiber cables in the distribution network, since cables can be sized for a neighborhood rather than for the whole market. That might mean less splicing required during the initial construction.

Anybody building a new fiber network needs to consider these two options. If the market is large enough then the distributed network becomes mandatory. However, many engineers seem to be stuck with the idea that they need multiple huts and a fiber ring even for smaller towns. That means paying a premium price to achieve more safety against customer outages. However, since raising the money to build a fiber network is often the number one business consideration, the ability to save electronics costs can be compelling. It would not be unusual to see the single-hut configuration save half a million dollars or more. There is no configuration that is the right choice for all situations. Just be sure if you’re building FTTP in a new market that you consider the options.

Open Access for Apartment Buildings

San Francisco recently passed an interesting ordinance that requires landlords of apartments and multi-tenant business buildings to allow multiple ISPs access to bring broadband. This ordinance raises all sorts of regulatory and legal questions. At the most recent FCC monthly meeting the FCC jumped into the fray and voted on language that is intended to kill or weaken the ordinance.

The FCC’s ruling says that a new ISP can’t share wiring that is already being used by an existing broadband provider. I call this an odd ruling because there are very few technologies that share wires between competitors – with most fast broadband technologies a new ISP must rewire the building or beam broadband wirelessly. This means the FCC’s prohibition might not make much of a difference in terms of overturning the San Francisco ordinance. The only competitive broadband technology that routinely uses an existing wire is G.Fast, and even that can only be used by one broadband provider at a time and not shared. I can’t think of any examples of a practical impact of the FCC ruling.

The FCC’s ruling is odd for a number of other reasons. It’s certainly out of the ordinary for a federal agency to react directly to a local ordinance. My guess is that the FCC knows that many other cities are likely to jump onto the open access bandwagon. Cities are getting a lot of complaints from apartment tenants who don’t have access to the same broadband options as single family homes.

The FCC ruling is also unusual because it violates the FCC’s overall directive from Congress to be pro-competition. The FCC order clearly falls on the side of being anti-competitive.

What I find most striking about this decision is that this FCC gave up its authority to regulate broadband when it killed Title II regulation last year. I guess what they meant was that they are giving up regulating broadband except when it suits them to regulate anyway. It’s an interesting question whether the agency still has the authority to make this kind of ruling. It’s likely this lack of regulatory authority that forced the FCC to make such a narrow ruling instead of just overturning the San Francisco ordinance. I always knew it wouldn’t be long before the FCC selectively wanted back some of their former Title II authority.

The MDU market has an interesting history. Historically the large apartment buildings were served by the incumbent providers. The incumbents often stealthily gave themselves exclusive rights to serve apartments through deceptive contractual practices, and the FCC prohibited some of the most egregious abuses.

For many years competitors largely weren’t interested in apartments because the cost of rewiring most buildings was prohibitive. In the last few years the MDU market has changed significantly. There are now wiring and wireless technologies that make it more affordable to serve many large apartment buildings, and numerous competitors operating in the space. Many of them bring a suite of services far beyond the triple play – security, smart camera solutions to make tenants feel safe, smart sensors of various kinds, and WiFi in places like hallways, stairwells, parking garages and outside. These new competitors often require an exclusive contract with a landlord as a way to help cover the cost of bringing the many ancillary services.

There is another regulatory issue to consider. There have been several laws from Congress, tested in the courts, that give building owners the right to keep ISPs off their premises – this applies to single family homes as well as the largest apartment buildings. It won’t be surprising to see building owners suing the City for violating their property rights.

Yet another issue that muddies the water is that landlords often elect to act as the ISP and to build broadband and other services into the rent. Does the San Francisco ordinance prohibit this practice, since it’s hard for any ISP to compete with ‘free’ service?

Another area affected by the ordinance might best be described as aesthetics. Landlords often have stringent rules like requiring that ISPs hide wiring, electronics boxes, and outdoor enclosures or huts. It’s a bit ironic that the City of San Francisco would force building owners to allow in multiple ISPs and the myriad wires and boxes that come with open access. San Francisco recently got a positive court ruling saying that aesthetics can be considered for small cell deployments and it seems odd in MDUs that the City is favoring competition over aesthetics.

At the end of the day I think the City might be sorry that they insinuated themselves into an extremely complicated environment. There are likely dozens of different relationships today between landlords and ISPs and it seems like a slippery slope to try to force all apartment owners to offer open access.

I know cities have been struggling with the open access issue. They receive complaints from apartment tenants who want different broadband options. It’s not hard to understand why a city with a lot of apartment dwellers might feel compelled to tackle this issue. I know other cities that have considered ordinances like the San Francisco one and abandoned the issue once they understood the complexity.

The City made an interesting choice with the ordinance. The City elected to require open access to help foster consumer choice. However, it’s possible that the long-term results might not be what the City expected and the ruling could drive away the creative ISPs who elect not to compete in an open access environment.

It seemed almost inevitable that the City ordinance would be challenged by somebody – but the courts are a more logical place to fight this battle than the FCC. If anything, the FCC has just clouded the issue by layering on a toothless prohibition against the sharing of wires.

FCC Looks to Kill Copper Unbundling

FCC Chairman Ajit Pai circulated a draft order that would start the process of killing the unbundling of copper facilities. Unbundling was originally ordered by the Telecommunications Act of 1996 and unleashed telephone and broadband competition in the US. The law was implemented before the introduction of DSL, and newly formed competitors (CLECs) were able to use telco copper to compete for voice and data service using T1s. The 1996 Act also required the big telcos to offer their most basic products for resale.

The FCC noted that the proposed order will “not grant forbearance from regulatory obligations governing broadband networks”, meaning they are not going to fully eliminate the requirement for copper unbundling. This is because the FCC doesn’t have the authority to fully eliminate unbundling since the obligation was required by Congress – the FCC is mandated to obey that law until it’s either changed by Congress or until there is no more copper left to unbundle. Much of the industry has been calling for an updated telecommunications act for years, but in the current dysfunctional politics of Washington DC that doesn’t look likely.

The big telcos have hated the unbundling requirement since the day it was passed. Eliminating this requirement has been near the top of their regulatory wish list since 1996. The big telcos’ hatred of unbundling is somewhat irrational since in today’s environment unbundling likely makes them money. There are still CLECs selling DSL over unbundled copper, generating revenues for the telcos that they’d likely not have otherwise. But the hatred of the original ruling has become ingrained in the big telco culture.

The FCC’s proposal is to have a three-year transition from the currently mandated rates, which are set at incremental cost, to some market-based lease rate. I guess we’ll have to see during that transition if the telcos plan to price CLECs out of the market or if they will offer reasonable lease rates that let CLECs continue to offer connections.

This change has the possibility of causing harm to CLECs and consumers. There are still a number of CLECs selling DSL over unbundled copper elements. In many cases these CLECs operate the newest DSL electronics and can offer faster data speeds than the telco DSL. It’s not unusual for CLECs to have 50 Mbps residential DSL. For businesses they can now combine multiple pairs of copper and I’ve seen unbundled DSL products for businesses as fast as 500 Mbps.

There are still a lot of customers who choose to stay with DSL. Some of these customers don’t feel the need for faster data speeds. In other cases it’s because DSL is generally priced cheaper than cable modem products. At CCG we do surveys, and it’s not unusual to find anywhere from 25% to 45% of customers still buying DSL in a market that has a cable competitor. While there are millions of customers annually making the transition to cable modem service, there are big numbers of households still using DSL – it’s many years away from dying.

There is another quieter use of unbundled copper that still has competitors worried. Any competitor that offers voice service using their own switch is still required by law to interconnect to the local incumbent telcos. Most of that interconnection is done today using fiber transport, but there still is a significant impact from unbundled elements.

Surprisingly, the vast majority of the public switched telephone network (PSTN) still uses technology based upon T1s. There was a huge noise made 5 – 10 years ago about a ‘digital transition’ where the interconnection network was going to migrate to 100% IP. But for the most part this transition never occurred. Competitors can still bring fiber to meet an incumbent telco network, but that fiber signal must still be muxed down to T1 and DS3 channels. The pricing for those interconnections is part of the same rules the FCC wants to kill. CLECs everywhere are going to be worried about seeing huge price increases for interconnection.

The big telcos have always wanted interconnection to be done at tariffed special access rates. These are the rates that often had a T1 (1.5 Mbps connection) priced at $700 per month. The unbundled cost for an interconnection T1 is $100 or less in most places and competitors are going to worry about seeing a big price increase to tie their network to telco tandems.

It’s not surprising to see this FCC doing this. They have been checking off the regulatory wish list of the telcos and the cable companies since Chairman Pai took over leadership. This is one of those regulatory issues that the big telcos hate as a policy issue, but which has quietly been operationally working well now for decades. There’s no pressing reason for the FCC to make this change. Copper is naturally dying over time and the issue eventually dies with the copper. There are direct measurable benefits to consumers from unbundling, so the real losers are going to be customers who lose DSL connections they are happy with.

Consider Rural Health Care Funding

One of the uses of the Universal Service Fund that often is forgotten is the Rural Health Care Program. The FCC recently carried forward $83.2 million that was unspent in 2018 into the 2019 funding pool. In June Chairman Ajit Pai proposed to raise the annual cap on this fund from $400 million to $571 million. That’s where the cap would be today had it been indexed to inflation since the program started in 1997. He also proposes that the cap grow with inflation in the future.

I have a lot of clients who help their customers benefit from the Schools and Libraries Fund, but many of them never think about doing the same thing with the Rural Health Care Fund.

The Rural Health Care Program provides funding to eligible health care providers for broadband and voice services. Eligible health care providers must be either a public or a non-profit entity. The funds can be used for entities such as 1) educational institutions offering post-secondary medical instruction, teaching hospitals and medical schools; 2) community health centers providing care to migrants; 3) local health departments; 4) community mental health centers; 5) non-profit hospitals; 6) rural health clinics; 7) skilled nursing facilities; and 8) consortiums of providers that include one or more of the preceding list.

The program consists of two parts: the Healthcare Connect Fund Program and the Telecommunications Program. The Healthcare Connect Program provides support for high-speed broadband connections. Eligible entities can receive as much as a 65% discount on monthly broadband bills for services like Internet access, dark fiber, or traditional telco data services. This works a lot like the E-Rate program for schools and libraries. The health care facility pays the reduced rate for service and the partner ISP collects the discount from the Universal Service Fund.
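As a concrete illustration of how the money flows under Healthcare Connect (the bill amount here is hypothetical):

```python
# Hypothetical Healthcare Connect example: a rural clinic buys a broadband
# connection with a list price of $1,000/month and qualifies for the
# program's maximum 65% discount.
list_price = 1000.00      # hypothetical monthly broadband bill
discount_rate = 0.65      # Healthcare Connect discount

facility_pays = list_price * (1 - discount_rate)   # clinic's out-of-pocket
usf_reimburses = list_price * discount_rate        # ISP collects from USF

print(f"Clinic pays:           ${facility_pays:,.2f}/month")   # $350.00
print(f"ISP collects from USF: ${usf_reimburses:,.2f}/month")  # $650.00
```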

The health care providers can also ask for assistance with telecommunications equipment and can use the funds to help pay for the construction of fiber facilities. This funding can be an interesting way for a rural ISP to get some assistance for paying for a fiber route to reach a health care facility (and then use that fiber to also serve other customers).

The Telecommunications Program works a little differently. In that program the health care facility can buy broadband and telecommunication services at rates that are reasonably comparable to rates charged for similar services in nearby urban areas. That’s likely to mean discounts smaller than the 65% in the Healthcare Connect program. Functionally this still works the same and the ISP can collect the difference between the urban rates and the rural rates.

Just like with E-Rate, the health care provider must apply for this funding. But also like E-Rate, it’s typical for an ISP to help prepare the paperwork. The paperwork will feel familiar to any ISP already participating in an E-Rate situation.

Since $83.2 million is being carried over from 2018, it’s obvious that rural health care providers are not all taking full advantage of this program. I see articles all the time decrying a crisis in rural health care due to the high costs of providing services in rural America. This program can bring subsidized broadband connections to health care facilities at a time when that is likely a welcome relief.

This funding has been available for a long time, yet I rarely hear clients talking about it. I’m guessing most rural ISPs have never participated although there are likely eligible health care facilities nearby. This likely will require some training for potential customers. School and library associations have done a good job at alerting their members that this subsidy exists – but I’m guessing the same has not been done with rural health care providers. An ISP willing to tackle the filings can gain a great customer while also benefitting their community.

Millimeter Wave 5G is Fiber-to-the-Curb

I’ve been thinking about and writing about 5G broadband using millimeter wave spectrum for over a year. This is the broadband product that Verizon launched in Sacramento and a few other markets as a trial last year. I don’t know why it never struck me that this technology is the newest permutation of fiber-to-the-curb.

That’s an important distinction to make because naming it this way makes it clear to anybody hearing about the technology that the network is mostly fiber with wireless only for the last few hundred feet.

I remember seeing a trial of fiber-to-the-curb back in the very early 2000s. A guy from the horse country in Virginia had developed the technology of delivering broadband from the pole into the home using radios. He had a working demo of the technology at his rural home. Even then he was beaming fast speeds – his demo delivered an uncompressed video signal from curb to home. He knew that the radios could be made capable of a lot more speed, but in those days I’m sure he didn’t think about gigabit speeds.

The issues that stopped his idea from being practical have been a barrier until recently. First there was the issue of getting the needed spectrum. He wanted to use what we now call midrange spectrum, but which was considered a high spectrum band in 2000 – he would have had to convince the FCC to carve out a slice of spectrum for his application, something that’s always been difficult. He also didn’t have any practical way of getting the needed bandwidth to the pole. ISPs were still selling T1s, 1 Mbps DSL, and 1 Mbps cable modem service, and while fiber existed, the electronics cost for terminating fiber to devices on multiple poles was astronomical. Finally, even then, this guy had a hard time explaining how it would be cheaper to use wireless to get to the home rather than building a drop wire.

Verizon press releases would make you think that they will be conquering the world with millimeter wave radios and deploying the technology everywhere. However, once you think of this as fiber-to-the-curb, that business plan quickly makes no sense. The cost of a fiber-to-the-curb network is mostly in the fiber. Any savings from using millimeter wave radios only apply to the last few hundred feet. For this technology to be compelling, the savings over those last few hundred feet have to be significant. Do the radio electronics really cost less than fiber drops and fiber electronics?

Any such comparison must consider all the costs of each technology – meaning the cost of installations, repairs, maintenance, and periodic replacement of electronics. And the comparisons need to be honest. For example, every other wireless technology I know requires more maintenance truck rolls than fiber-based technologies due to the squirrelly nature of how wireless behaves in the wild.

Even should the radios become much cheaper than fiber drops, the business case for the technology might still have no legs. There is no way to get around the underlying fact that fiber-to-the-curb means building fiber along residential streets. Verizon has always said that they didn’t extend their fiber FiOS network to neighborhoods where the construction costs were too high. Verizon still seems to be the most cautious of the big ISPs and it’s hard to think that they’ve changed this philosophy. Perhaps the Verizon business plan is to cherry pick in markets outside their footprint, but only where they have the low-cost option of overlashing fiber. If that’s their real business plan then they will not be conquering the world with 5G, but just cherry picking neighborhoods that meet their price profile – a much smaller footprint and business plan than most of the industry is expecting.

My hope is that the rest of the industry starts referring to this technology as fiber-to-the-curb instead of calling it 5G. The wireless companies have gained great advantage from using the 5G name for multiple technologies. They have constantly used the speeds from the fiber-to-the-curb trials and the hot spot trials to make the public think the future means gigabit cellular service. It’s time to start demystifying 5G and using a different name for the different technologies.

Once this is understood it ought to finally be clear that millimeter wave fiber-to-the-curb is not coming everywhere. This sounds incredibly expensive to build in neighborhoods with already-buried utilities. Where density is low it might turn out that fiber-to-the-curb is more expensive than fiber-to-the-home. The big cost advantage seems to come from hitting multiple homes from one pole transmitter. Over time, when anybody can buy the needed components of the technology the best business case will become apparent to us all – for now the whole industry is guessing about what Verizon is doing because we don’t understand the basic costs of the technology.

At the end of the day this is just another new technology to put into the quiver when designing last mile networks. There will undoubtedly be places where fiber-to-the-curb has a cost advantage over fiber drops. Assuming that Verizon or somebody else builds enough of the technology to pull hardware prices down, I picture that a decade from now fiber overbuilders will consider fiber-to-the-curb as part of the mix in designing the last few hundred feet.

We Need Public 5G Spectrum

Last October the FCC issued a Notice for Proposed Rulemaking that proposed expanding WiFi into the 6 GHz band of spectrum (5.925 to 7.125 GHz). WiFi has been a huge economic boon to the country and the FCC recognizes that providing more free public spectrum is a vital piece of the spectrum puzzle. Entrepreneurs have found a myriad of inventive ways to use WiFi that go far beyond what carriers have provided with licensed spectrum.

In much of the country the 6 GHz spectrum is likely to be limited to indoor usage due to possible outdoor interference with the Broadcast Auxiliary Service, where remote crews transmit news feeds to radio and TV stations, and the Cable Television Relay Service, which cable companies use to transmit data internally. The biggest future needs for WiFi are going to be indoors, so restricting this spectrum to indoor use doesn’t feel like an unreasonable limitation.

However, WiFi has some inherent limitations. The biggest problem with the WiFi standard is that a WiFi network will pause to give any device a chance to use the bandwidth. In a crowded environment with a lot of devices, the constant pausing adds latency and delay, and in heavy-use environments like a business hotel the constant pauses can nearly shut down a WiFi network. Most of us don’t feel that contention today inside our homes, but as we add more and more devices over time we will start to notice the inherent WiFi contention in our networks. The places where WiFi contention is already a big concern are heavy wireless environments like hospitals, factories, airports, business hotels, and convention centers.
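A toy model shows why adding devices hurts. This is not the real 802.11 protocol – actual WiFi uses CSMA/CA with randomized backoff – but even a crude slotted model, where each device transmits in a time slot with a fixed probability and a slot is wasted whenever two devices collide, captures how the share of useful airtime collapses as the device count grows:

```python
# Toy slotted-contention model (not real 802.11): in each time slot, every
# device transmits with probability p; a slot carries useful traffic only
# when exactly one device transmits. P(useful) = n * p * (1-p)**(n-1),
# which rises at first and then collapses as n grows.
p = 0.10  # per-slot transmit probability for each device

for n in (1, 2, 5, 10, 25, 50):
    useful = n * p * (1 - p) ** (n - 1)
    print(f"{n:3d} devices: {useful:.1%} of slots carry useful traffic")

# 10 devices: ~38.7% useful; 50 devices: ~2.9% useful
```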

Many of our future computing needs are going to require low latency. For instance, creating home holograms from multiple transmitters is going to require timely delivery of packets to each transmitter. Using augmented reality to assist in surgery will require delivery of images in real time. WiFi promises to get better with the introduction of WiFi 6 using the 802.11ax standard, but that new standard does not eliminate the innate limitations of WiFi.

The good news is that we already have a new wireless standard that can create low-latency, dedicated signal paths to users. Fully implemented 5G with frequency slicing can be used to satisfy those situations where WiFi doesn’t meet the need. It’s not hard to picture a future indoor network where a single router satisfies some user needs using the WiFi standard and others using 5G – the router will choose the best standard to use for a given need.

To some degree the cellular carriers have this same vision. They talk of 5G being used to take over IoT needs instead of WiFi. They talk about using 5G for low latency uses like augmented reality. But when comparing the history of the cellular networks and WiFi it’s clear that WiFi has been used far more creatively. There are thousands of vendors working in today’s limited WiFi spectrum who have developed a wide array of wireless services. Comparatively, the cellular carriers have been quite vanilla in their use of cellular networks to deliver voice and data.

I have no doubt that AT&T and Verizon have plans to offer million-dollar 5G solutions for smart factories, hospitals, airports and other busy wireless environments. But in doing so they will tap only a tiny fraction of the capability of 5G. If we want 5G to actually meet the high expectations that the industry has established, we ought to create a public swath of spectrum that can use 5G. The FCC could easily empower the use of the 6 GHz spectrum for both WiFi and 5G, and in doing so would unleash wireless entrepreneurs to come up with technologies that haven’t even been imagined.

The current vision of the cellular carriers is to somehow charge everybody a monthly subscription to use 5G – and there will be enough devices using the spectrum that most people will eventually give in and buy the subscription. However, the big carriers are not going to be particularly creative, and instead are likely to be very restrictive on how we use 5G.

The alternate vision is to set aside a decent slice of public spectrum for indoor use of 5G. The public will gain use of the spectrum by buying a 5G router, with no monthly subscription fee – because it’s using public spectrum. After all, 5G is just a standard, developed worldwide, and is not the proprietary property of the big cellular companies. Entrepreneurs will jump on the opportunity to develop great uses for the spectrum and the 5G standard. Rather than being held captive by the limited vision of AT&T and Verizon, we’d see a huge number of devices using 5G creatively. This could truly unleash things like augmented reality and virtual presence. Specialty vendors would develop applications that make great strides in hospital health care. We’d finally see smart shopping holograms in stores.

The public probably doesn’t understand that the FCC has complete authority over how each swath of spectrum is used. Only the FCC can determine which spectrum can or cannot be used for WiFi, 5G and other standards. The choice ought to be an easy one. The FCC can let a handful of cellular companies decide how society will use 5G or they can unleash the creativity of thousands of developers to come up with a myriad of 5G applications. We know that creating public spectrum creates immense societal and economic good. If the FCC hadn’t set aside public spectrum for WiFi we’d all still have wires to all our home broadband devices and many of the things we now take for granted would never have come to pass.

Another Story of Lagging Broadband

We don’t really need any more proof that the FCC broadband data is massively out of touch with reality. However, it seems like I see another example of this almost weekly. The latest news comes from Georgia where the Atlanta Journal-Constitution published an article that compared actual broadband speeds measured by speed tests to the FCC data. The newspaper analyzed speed tests from June through December 2017 and compared those results to the FCC databases of supposed broadband speeds for the same time period. Like everywhere else that has done this same comparison, the newspaper found the FCC data speeds to be overstated – in this case, way overstated.

The newspaper relied on speed tests provided by Measurement Labs, an Internet research group that includes Google, the Code for Science & Society, New America’s Open Technology Institute, and Princeton University’s PlanetLab. These speed tests showed an average Internet speed of only 6.3 Mbps in areas where the FCC data reported speeds of 25 Mbps are available.

Anybody who understands the FCC mapping methodology knows that you have to make such a comparison carefully. The FCC maps are supposed to show available speeds, not actual speeds, so to some degree the newspaper is comparing apples and oranges. For instance, when multiple speeds are available, some people still elect to buy slower speeds to save money. I would expect the average speed in an area where 25 Mbps is the fastest broadband to be something lower than that.
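One way to make the check more apples-to-apples is to compare the FCC’s claimed speed for an area against the top end of the measured tests rather than the average, since the fastest tests should approach the claimed available speed even when many households buy slower tiers. A minimal sketch, with hypothetical DataFrame and column names:

```python
# A sketch of checking FCC claimed speeds against measured speed tests,
# assuming two hypothetical pandas DataFrames: `tests` with columns
# [census_block, download_mbps] (one row per speed test) and `fcc` with
# columns [census_block, claimed_mbps] (the map's claimed available speed).
import pandas as pd

def flag_overstated_blocks(tests: pd.DataFrame, fcc: pd.DataFrame) -> pd.DataFrame:
    # Use the 90th-percentile test in each block rather than the average:
    # subscribers on cheap slow tiers drag the average down, but the fastest
    # tests should approach the claimed speed if the map is accurate.
    measured = (tests.groupby("census_block")["download_mbps"]
                     .quantile(0.9)
                     .rename("p90_measured_mbps")
                     .reset_index())
    merged = fcc.merge(measured, on="census_block")
    # Flag blocks where even the fast end of the tests falls far short of
    # the claim (the half-the-claimed-speed threshold is arbitrary).
    merged["overstated"] = merged["p90_measured_mbps"] < 0.5 * merged["claimed_mbps"]
    return merged
```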

However, the ultralow average speed test result of 6.3 Mbps points out a big problem in rural Georgia – homes electing to buy lower speeds can’t possibly account for that much of a difference. One thing we now know is that an area shown by the FCC to have 25 Mbps broadband is probably served by DSL and perhaps by fixed wireless. The vast majority of cable companies now have speeds much faster than 25 Mbps, and areas shown on the maps as served by cable companies will show available speeds of at least 100 Mbps, and in many cases now show 1 Gbps.

The only way to explain the speed test results is that the FCC maps are wrong and the speeds in these areas are not really at the 25 Mbps level. That highlights one of the big fallacies in the FCC database, which is populated by the ISPs. The telcos are reporting speeds of ‘up to 25 Mbps’ and that’s likely what they are also marketing to customers in these areas. But in reality, much of the DSL is not capable of speeds close to that level.

The newspaper also gathered some anecdotal evidence. One of the areas that showed a big difference between FCC potential speed and actual speed is the town of Social Circle, located about 45 miles east of Atlanta. The newspaper contacted residents there who report that Internet speeds are glacial and nowhere near the 25 Mbps reported on the FCC maps. Several residents told the newspaper that the speeds are too slow to work from home – one of the major reasons that homes need faster broadband.

Unfortunately, there are real-life ramifications from the erroneous FCC maps. There have been several grant programs that could have provided assistance for an ISP to bring faster broadband to places like Social Circle – but those grants have been limited to places that have speeds less than 25 Mbps – the FCC definition of broadband. Areas where the maps are wrong are doubly condemned – they are stuck with slow speeds but also locked out of grant programs that can help to upgrade the broadband. The only beneficiaries of the bad maps are the telcos, who continue to sell inadequate DSL in towns like Social Circle where people have no alternative.

The State of Georgia has undertaken an effort to produce its own broadband maps in an attempt to accurately identify the rural broadband situation. The University of Georgia analyzed the FCC data, which shows there were 638,000 homes and businesses that couldn’t get Internet with speeds of at least 25 Mbps. The state mapping effort is going to tell a different story, and if the slow speeds indicated by the speed tests are still true today then there are going to be many more homes that actually don’t have broadband.

It seems like every examination of the FCC mapping data shows the same thing – widespread claimed broadband coverage that’s not really there. Every time the FCC tells the public that we’re making progress with rural broadband, they are basing their conclusions on maps they know are badly flawed. It’s likely that there are millions more homes without broadband than the FCC claims – something they don’t want to acknowledge.