Will Hyper-inflation Kill Broadband Projects?

The broadband industry is facing a crisis. We are poised to build more fiber broadband in the next few years than has been built over the last four decades. Unfortunately, this peak in demand hits a market that was already superheated, and at a time when pandemic-related supply chain issues are driving up the cost of broadband network components.

The numbers I am hearing from clients are truly disturbing. Just in the last few weeks, I’ve heard that the cost of conduit and resin-based components like handholes and pedestals is up 40%. I’ve heard mixed messages on fiber – some say prices are up as much as 20%, while others are seeing only small increases. I think the increases have to do with the specific kind of fiber being purchased as well as the history of the buyer – the biggest increases are hitting new or casual fiber buyers. I’ve heard the cost of fiber-related components like pigtails is also way up.

The kinds of numbers I’m hearing can only be classified as hyper-inflation. It’s way outside the bounds of normalcy when the cost of something is up 20% to 40% in a year. I’ve been listening to a lot of economists lately who say that many price increases that are due to the pandemic are temporary in nature and that what we are seeing is a price bubble – they predict prices ought to revert to old levels over time in competitive markets where multiple providers of components will be bidding for future business.

But I keep looking at the upcoming plans for the country collectively to build fiber, and it looks like we might see a superheated industry for the rest of this decade. Even as much of the rest of the economy gets back to normal, it’s not hard to envision fiber prices staying elevated.

This leads me to ask: is this hyper-inflation going to kill fiber projects? I start with the RDOF winners – the amount of subsidy they get is fixed and won’t be adjusted for higher costs. At what point do some of the RDOF projects stop making sense? The business modelers for these companies must be working overtime to see if the projects still work with the higher costs. It won’t be shocking to see some of the RDOF winners give up and return Census blocks to the FCC.

But this same thing affects the winners of every other grant. Consider the recent grant filings with the NTIA. Those were some of the most generous grants ever awarded, with the NTIA program picking up as much as 75% of the cost of a project. What happens to the winners of those grants if materials are much more costly when they go to build the project? Any extra cost must be borne by the grant winner, meaning that the real matching could be a lot more than 25%. Some grant winners are going to have a hard time finding extra funding to cover the extra costs. Some of these projects are in extremely rural places, and one has to worry that having to pay extra might make the difference between a project making sense and not. Even with grants, it’s often a fine line between a project being feasible or not.
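The squeeze on grant winners is easy to quantify. Here is a back-of-envelope sketch in Python – the 75% grant share comes from the NTIA program described above, while the project size and the 40% cost increase are purely illustrative assumptions:

```python
# Back-of-envelope: how cost inflation inflates a grant winner's real match.
# The 75% grant share is from the NTIA program discussed above; the project
# size and the 40% cost increase are illustrative assumptions.

def effective_match(original_cost, grant_share, cost_increase):
    """Return the winner's real matching percentage after a cost overrun."""
    grant = original_cost * grant_share              # frozen at award time
    actual_cost = original_cost * (1 + cost_increase)
    winner_pays = actual_cost - grant
    return winner_pays / actual_cost

# A hypothetical $10 million project, 75% grant, 40% materials inflation:
share = effective_match(10_000_000, 0.75, 0.40)
print(f"Real match: {share:.0%}")   # roughly 46%, not the nominal 25%
```

The point is that the grant amount is frozen at award time, so every dollar of cost inflation lands entirely on the winner’s side of the match.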

This same worry has to be spreading through the big ISPs. Companies like Frontier, Windstream, and Lumen are betting their future viability on building more fiber. How do those plans change when fiber is a lot more expensive?

The worst thing is that we have no idea where prices will peak. We’ve not really yet seen any market impact from RDOF and other big grant programs. We’ve seen some impact from CARES spending, but that was a drop in the bucket compared to what we’re likely to see from ARPA and federal infrastructure spending.

I have a lot of nervous clients, and I have no good advice for them on this issue. Should they buy as much fiber and components as they can now before prices rise even more, or should they wait and hope for parts of the market to return to normalcy? What we’re experiencing is so far outside the bounds of normal that we have no basis for making decisions. I chatted with a few folks recently who speculated that the best investment they could make this year would be to buy $1 million of fiber reels and sit on them for a year – they might be right.

Video Meetings are the New Normal

One of the big changes that came out of the pandemic will have a permanent impact on broadband networks. Holding online meetings on Zoom, Microsoft Teams, GoToMeeting, and other video platforms has become a daily part of business for many companies.

This article in the New York Times cites predictions that businesses will cut down on travel by 20% to 50%. That will have a huge impact over time on the airline and hotel industries. As a lifelong road warrior, I recall the relief every year when the school year started back in September and airports returned mostly to business travelers. It will be interesting to see if airports really do become more deserted in the future during the business-only travel months.

But the real boon for businesses from less travel will be lower expenses and increased productivity. I can’t count the number of times that I traveled somewhere for a one- or two-hour meeting – something that has now fallen off my radar. We’re going to replace rushing to make a flight with the use of broadband.

What is interesting is how hard we tried in the past to make video conferencing into an everyday thing. Everybody my age remembers these AT&T commercials from 1993 that predicted that video conferencing, working remotely, digital books, and GPS navigation would become a part of daily life. Most of the predictions made by those commercials became a reality much sooner than common video calling. Whole new industries have been built around digital books, and GPS is seemingly built into everything.

The business world fought against video conferencing. I recall a client from 20 years ago who had invested in an expensive video conference setup and insisted on either meeting in person or holding a video conference. I recall the hassle of having to rent a local video conferencing center to talk to this client – but even then, I could see how that expense was far better than a wasted day in an airport and a night in a hotel.

I don’t know how typical my workday is, but I probably average 3 hours per day on video calls. I always hated long telephone calls, but I like the experience of seeing who I’m talking to. Talking to clients and colleagues multiple times by video chat, instead of at an occasional live meeting, has created real bonds.

A few weeks ago, I wrote about the concept of broadband holding times to account for the fact that we are tying up broadband connections for hours with video chats or connecting to a work or school server. I’m not sure that we’ve fully grasped what this means for broadband networks. Most network engineers had metrics they used for estimating the amount of bandwidth required to serve a hundred or a thousand customers. That math goes out the door when a significant percentage of those customers are spending hours on video chats that use a small but continuous 2-way bandwidth connection.
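To see why continuous video sessions break the old estimating math, here is a toy capacity model. Every number in it is an illustrative assumption – the per-session rates and user mix are not engineering standards:

```python
# Toy node-capacity model: why hours-long video sessions change the math.
# All rates and percentages here are illustrative assumptions, not standards.

def node_demand_mbps(customers, video_share, video_rate=3.0, bursty_avg=0.5):
    """Rough sustained demand (Mbps) for a node of customers.

    video_share: fraction of customers holding a continuous video call
    video_rate:  assumed steady rate per video session, in Mbps
    bursty_avg:  assumed long-run average rate of a bursty web user, in Mbps
    """
    on_video = customers * video_share
    bursty = customers - on_video
    return on_video * video_rate + bursty * bursty_avg

# 1,000 customers: a pre-pandemic evening (2% on video) vs. a work-from-home
# afternoon (30% on video) - sustained demand more than doubles.
print(node_demand_mbps(1000, 0.02))   # about 550 Mbps
print(node_demand_mbps(1000, 0.30))   # about 1,250 Mbps
```

The bursty users barely register, while a modest shift of customers onto continuous sessions moves the whole demand curve – which is exactly what breaks per-hundred-customer planning metrics.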

We’re not likely to fully grasp what this means for another year until the pandemic is fully behind us, and companies settle into a new normal. I know I’m not going to be in airports in the future like I was in the past, and many people I’ve talked to feel the same way.

Multi-gigabit Broadband

There have been a few ISPs in recent years that have quietly offered residential broadband with speeds up to 10 gigabits. However, this year has seen an explosion of ISPs marketing multi-gigabit broadband.

I recall an announcement from Google Fiber last year offering an upgrade to 2-gigabit service in Nashville and Huntsville for $100 per month. Since then, the company has expanded the offer to other markets, including Atlanta, Charlotte, Kansas City, Raleigh-Durham, Austin, Salt Lake City, Provo, and Irvine.

Not to be outdone, Comcast Xfinity announced a 2-gigabit product, likely available in those markets where Google Fiber is competing. But Comcast doesn’t seem to really want to sell the product yet, having priced it at $299.95 per month. We saw the same high pricing when Comcast first introduced gigabit service – it gave them the bragging rights for having the fastest product, but the company was clearly not ready to widely sell it.

Midco, the cable company, markets speeds up to 5 gigabits in places where it has built fiber. In recent months I’ve seen announcements from several rural cooperatives and telcos that are now offering 2-gigabit speeds.

This feels like a largely marketing-driven phenomenon, with ISPs trying to distinguish themselves in the market. It was inevitable that we’d see faster speeds after the runaway popularity of 1-gigabit broadband. OpenVault reported that, as of June of this year, 10.5% of all broadband subscribers are buying a gigabit product. It makes sense that some of these millions of customers could be lured to spend more for even faster speed.

There are still a lot of broadband critics who believe that nobody needs gigabit broadband. But you can’t scoff at a product that millions are willing to buy. Industry pundits thought Google Fiber was crazy a decade ago when it announced that its basic broadband speed was going to be 1-gigabit. At that time, most of the big cable companies had basic broadband products at 60 Mbps, with the ability to buy speeds as fast as 200 Mbps.

It was clear then, and is still true today, that a gigabit customer can rarely, if ever, download from the web at a gigabit speed – the web isn’t geared to support that much speed the whole way through the network. But customers with gigabit broadband will tell you there is a noticeable difference between gigabit broadband and more normal broadband at 100 Mbps. People can perceive the improvement that comes with gigabit speed.

The most aggravating thing about the debate about multi-gigabit speeds is how far the regulators have fallen behind the real world. According to OpenVault, the percentage of homes that subscribe to broadband with speeds of 100 Mbps or faster has grown to 80% of all broadband subscribers. We know in some markets that delivered speeds are less than advertised speeds – but the huge subscription levels are proof that subscribers want fast broadband.

Satellite Companies Fighting over RDOF

There has been an interesting public fight going on at the FCC as Viasat has been telling the FCC that Elon Musk’s Starlink should not be eligible for funding from the Rural Digital Opportunity Fund (RDOF). At stake is the $886 million that Starlink won in December’s RDOF auction that is still under review at the FCC.

Viasat had originally filed comments at the FCC stating that the company did not believe that Starlink could fulfill the RDOF requirements in some of the grant award areas. Viasat’s original filings listed several reasons why Starlink couldn’t meet its obligations, but the primary one was that Starlink technology is incapable of serving everybody in some of the more densely populated RDOF award areas. Viasat calculated the number of potential customers inside 22-kilometer diameter circles – the area that it says can be covered by one satellite. According to Viasat’s math, the most customers that could reasonably be served in one circle is 1,371 – and the company identified 17 RDOF areas with a greater number of households, the largest having 4,126 locations.
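Viasat’s density argument boils down to simple geometry plus a capacity ceiling. Here is a sketch using only the figures from its filing (the circle size and per-circle customer cap are Viasat’s numbers; the code is just arithmetic):

```python
import math

# Viasat's density argument, using the figures from its FCC filing:
# one satellite covers a 22-kilometer-diameter circle and can reasonably
# serve at most 1,371 customers inside that circle.

DIAMETER_KM = 22
MAX_CUSTOMERS_PER_CIRCLE = 1371

coverage_area_sq_km = math.pi * (DIAMETER_KM / 2) ** 2   # about 380 sq km

def over_capacity(households_in_circle):
    """True if an award area packs more homes into one circle than fits."""
    return households_in_circle > MAX_CUSTOMERS_PER_CIRCLE

# The largest area Viasat flagged put 4,126 locations inside one circle:
print(over_capacity(4126))   # True - about three times the claimed ceiling
```

Whether the 1,371-customer ceiling is the right number is exactly what the dueling engineers are arguing about; the geometry itself is not in dispute.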

There have been similar claims made by others in the industry who say that Starlink will be good for serving remote customers, but that the technology is not capable of being the only ISP in an area and serving most of the homes simultaneously.

Last month, Viasat made an additional claim that Starlink does not have sufficient backhaul bandwidth to serve a robust constellation. This stems from an ongoing tug-of-war at the FCC over 12 GHz spectrum. Starlink wants this spectrum to enable it to create more ground stations for transferring data to and from the satellite constellation. This is spectrum that Dish Network owns and wants to repurpose for 5G. Dish Network has offered a spectrum-sharing plan that would greatly reduce Starlink’s use of the spectrum. The FCC filings on the topic are interesting reading, as wireless engineers on both sides of the issue essentially argue that everything the other side says is wrong. I’m not sure how the FCC ever decides which side is right.

The latest Viasat criticism of Starlink is based upon public statements made by Elon Musk at the Barcelona MWC conference, where he commented on how hard it is to fund the satellite business. Musk said that the business is likely to need between $20 billion and $30 billion in additional investment to reach the goal of over 11,000 satellites. Musk said his first priority is just to make sure that Starlink doesn’t go bankrupt. Viasat says that this is evidence that Starlink is a ‘risky venture’, something the FCC originally said should not be eligible for the federal RDOF subsidy.

Starlink recently asked the FCC to ignore everything that Viasat has filed and said that the Viasat comments are anti-competitive and are a ‘sideshow’. This has to be a huge puzzler for the FCC. We already see Starlink bringing good broadband to remote places that don’t have any broadband today. But the question in front of the FCC is not if Starlink can be a good ISP, but whether the company deserves a 10-year federal subsidy to support the business. Obviously, if Starlink needs at least $20 billion more to be viable, then getting or not getting the $886 million spread over ten years is not going to make a difference in whether Starlink makes it as a company.

The FCC is in a bind because many of these same issues were raised before the RDOF auction in an attempt by others to keep Starlink out of the auction. It wasn’t hard to predict that Starlink would win the subsidy in some of the most remote places in the country since it was willing to bid lower than other ISPs. The FCC voted to allow Starlink into RDOF just before the auction, and is now seeing that original decision challenged.

It’s also an interesting dilemma because of the possibility of an infrastructure plan by Congress that would fund fiber in most of the places won by Starlink. Would the FCC have allowed Starlink into the RDOF had it known about the possibility of such federal grants? I would have to guess not. The FCC is now faced with depriving those areas of a permanent broadband solution if it continues with the plan to give the RDOF to Starlink. That would just be bad policy.

Demystifying Oversubscription

I think the concept that I have to explain the most as a consultant is oversubscription, which is the way that ISPs share bandwidth between customers in a network.

Most broadband technologies distribute bandwidth to customers in nodes. ISPs using passive optical networks, cable DOCSIS systems, fixed wireless technology, and DSL all distribute bandwidth to a neighborhood device of some sort that then distributes the bandwidth to all of the customers in that neighborhood node.

The easiest technology to demonstrate this with is a passive optical network, since most ISPs serve nodes of only 32 homes or fewer. GPON technology delivers 2.4 gigabits of download bandwidth to the neighborhood node to share among 32 households.

Let’s suppose that every customer has subscribed to a 100 Mbps broadband service. Collectively, for the 32 households, that totals to 3.2 gigabits of demand – more than the 2.4 gigabits that is being supplied to the node. When people first hear about oversubscription, they think that ISPs are somehow cheating customers – how can an ISP sell more bandwidth than is available?

The answer is that the ISP knows that it’s a statistical certainty that all 32 customers won’t use the full 100 Mbps download capacity at the same time. In fact, it’s rare for a household to ever use the full 100 Mbps capability – that’s not how the Internet works. Let’s say a given customer is downloading a huge file. Even if the server at the other end of that transaction has a fast connection, the signal doesn’t come pouring in from the Internet at a steady speed. Packets have to find a path between the sender and the receiver, and the packets come in unevenly, in fits and starts.

But that doesn’t fully explain why oversubscription works. It works because all of the customers in a node never use a lot of bandwidth at the same time. On a given evening, some of the people in the node aren’t at home. Some are browsing the web, which requires minimal download bandwidth. Many are streaming video, which requires a lot less than 100 Mbps. A few are using the bandwidth heavily, like a household with several gamers. But collectively, it’s nearly impossible for this particular node to use the full 2.4 gigabits of bandwidth.

Let’s instead suppose that everybody in this 32-home node has purchased a gigabit product, like the one delivered by Google Fiber. Now the collective possible bandwidth demand is 32 gigabits, far greater than the 2.4 gigabits being delivered to the neighborhood node. This is starting to feel more like hocus pocus, because the ISP has sold 13 times the capacity that is available to the node. Has the ISP done something shady here?

The chances are extremely high that it has not. The reality is that the typical gigabit subscriber doesn’t use a lot more bandwidth than a typical 100 Mbps customer. And when a gigabit subscriber does download something, the download finishes quicker, meaning that the transaction has less chance of interfering with transactions from neighbors. Google Fiber knows it can safely oversubscribe at thirteen to one because it knows from experience that there is rarely enough usage in the node to exceed the 2.4-gigabit download feed.
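The ratios in the two scenarios above are easy to compute directly. In this sketch, the PON capacity and node size come from the examples in this post, while the busy-hour usage figure is an illustrative assumption:

```python
# Oversubscription ratio: bandwidth sold to a node vs. bandwidth delivered.
# PON capacity and node size are from the examples above; the busy-hour
# usage per home is an illustrative assumption.

PON_DOWNSTREAM_GBPS = 2.4   # GPON download capacity shared by the node
HOMES_PER_NODE = 32

def oversubscription_ratio(speed_sold_gbps):
    """How many times over the ISP has sold the node's capacity."""
    return (HOMES_PER_NODE * speed_sold_gbps) / PON_DOWNSTREAM_GBPS

print(oversubscription_ratio(0.1))   # 100 Mbps tier: about 1.3 to 1
print(oversubscription_ratio(1.0))   # gigabit tier: about 13.3 to 1

# The ratio only bites if simultaneous usage approaches capacity. With an
# assumed busy-hour average of 6 Mbps per home, actual node demand is:
busy_hour_demand_gbps = HOMES_PER_NODE * 0.006
print(busy_hour_demand_gbps)         # 0.192 Gbps - far below 2.4 Gbps
```

The gap between the sold ratio and the actual busy-hour demand is the whole trick of oversubscription – the ratio looks alarming on paper, but the node almost never comes close to the shared capacity.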

But it can happen. If this node is full of gamers, and perhaps a few super-heavy users like doctors who view big medical files at home, the node could have problems at this level of oversubscription. ISPs have easy solutions for this rare event. The ISP can move some of the heavy users to a different node. Or the ISP can even split the node into two, with 16 homes on each node. This is why customers with a quality-conscious ISP rarely see any glitches in broadband speeds.

Unfortunately, this is not true with the other technologies. DSL nodes are overwhelmed almost by definition. Cable and fixed wireless networks have always been notorious for slowing down at peak usage times when all of the customers are using the network. Where a fiber ISP won’t put more than 32 customers on a node, it’s not unusual for a cable company to have a hundred customers on one.

Where the real oversubscription problems are seen today is on the upload link, where routine household demand can overwhelm the size of the upload link. Most households using DSL, cable, and fixed wireless technology during the pandemic have stories of times when they got booted from Zoom calls or couldn’t connect to a school server. These problems are fully due to the ISP badly oversubscribing the upload link.

Farm Fresh Broadband

I was lucky enough to get an advanced copy of Farm Fresh Broadband by University of Virginia professor Christopher Ali. It’s a great read for anybody interested in rural broadband. The book is published by MIT Press and is now available for pre-order on Amazon.

The first half of the book discusses the history and the policies that have shaped rural broadband, and my review of his book will focus on this early discussion, which is near and dear to my heart. Ali hits on the same topics that I have been writing about in this blog for years. Of particular interest was Ali’s section talking about the policy failures that have led to the poor state of rural broadband today. Ali correctly points out that “we have a series of policies and regulations aimed at serving the interests of monopoly capital rather than the public interest and the public good”. Ali highlights the following policy failures that have largely created the rural digital divide:

  • Definition-Based Policy. The FCC has been hiding behind its 25/3 Mbps definition of broadband since 2015. We still see this today when current federal grants all begin with this massively outdated definition of broadband when defining what is eligible for grant funding. We recently passed a milestone where over 10% of all households in the country are subscribed to gigabit broadband, and yet we are still trying to define rural broadband using the 25/3 Mbps standard. Unfortunately, sticking with this policy crutch has led to disastrously poor allocation of subsidies.
  • Technology Neutrality. Ali points to regulators who refuse to acknowledge that there are technologies and carriers that are not worthy of federal subsidies. This policy is largely driven by lobbying by the big ISPs in the industry. It led to the completely wasted $10 billion CAF II subsidy that was given to shore up DSL at a time when it was already clear that DSL was a failure as a rural technology. This same lack of regulatory backbone has continued as we’ve seen federal subsidies given to Viasat in the CAF II reverse auction and Starlink in the RDOF. The money wasted on these technologies could have instead been invested in bringing permanent broadband solutions to rural areas. It looks like Congress is going to continue this bow to the big ISPs by allowing grants to be awarded for any technology that can claim to deliver 100/20 Mbps.
  • Mapping. Ali highlights the problems with FCC mapping that disguises the real nature of rural broadband. He points to the example of Louisa County, Virginia, where the FCC maps consider the county to have 100% broadband coverage at 25/3 Mbps. It turns out that 40% of this coverage comes from satellite broadband. Much of the rest comes from overstatements by the telcos in the county of the actual speeds. M-Lab speed tests show the average speeds in the county as 3.91 Mbps download and 1.69 Mbps upload – something that was not considered as broadband a decade ago by the FCC. Unfortunately, Louisa County is not unique, and there are similar examples all over the country where poor mapping policies have deflected funding away from the places that need it the most.
  • Localism. There are hundreds of examples where small regional telephone companies and cooperatives have brought great broadband to pockets of rural America. We have made zero attempt to duplicate and spread these success stories. In the recent CAF II awards we saw just the opposite, with huge amounts of money given to companies that are not small and not local. We already know how to fix rural broadband – by duplicating the way we electrified America by loaning money to local cooperatives. But regulators would rather hand out huge grants to giant ISPs. When we look back in a few decades at the results of the current cycle of grant funding, does anybody really believe that a big ISP like Charter will bring the same quality of service to communities as rural cooperatives?

The second half of the book is the really interesting stuff, and all I will supply for that are some teasers. Ali describes why farmers badly need broadband. He describes the giant bandwidth needed for precision agriculture, field mapping, crop and livestock monitoring, and overall management of farms to maximize yields. One of the statistics he cites is eye-opening: fully deployed smart agriculture could generate 15 gigabytes of data annually for every 1,000 acres of fields. With current land under cultivation, that equates to more than 1,300 terabytes of data per year. We have a long way to go to bring farms the broadband they need to move into the future.
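The scale of that estimate is easy to sanity-check. The 15 GB per 1,000 acres figure is from the book; the acreage below is my own assumption, chosen as a rough round number that turns out to be consistent with the total Ali cites:

```python
# Sanity check on the smart-agriculture data estimate cited above.
# The 15 GB per 1,000 acres figure is from the book; the acreage is my
# assumption, a round number consistent with the ~1,300 TB total cited.

GB_PER_1000_ACRES = 15
ASSUMED_ACRES = 90_000_000   # rough, assumed acreage under cultivation

total_gb = ASSUMED_ACRES / 1000 * GB_PER_1000_ACRES
total_tb = total_gb / 1000
print(total_tb)   # 1350.0 TB - in line with "more than 1,300 terabytes"
```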

I do have one criticism of the book, and it’s purely personal. Ali has a huge number of footnotes from studies, articles, and reports – and it’s going to kill many of my evenings as I slog through the fascinating references that I’ve not read before.

An Update on Telemedicine

I’ve been keeping tabs on the news about telemedicine since it is touted throughout the industry as one of the big benefits of having good broadband. One piece of news comes from a survey conducted by Nemours Children’s Health. This is a large pediatric health system with 95 locations in Delaware, Florida, New Jersey, and Pennsylvania. The company treats almost half a million children annually.

Nemours released a report on Telehealth in July. The report was based on a survey of 2,056 parents/guardians of children. The survey had some interesting results:

There is a Need for Telehealth. 48% of the survey respondents said that they had at least one recent experience where there was a hardship in getting a sick child to a live doctor visit. This included reasons such as living in an unsafe community or not having easy access to transportation. 28% of respondents reported two such occasions, and 15% reported three or more. These are the situations for which telehealth is an ideal solution to get a doctor to look at a sick child when care is needed rather than when the child can be transported to a doctor’s office.

Telehealth Good for Parents. Almost 90% of the respondents to the survey said that telehealth makes it easier for parents to take an active role in a child’s health care. A lot of parents said that somebody other than them takes sick children to see a doctor during the workday, and they love being able to participate first-hand in the discussion with a doctor.

Providers Play a Big Role in Enabling Telehealth. 28% of respondents to the survey said they have never been offered a telehealth visit. 12% said they had never heard of telehealth. Respondents who use telehealth said they were more likely to use the service when it is offered as an option by the health provider.

Reimbursement is Still a Barrier. Two-thirds of parents say that having telehealth visits covered by insurance is essential for them to consider using the service. There was a big push during the pandemic for insurance companies to cover telehealth visits. Nemours is concerned about whether this will continue when things return to normal.

As further evidence that reimbursement is a major issue, a recent article in KHN (Kaiser Health News) shows that there are surprising issues impacting telehealth. The article discusses insurance companies that don’t want to cover telehealth visits where the patient and doctor are in different states. This is based on laws in most states, and also on Medicare and Medicaid rules, that require a licensed clinician to hold a valid medical license in the state where a patient is located.

These laws don’t stop people from voluntarily visiting a doctor in another state, but the law is being raised for telemedicine. This is surfacing as an issue as states start rolling back special rules put in place during the early days of the pandemic.

Johns Hopkins Medicine in Baltimore recently had to cancel over 1,000 telehealth visits with patients in Virginia because such visits would not be covered by insurance. That left patients to find a way to make the physical trip to Johns Hopkins or find another health provider. As someone who has used Johns Hopkins, I can attest that this is the place people from the DC region look to when they need to see the best specialists.

When I first heard about telemedicine a decade ago, the ability to see specialists was one of the biggest cited benefits of telemedicine. These kinds of issues are always more complicated than they seem. For example, state medical boards don’t want to give up the authority to license and discipline doctors that treat patients in the state. Of course, money comes into play since medical licensing fees help to pay for the medical boards. When insurance companies find it too complicated to deal with a gray legal issue, they invariably take the safe path, which in this case is not covering cross-state telemedicine visits.

Probably the only way to guarantee that telemedicine will work would be with legislative action to clear up the gray areas. Add this to the list of broadband topics that need a solution from Congress.

The DOCSIS vs. Fiber Debate

In a recent article in FierceTelecom, Curtis Knittle, the VP of Wired Technologies at CableLabs, argues that the DOCSIS standard is far from the end of its life and that cable company coaxial cable will be able to compete with fiber for many years to come. It’s an interesting argument, and from a technical perspective, I’m sure Mr. Knittle is right. The big question will be whether the big cable companies decide to take the DOCSIS path or bite the bullet and start the conversion to fiber.

CableLabs released the DOCSIS 4.0 standard in March 2020, and the technology is now being field tested ahead of planned deployments through 2022. In the first lab deployment of the technology earlier this year, Comcast achieved a symmetrical 4 Gbps speed. Mr. Knittle claims that DOCSIS 4.0 can outperform the XGS-PON we’re now seeing deployed. He claims that DOCSIS 4.0 will be able to produce a true 10-gigabit output, while actual XGS-PON output is closer to 8.7 Gbps downstream.

There are several issues that are going to drive the decision-making in cable company board rooms. The first is cost. An upgrade to DOCSIS 4.0 doesn’t sound cheap. DOCSIS 4.0 increases system bandwidth by working at higher frequencies – similar to G.Fast on telephone copper. A full upgrade to DOCSIS 4.0 will require ripping and replacing most network electronics. Coaxial networks are getting old, and an upgrade probably also means replacing a lot of the older coaxial cable in the network, along with power taps and amplifiers throughout the outside plant.

Building fiber is also expensive. However, the cable companies have surely learned the lesson from telcos like AT&T and Verizon that there is a huge saving in cost by overlashing fiber onto existing wires. The cable company can install fiber for a lot less than any competitor by overlashing onto existing coax.

There is also an issue of public perception. I think the public believes that fiber is the best broadband technology. Cable companies already see that they lose the competitive battle in any market where fiber is built. The big telcos all have aggressive plans to build fiber-to-the-premise, and there is a lot of fiber coming in the next five years. Other technologies like Starry wireless are also going to nibble away at the urban customer base. All of the alternative technologies to cable have faster upload speeds than the current DOCSIS technology. The cable industry has completely avoided talking about upload speeds because it knows how cable subscribers struggled to work and school from home during the pandemic. How many years can the cable companies stave off competitors that offer a better experience?

There is finally the issue of speed to market. The first realistic date to start implementing DOCSIS 4.0 on a large scale is at least five years from now. That’s five long years to limp forward with underperforming upload speeds. Customers that become disappointed with an ISP are the ones that leap first when there is any alternative. Five years is a long time to cede the marketing advantage to fiber.

The big cable companies have a huge market advantage in urban markets – but they are not invulnerable. Comcast and Charter have both kept Wall Street happy by seeing continuous growth from the continuous capture of disaffected DSL customers. Wall Street is going to have a totally different view of the companies if that growth stops. The wheels likely come off stock prices if the two companies ever start losing customers.

I’ve always thought that the cable companies’ success over the last decade has been due more to having a lousy competitor in DSL than to any great performance of their own. Every national customer satisfaction poll continues to rank cable companies at the bottom, behind even the IRS and funeral homes.

We know that fiber builders do well against cable companies. AT&T says that it gets a 30% market share in a relatively short time everywhere it builds fiber. Over time, AT&T thinks it will sell fiber to 50% of all the households it passes – which translates to a 55% to 60% share of the households that actually buy broadband. The big decision for the cable companies is whether they are willing to watch their market position wane while waiting for DOCSIS 4.0. Are they going to bet another decade of success on aging copper networks? We’ve already seen Altice start the conversion to fiber. It’s going to be interesting to watch the other big cable companies wrestle with this decision.

Another Problem with RDOF

I have been critical of the RDOF awards for a number of reasons, but one of the worst problems isn’t being discussed. When the FCC picked the eligible areas for the RDOF awards, there was no thought about whether the award areas make any sense as a service area for an ISP. Instead, the FCC picked Census blocks that met a narrow definition of speed eligibility without any regard for the nearby Census blocks. The result is that RDOF serving areas can best be described as a checkerboard, with RDOF areas scattered among non-RDOF areas.

The easiest way to show this is with an example. Consider the community of Bear Paw in western North Carolina. This is a community of 200 homes, 42 cottages, and 23 condominiums that sticks out on a peninsula in Lake Hiwassee. The community was founded to house the workers who originally built the Tennessee Valley Authority’s Hiwassee Dam on the Hiwassee River, and today’s community has grown from the original cottages. As you might expect for a small town deep in Appalachia, the town has poor broadband, with the only option today being slow DSL offered by Frontier. Residents describe the DSL as barely functional. This is exactly the kind of area where the RDOF awards were supposed to improve broadband.

Below are two maps. The first is printed from the FCC’s RDOF map – it’s a little hard to read because whoever created the map at the FCC chose a bizarre color combination. The second is a more conventional map of the same area. The red areas on the FCC map are the places where RDOF was claimed by an ISP. As you can see, in a community with only 265 households, the FCC awarded RDOF to some parts of the community and not to others.

The checkerboard RDOF award causes several problems. First, any ISP will tell you that the RDOF award areas are ludicrous – it’s impossible for an RDOF winner to build a coherent network only to the red areas.

And that’s where the second problem kicks in. The RDOF award winner in Bear Paw is Starlink, the satellite company. Starlink is not going to be building any landline broadband. Unfortunately for Bear Paw, giving the award to Starlink makes no sense. All of the lots in Bear Paw are in heavy woods – that’s one of the attractions of living in the community. Everything I’ve read says that satellite broadband from Starlink and others will be sketchy or even impossible in heavily wooded areas.

The obvious solution if Starlink doesn’t work well is for the community to try to find another ISP to build fiber to the community. But getting another ISP to build in Bear Paw won’t be easy. Other federal and state grant programs will not fund the red RDOF areas on the FCC map. Even if Congress passes the infrastructure bill, there might not be enough grant money available for an ISP to make a coherent business case to build to Bear Paw. The FCC checkerboard awards significantly curtail any future grant funding available to serve the community.

The shame of all of this is that almost any other grant program would have brought a real solution for Bear Paw. With most grants, an ISP would have proposed to build fiber to the entire community and would have sized the grant request to make that work. But the RDOF awards are going to make it hard, or even impossible, to ever find solutions for the parts of the checkerboard that the RDOF left behind.

By spraying RDOF awards willy-nilly across the landscape, the FCC has created hundreds of places in the same situation as Bear Paw. The FCC has harmed Bear Paw in several ways. It first allowed a company to win the RDOF using a technology that is not suited to the area. Why wasn’t Starlink barred from bidding in heavily wooded parts of the country? (An even better question might be why Starlink was allowed into the RDOF process at all.) Since no other grants can be given to cover the RDOF areas, there will probably not be enough grant money available from other sources for an ISP to bring fiber to the community. Even if the federal infrastructure funding is enacted and the federal government hands out billions in broadband grant money, towns like Bear Paw are likely going to get left behind. How do you explain to the residents of Bear Paw that the FCC gave out money in a way that might kill their once-in-a-generation chance to get good broadband?

The Migration to Faster Speeds

The OpenVault Broadband Insights Report for the 2nd quarter of 2021 highlights the strength of customer demand for broadband.

The most interesting statistic is the migration of customers to faster broadband tiers. The following table shows the percentage of households subscribed to various broadband speed plans in 2020 and 2021.

                   June 2020   June 2021
Under 50 Mbps        18.4%       10.5%
50 – 99 Mbps         20.4%        9.6%
100 – 199 Mbps       37.8%       47.5%
200 – 499 Mbps       13.5%       17.2%
500 – 999 Mbps        5.0%        4.7%
1 Gbps                4.9%       10.5%

In just the last year, the percentage of households subscribed to gigabit broadband more than doubled, while the percentage subscribed to speeds under 100 Mbps was nearly cut in half – from 38.8% to 20.1%. That translates to many millions of homes upgrading to faster broadband plans over the past year.
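For readers who want to check the math, here is a quick sketch in Python of the year-over-year shifts, with the values from the OpenVault table above hard-coded (the tier labels are my own shorthand, not OpenVault’s):

```python
# Shares of households per speed tier, taken from the OpenVault table above.
tiers_2020 = {"under 50": 18.4, "50-99": 20.4, "100-199": 37.8,
              "200-499": 13.5, "500-999": 5.0, "1 Gbps": 4.9}
tiers_2021 = {"under 50": 10.5, "50-99": 9.6, "100-199": 47.5,
              "200-499": 17.2, "500-999": 4.7, "1 Gbps": 10.5}

# The gigabit tier more than doubled year over year.
gig_growth = tiers_2021["1 Gbps"] / tiers_2020["1 Gbps"]

# The combined share subscribed to less than 100 Mbps was nearly cut in half.
slow_2020 = tiers_2020["under 50"] + tiers_2020["50-99"]
slow_2021 = tiers_2021["under 50"] + tiers_2021["50-99"]

print(f"Gigabit tier grew {gig_growth:.2f}x")                   # ~2.14x
print(f"Under 100 Mbps: {slow_2020:.1f}% -> {slow_2021:.1f}%")  # 38.8% -> 20.1%
```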

OpenVault provides some clues as to why homes are upgrading to faster broadband. Consider the following table that shows the percentage of households using different amounts of total monthly broadband.

                    June 2018   June 2019   June 2020   June 2021
Less than 100 GB      51.6%       42.7%       34.2%       29.5%
100 – 499 GB          37.7%       39.5%       37.6%       38.6%
500 – 999 GB           8.9%       13.7%       19.4%       21.1%
1 – 2 TB               1.7%        3.7%        7.8%        9.3%
Greater than 2 TB      0.1%        0.4%        1.0%        1.5%

The percentage of homes using less than 100 gigabytes of broadband per month has dropped by 43% over three years. At the same time, the percentage of homes using more than a terabyte of data per month has grown by 500%. Having a faster broadband plan doesn’t automatically mean using more broadband, but growing total usage is likely one of the factors leading residential customers to upgrade. Another big factor pushing upgrades is customers looking for faster upload speeds to support working and schooling from home.
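Both headline numbers fall straight out of the table. A minimal sketch of the arithmetic, again with the June 2018 and June 2021 columns hard-coded:

```python
# June 2018 vs. June 2021 columns from the OpenVault usage table above.
under_100gb_2018, under_100gb_2021 = 51.6, 29.5
over_1tb_2018 = 1.7 + 0.1   # "1 - 2 TB" plus "greater than 2 TB" tiers
over_1tb_2021 = 9.3 + 1.5

light_user_drop = (under_100gb_2018 - under_100gb_2021) / under_100gb_2018
terabyte_growth = (over_1tb_2021 - over_1tb_2018) / over_1tb_2018

print(f"Under-100 GB homes: down {light_user_drop:.0%}")  # down ~43%
print(f"Terabyte-plus homes: up {terabyte_growth:.0%}")   # up ~500%
```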

The average household used 433 gigabytes of broadband in June 2021 – the combined download and upload usage (405 GB download, 28 GB upload) for the average American home. To put that number into perspective, look at how it fits into the past trend of average broadband usage.

1st quarter 2018     215 gigabytes
1st quarter 2019     274 gigabytes
1st quarter 2020     403 gigabytes
2nd quarter 2020     380 gigabytes
1st quarter 2021     462 gigabytes
2nd quarter 2021     433 gigabytes

The second quarter 2021 usage is up 14% over the same quarter of 2020, but down about 6% from the first quarter of this year. OpenVault observed that broadband usage seems to be returning to seasonal patterns, and in past years it’s been normal for broadband usage to decrease during the summer.
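Those two comparisons can be sanity-checked against the trend figures above (a quick sketch; the averages are in gigabytes):

```python
# Average household usage from the quarterly trend listed above (GB).
q2_2020, q1_2021, q2_2021 = 380, 462, 433

year_over_year = (q2_2021 - q2_2020) / q2_2020   # vs. the same quarter of 2020
seasonal_dip = (q2_2021 - q1_2021) / q1_2021     # vs. the first quarter of 2021

print(f"Year over year: {year_over_year:+.0%}")      # +14%
print(f"Quarter over quarter: {seasonal_dip:+.0%}")  # -6%
```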

The continued growth in household usage has to be good news for ISPs like Comcast that enforce monthly data caps. OpenVault shows 10.8% of homes now using more than a terabyte of data per month. However, OpenVault also shows that data caps influence customers to curtail usage – in June, the average usage for homes without data caps was 451.6 GB, compared to 421.1 GB for homes with caps.