Treasury Makes it Easier to Fund Broadband

On June 17, the US Treasury Department clarified the rules for using federal ARPA broadband money that is being given to states, counties, cities, and smaller political subdivisions. The new FAQs make it a lot clearer that local governments can use the funds to serve businesses and households that are considered served – meaning they receive broadband speeds over 25/3 Mbps. My first reading of the rules came to the same conclusion, but these clarifications hopefully make this clear for everybody. There was language in the original Treasury Interim rules that might have scared off city and county attorneys from using the funding for broadband. Following is some of the clarifying language from the revised FAQs:

FAQ 6.8 adds the clarifying language that unserved or underserved households or businesses do not need to be the only ones in the service area funded by the project. This is a massively helpful clarification that reveals Treasury’s intent for the funds. The response to this FAQ could previously have been interpreted to mean that the money could only be used to bring broadband to places with speeds below 25/3 Mbps. FAQ 6.9 further makes this same point – while the goal of a broadband project must be to provide service to unserved or underserved areas, a sensible solution might require serving a larger area to be economical, and again, unserved and underserved locations need not be the only places funded by the ARPA funding.

FAQ 6.11 looks at the original use of the term ‘reliably’ when defining the broadband provided to homes and businesses. The Treasury response makes it clear that advertised speeds don’t define broadband speeds, but rather the actual broadband performance experienced by customers.

The use of “reliably” in the IFR provides recipients with significant discretion to assess whether the households and businesses in the area to be served have access to wireline broadband service that can actually and consistently meet the specified threshold of at least 25/3 Mbps – i.e., to consider the actual experience of current broadband customers that subscribe to service at or above the 25/3 Mbps threshold. Whether there is a provider serving the area that advertises or otherwise claims to offer speeds that meet the 25 Mbps download and 3 Mbps upload speed threshold is not dispositive.

FAQ 6.11 goes on to clarify that governments can consider a wide range of information as evidence that broadband is not reliably meeting the 25/3 threshold, including federal or state broadband data (meaning state broadband maps or the newly released NTIA broadband map), speed tests, interviews with residents, or any other relevant information. Local governments can consider issues such as whether speeds are adequate at all times of the day – do speeds bog down at the busiest times? Issues like latency and jitter can also be considered.

Maybe most significantly, the FAQ gives an automatic pass to overbuilding DSL or cable systems still using DOCSIS 2.0. While there are very few homes still served by DOCSIS 2.0, Treasury is allowing localities to basically declare DSL to be obsolete, regardless of any speed claims made by the telcos. This negates the telco claims of 25/3 Mbps rural DSL speeds in tens of thousands of Census blocks – an area served only by DSL is justification to use the funding.

In a clarification that some states and counties will find reassuring, FAQ 6.10 says that the ARPA funding can be used to fund middle-mile fiber as long as it is done with a goal of supporting last-mile fiber.

These were critically important clarifications since there has been a lot of debate at the local level about whether ARPA money could be used in various circumstances. The clarifications make it clear that ARPA money can always be used to overbuild rural DSL. It’s also clear that the ARPA money can be used in urban settings as long as the funded area includes at least one location that doesn’t have a broadband option of at least 25/3 Mbps. There are numerous little pockets in all cities where the cable companies didn’t build and where DSL is the only option. Cities can clearly use this funding to provide support for low-income neighborhoods and places the big ISPs have bypassed or ignored.

Why Do We Give Grants to Huge ISPs?

The blog title is a rhetorical question because we all know why we give federal money to big ISPs – they are powerful companies that have a lot of lobbyists and that make a lot of contributions to politicians. But for some reason, the rest of us don’t talk enough about why giving money to the big ISPs is bad policy.

I could write a week’s worth of blogs detailing reasons why big ISPs don’t deserve grant funding. The public dislikes big ISPs and has rated them for two decades as having the worst customer service among all corporations and entities, disliked even more than insurance companies and the IRS. The public hates talking to big ISPs, because every call turns into a sales pitch to spend more money.

The big ISPs routinely deceive their customers. They routinely advertise special prices and then proceed to bill consumers more than what was promised. They have hidden fees and try to disguise their rates as taxes and fees. The big telcos unashamedly bill rural customers big fees for decrepit DSL that barely works. The telcos have known for over a decade that they can’t deliver what they are peddling.

Cable companies come across as better than the telcos only because their broadband technology is faster. But in every city, there are some neighborhoods where speeds are far slower than advertised – neighborhoods where longstanding network problems never get fixed. I hear stories all of the time about repeated slowdowns and outages. About 30% of the folks we’ve surveyed during the pandemic have said that they couldn’t work from home due to problems with cable company upload speeds.

And then there are the big reasons. The big telcos created the rural broadband crisis. They made a decision decades ago to walk away from rural copper. They quietly cut back on all upgrades and maintenance and eliminated tens of thousands of rural technicians, meaning that customers routinely wait a week or longer to even see a technician.

What’s worse, the big telcos didn’t walk away from rural America honestly. They kept talking about how they could provide good service, to the point that the FCC awarded them $11 billion in the CAF II program to improve rural DSL – we paid them for what they should have routinely done by reinvesting the billions they have collected from rural customers. But rather than use the CAF II money to improve rural DSL, most of the money got pocketed to the benefit of stockholders.

While I think the decision to walk away from rural broadband was made in the boardroom, the worst consequences of the decision were implemented locally. That’s how giant companies work and is the primary reason we shouldn’t give money to big ISPs. Upper management puts pressure on regional vice presidents to improve the bottom line, and it’s the regional managers who quietly cut back on technicians and equipment. Rural broadband didn’t die from one big sweeping decision – it was murdered by thousands of small cutbacks by regional bureaucrats trying to earn better bonuses. I’ve talked to many rural technicians who tell me that their companies have taken away every tool they have for helping customers.

What does this all boil down to? If we give money to the big ISPs to build rural networks, they are going to pocket some of the money like they did with CAF II. But even if they use grant money to build decent rural networks, it’s hard to imagine them being good stewards of those networks. The networks will not get the needed future upgrades. There will never be enough technicians. And every year the problems will get a little worse until we look up in twenty years and see rural fiber networks owned by the big ISPs that are barely limping along. Meanwhile, we’ll see networks operated by cooperatives, small telcos, and municipalities that work perfectly, that offer good customer service, and that have responsive repair and maintenance.

I have a hard time thinking that there is a single policy person or politician in the country who honestly thinks that big ISPs will take care of rural America over time. They’ll take federal money and build the least they can get away with. Then, within only a few years they’ll start to nickel and dime the rural properties as they have always done.

I have to laugh when I hear somebody comparing current rural broadband grant programs to our effort a century ago for rural electrification. That electrification money went mostly to cooperatives and not to the big commercial corporations. We’ve lost track of that important fact when we use the electrification analogy. The government made the right decision by lending money to cooperatives to solve the electricity gap and didn’t give money to the big commercial electric companies that had already shunned rural America.

The main reason we shouldn’t give grants to big ISPs is that solving the rural broadband gap is too important to entrust to companies that we know will do a lousy job. There is nobody who thinks that the big telcos or cable companies will do the right thing in rural America over the long run if we’re dumb enough to fund them.

A Security Warning

Today I am going to talk about something that happened outside of our industry but that should be a concern of every ISP. There is a lesson to be learned from the Colonial Pipeline hack by the DarkSide ransomware group from Russia.

I am positive that if I called my ISP clients, every one of them would tell me that their broadband networks are secure and that there is no way for malware to shut down their broadband network. I would trust that response since most broadband networks are encrypted end-to-end between the core and customers.

But the ISPs would still be wrong. The hack of Colonial Pipeline did not attack the software that operates the pipeline. Instead, the hackers found their way into the computers used for the billing system. When that 10-year-old software got locked, Colonial had no way to take orders, pay the gas suppliers, or bill customers for delivering gas. The money side of the business was locked. Colonial made the decision that it couldn’t operate without that software.

I think that if I asked ISPs whether every computer, laptop, and tablet connected to the OSS/BSS software is totally secure, I would get a different answer. Hackers only need to get into one computer to shut down an ISP’s OSS/BSS. Without that software, most ISPs would not be able to take new orders, answer billing questions, send out new bills, take trouble tickets, or dispatch repair people. With the OSS/BSS software locked, an ISP wouldn’t even be able to look at customer records. Most ISPs would be unable to somehow switch to a manual method of doing things. Most ISPs would have little choice but to pay the ransom if they found themselves in the same position as Colonial.

This is the same approach that ransomware hackers take with many large targets. They shut down the billing systems of hospitals to bring them to a halt. They shut down the supply chain and inventory software of factories to bring them to a screeching halt. Businesses of all types now have sophisticated suites of software that are equivalent to our industry’s OSS/BSS software. Over the last decade, most larger businesses have migrated to a master software platform that controls most of the day-to-day backoffice functions of the business. That automation has been a huge time and dollar saver – but it is the point of attack for malware hackers.

I advise every ISP to take a look at the security of the computers used by staff. That’s where the vulnerabilities are – and that’s what the ransomware folks exploit. Very few ISPs pay the same kind of attention to PCs, laptops, and cellphones as they do to the broadband network. We often don’t keep up with software updates for every device. We let employees take devices home or travel with them and use hotel WiFi.

I would bet that we’ve already had ISPs hacked – because most of the businesses that are hit with ransomware don’t talk about it. They pay the ransom and hope they get up and running again. A company like Colonial had to disclose it because the gas supply chain works on a 24/7 cycle and gas stations started running out of gas soon after the attack.

I am not a security expert, and I don’t have any answers. But I know a lot of clients do not have ironclad security for the backoffice side of the business. As soon as I heard about this hack I realized how this could happen easily in our industry as well.

AT&T’s Plan for Ditching Copper

Jeff Baumgartner of Light Reading recently reported on a wide-ranging discussion by AT&T CEO John Stankey. One of the most interesting parts of the discussion was about AT&T’s plan to use cellular wireless in rural markets to replace DSL.

I’m not going to repeat everything in the article, but the gist is that AT&T hopes to be able to start walking away from rural copper. Stankey was quoted as saying that there is already a voice alternative in rural markets – meaning cellphones. Unfortunately, that ignores the many rural homes with poor cellular coverage. The FCC was going to plow something like $4 billion into a grant program to expand rural cellular coverage, but the misreporting of existing cellular coverage areas by the big cellular carriers put that plan on hold.

Stankey believes that cellular broadband will be the alternative to rural DSL. Verizon has the same strategy but doesn’t serve as many rural markets after having unloaded most of them to Frontier over recent years.

What might a rural cellular data network look like? In most rural counties there are generally only a few existing cell towers – it’s not unusual for this to be a half dozen or fewer. The traditional older cell towers often don’t reach a lot of rural homes since the towers were built for the old cellular model of making sure that cars could get a cell signal along numbered highways. But over time, many counties have added a few more towers for public safety purposes that reach a lot more homes for voice service.

Most people don’t realize that cellular broadband has a lot of the same characteristics as other rural wireless broadband. The signal from the cell towers quickly dies with distance. Depending upon the spectrum being used, cellular broadband can hit speeds of 50-100 Mbps for the first mile from a rural cell site, but the speeds drop off pretty rapidly from that point. Cellular broadband does not travel nearly as far as cellular voice, and rural people are used to the idea of being able to make a call but not being able to grab the web. Cellular data also gets slowed and stopped by hills and other impediments. Any county without a flat topology will have lots of cellular dead spots.

What this means is that cellular broadband is not a pure replacement for landline service. For the typical rural county with a limited number of cellular towers, there are going to be plenty of homes that can’t get a cell signal. There will be a lot more homes that can’t get enough broadband speed to be meaningful.

What Stankey failed to mention in the interview is that AT&T has already walked away from the DSL market. As of last October, the company won’t sign new DSL customers anywhere in the country – in towns or rural areas. That means everybody buying or building a rural home in an AT&T area doesn’t have DSL as a broadband option. I’m sure AT&T will continue to milk existing DSL revenues for the next few years. But is the company going to care a whit if some rural households can’t get the cellular data?

The various rural grant programs are filling in some of the rural broadband gaps – but not close to all. As large as the RDOF grants were, the FCC says those grants will reach 5 million rural homes if the grants are all awarded. There are still 10 to 15 million more homes in rural America that don’t have adequate broadband – maybe more. Unfortunately, some recent federal grants went to providers like Viasat or to ISPs that might not be able or willing to fulfill the RDOF requirements.

Don’t get me wrong. I’m happy for the rural home that can finally get a decent cellular data plan. I just don’t want regulators or politicians to think that companies like AT&T are taking care of rural America with this new strategy. I would characterize AT&T’s strategy as providing cover for the company to pull down rural copper. The copper is old and at end of life and has to come down – but it’s disingenuous to not tell the public that cellular broadband means the end of copper.

An Attack on WiFi Spectrum

A little over a year ago the FCC approved the use of 1,200 MHz of spectrum in the 6 GHz band for public use – for new WiFi. WiFi is already the most successful deployment of spectrum ever. A year ago, Cisco predicted that by 2022 WiFi will be carrying more than 50% of global IP traffic.

These are amazing statistics when you consider that WiFi has been limited to using 70 MHz of spectrum in the 2.4 GHz spectrum band and 500 MHz in the 5 GHz spectrum band. The additional 1,200 MHz of spectrum will vastly expand the capabilities of WiFi. WiFi performance was already slated to improve due to the introduction of WiFi 6 technology. Adding the new spectrum will drive WiFi performance to a new level. The FCC order adds seven 160 MHz channels to the WiFi environment (or alternately adds fifty-nine 20 MHz channels). For the typical WiFi environment, such as a home in an urban setting, this is enough new channels that big bandwidth devices ought to be able to grab a full 160 MHz channel. This is going to increase the performance of WiFi routers significantly by allowing homes or businesses to separate devices by channel to avoid interference.
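The channel counts are easy to sanity-check with simple division across the 1,200 MHz band. A quick sketch in Python (note that naive division yields 60 twenty-megahertz channels; the official channel plan defines 59 once channel spacing is accounted for):

```python
# The new 6 GHz WiFi band spans 5.925 GHz to 7.125 GHz.
band_mhz = 7125 - 5925  # 1,200 MHz of new spectrum

# How many non-overlapping channels fit at each standard channel width?
for width in (20, 40, 80, 160):
    print(f"{width} MHz channels: {band_mhz // width}")
```

The seven 160 MHz channels fall straight out of the math: 1,200 divided by 160 leaves seven full-width channels with 80 MHz left over.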

One minor worry about the 6 GHz band is that it isn’t being treated the same everywhere. China has decided to allocate the entire 6 GHz spectrum band to 5G. Europe has allocated only 500 MHz for WiFi with the rest going to 5G. Other places like Latin America have matched the US allocation and are opting for a greatly expanded WiFi. This means that future WiFi devices won’t be compatible everywhere and will vary by the way the devices handle the 6 GHz spectrum. That’s not the ideal situation for a device maker, but this likely can be handled through software in most cases.

The GSMA, which is the worldwide association for large cellular carriers, is lobbying for the US to allow 6 GHz to be used for 5G. They argue that since the 6 GHz spectrum is available to the public, cellular carriers ought to be able to use it like anybody else. They’d like to use it for License Assisted Access (LAA), which would allow the cellular carriers to use the spectrum for cellular broadband. If allowed, cellular traffic could flood the spectrum in urban areas and kill the benefits of 6 GHz for WiFi.

This is not the first time this issue was raised. The cellular industry lobbied hard to be able to use LAA when the FCC approved 5 GHz spectrum for WiFi. Luckily, the FCC understood the huge benefits of improved WiFi and chose to exclude cellular carriers from using the spectrum.

It would be a huge coup for cellular carriers to get to use the 6 GHz spectrum because they’d get it for free at a time when they’ve paid huge dollars for 5G spectrum. The FCC already heard these same arguments when it made the 6 GHz decision, so hopefully, the idea goes nowhere.

I talk to a lot of ISPs that tell me that poor WiFi performance is to blame for many of the perceived problems households have with broadband. Inefficient and out-of-date routers, along with situations where too many devices are trying to use only a few channels, cause many of the problems with broadband. The 6 GHz WiFi spectrum will bring decades of vastly improved WiFi performance. It’s something that every homeowner will recognize immediately when they connect a properly configured WiFi router using the 6 GHz spectrum.

For now, there are not many devices that are ready to handle the new WiFi spectrum and WiFi 6 together. Some cellphones are now coming with the capability, and as this starts getting built into chips it will start working for laptops, tablets, PCs, and smart televisions. But homes will only see the real advantage over time as they upgrade WiFi routers and the various devices.

Interestingly, improved WiFi is a direct competitor for the cellular carriers in the home. The carriers have always dreamed of being able to sell subscriptions for homes to connect our many devices. WiFi allows for the same thing with just the cost of buying a new router. It would be an obvious boon to cellular carriers to kill off the WiFi competitor while getting their hands on free spectrum.

Hopefully, the FCC will reject this argument as something that has already been decided. The GSMA argues that 5G will bring trillions of dollars in benefits to the world – but it can still do that without this spectrum. The benefit of improved WiFi has a huge value as well.

What’s the Right Definition of Upload Speed?

I read a blog on the WISPA website written by Mark Radabaugh that suggests that broadband policy goals would best be met by asymmetrical architecture (meaning that upload speeds don’t need to be as fast as download speeds). I can buy that argument to some extent because there is no doubt that most homes download far more data than we upload.

But then the blog loses me when Mr. Radabaugh suggests that an adequate definition of speed might be 50/5 Mbps or 100/10 Mbps. I have seen enough evidence during the pandemic to know that 5 Mbps or 10 Mbps are not adequate upload speeds today. My consulting firm conducts speed tests and surveys for communities and during the pandemic, we’ve learned a lot about upload demand.

We’ve seen consistently in surveys that between 30% and 40% of the families that worked or schooled from home during the pandemic said that the upload connections were not adequate. Many of the respondents making this claim have lived in cities using cable company broadband with upload speeds between 10 Mbps and 20 Mbps. While those speeds may be adequate for one person working from home, they were clearly not adequate for multiple people trying to use the upload connection at the same time.

But it’s not quite that simple and we need to stop fixating on speed as the way to measure if a broadband connection is adequate. Many of the people who find home upload speeds to be inadequate complain about inconsistent speeds. They’ll connect easily enough to a work or school server, or a Zoom call but eventually get dumped out of the connection. Speeds on most technologies are not constant and bounce up and down as demand changes in the neighborhood. A 10 Mbps upload connection is not adequate if there are times when the speed drops below that speed and connections are dropped. Inconsistent broadband connections can also be related to poor latency and heavy jitter.

The latest grant requirements from the NTIA for using ARPA funding describe the issue succinctly. The NTIA says that the ARPA grants can be used in places where an ISP is not “reliably” delivering speeds of 25/3 Mbps. Reliability is a concept that is long overdue in our discussion of broadband because, unfortunately, many of our broadband technologies deliver different speeds from minute to minute and hour to hour. We cannot keep pretending that a WISP, DSL, or cable modem service that delivers 15 Mbps upload sometimes but a 5 Mbps connection at other times is reliable. Such a connection ought to more properly be labeled as a 5 Mbps connection that sometimes bursts to faster speeds.

The WISPA blog also fails to mention the context of discussing a speed requirement. There is a big difference in setting a definition of speed for today’s broadband market versus setting an expected speed for a project that’s being funded by a federal grant. We should never use federal grant funding to build broadband that just barely meets today’s definition of broadband. It’s an undisputed fact that households, on average, use a lot more broadband every year. We know that the amount of bandwidth used by households is likely to double every three years. If we build to meet today’s broadband demand, a network will be obsolete within a decade. Grant funding should be used to build networks that meet expected future needs.

I often tell stories about network engineers who undersize new transport electronics. While they know that broadband demand and traffic have been doubling on their networks every three years, they just can’t bring themselves to recommend new electronics that have ten times today’s capacity. Even experienced engineers have a mental block against truly believing the impact of growth. But anybody designing a network needs to be looking to make sure the new network is still going to be robust a decade from now.
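The arithmetic behind that ten-times figure is simple compound doubling. A quick illustration, assuming demand keeps doubling every three years as described above:

```python
# If demand doubles every 3 years, the multiple after n years is 2^(n/3).
def demand_multiple(years, doubling_period=3):
    return 2 ** (years / doubling_period)

for years in (3, 6, 10):
    print(f"After {years} years: {demand_multiple(years):.1f}x today's demand")
```

After a decade the multiple is 2^(10/3), or roughly ten times today's demand, which is why electronics sized for current traffic will be saturated long before the end of their useful life.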

It’s just as essential for policymakers to understand the incessant growth in broadband demand. When talking about awarding grants we shouldn’t be discussing the current definition of broadband, but instead the likely definition of broadband in a decade. That’s the big point that the WISPA blog misses. I think it’s easy to demonstrate that 5 Mbps or 10 Mbps are inadequate upload speeds today – we have too much evidence that upload speeds probably need to be at least 25 Mbps. But we can’t accept today’s upload needs when funding new networks. Grant-funded networks should be forward-looking – and I think the NTIA’s suggestion of 100 Mbps upload for future networks is reasonable.

Cord Cutting Accelerates in 1Q 2021

The largest traditional cable providers collectively lost over 1.6 million customers in the first quarter of 2021 – an overall loss of 2.2% of customers. To put the quarter’s loss into perspective, the big cable providers lost almost 18 thousand cable customers per day throughout the quarter.

The numbers below come from Leichtman Research Group, which compiles these numbers from reports made to investors, except for Cox, which is estimated. The numbers reported are for the largest cable providers, and Leichtman estimates that these companies represent 95% of all cable customers in the country.

Following is a comparison of the first quarter subscriber numbers to those at the end of 2020:

                       1Q 2021      4Q 2020        Change   % Change
Comcast             19,355,000   19,846,000     (491,000)      -2.5%
Charter             16,062,000   16,200,000     (138,000)      -0.9%
AT&T                15,885,000   16,505,000     (620,000)      -3.8%
Dish TV              8,686,000    8,816,000     (130,000)      -1.5%
Verizon              3,845,000    3,927,000      (82,000)      -2.1%
Cox                  3,590,000    3,650,000      (60,000)      -1.6%
Altice               2,906,600    2,961,000      (54,400)      -1.8%
Mediacom               626,000      643,000      (17,000)      -2.6%
Frontier               453,000      485,000      (32,000)      -6.6%
Atlantic Broadband     313,591      318,387       (4,796)      -1.5%
Cable One              252,000      261,000       (9,000)      -3.4%
Total               71,974,191   73,612,387   (1,638,196)      -2.2%
Total Cable         43,105,191   43,879,387     (774,196)      -1.8%
Total Other         28,869,000   29,733,000     (864,000)      -2.9%


Some observations about the numbers:

  • The big loser continued to be AT&T, which lost a net of 620,000 traditional video customers between DirecTV and AT&T TV. In the second quarter of this year AT&T spun all of these customers off into a new company.
  • The big percentage loser continues to be Frontier which lost 6.6% of its cable customers in the quarter.
  • Big customer losses finally hit Comcast, which lost 491,000 traditional cable customers in a quarter where it added 460,000 broadband customers.
  • Charter continues to lose cable customers at a slower pace than the rest of the industry. I have to wonder if this means bundles that are hard to break or some similar issue.
  • This is the ninth consecutive quarter that the industry lost over one million cable subscribers.

To put these losses into perspective, these same companies had over 85.4 million cable customers at the end of 2018 and 79.5 million by the end of 2019. That’s a loss of 13 million customers (16% of customers) since the end of 2018.
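Both the per-day figure at the top of the post and the loss since 2018 can be checked directly against the table. A quick verification in Python:

```python
# Totals from the subscriber table and the end-of-2018 figure cited above.
q1_2021 = 71_974_191   # total at end of 1Q 2021
q4_2020 = 73_612_387   # total at end of 4Q 2020
end_2018 = 85_400_000  # same companies at the end of 2018

# Customers lost per day during the quarter (roughly 90 days).
print(f"Lost per day: {(q4_2020 - q1_2021) / 90:,.0f}")

# Cumulative loss since the end of 2018.
lost = end_2018 - q1_2021
print(f"Lost since 2018: {lost / 1e6:.1f} million ({lost / end_2018:.0%})")
```

The quarterly loss of 1,638,196 works out to about 18,200 customers per day, and the drop from 85.4 million matches the 13 million (16%) loss cited above.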

The big losses in cable subscribers happened at the same time that the biggest ISPs in the country are adding a lot of broadband customers. The biggest ISPs added over 1 million new broadband subscribers in the first quarter of 2021.

In 2020, we saw that a lot of customers dropping traditional video were switching to online versions of the full cable line-up. That didn’t carry into the first quarter of 2021 where the combination of Hulu plus Live TV, Sling TV, AT&T TV, and FuboTV collectively lost over 257,000 customers. I have to suspect that has to do with affordability.

Verizon to Expand Wireless Home Broadband

At its virtual investor day in March, Verizon announced plans to expand its wireless 5G Home broadband product. This is the product that can best be referred to as fiber-to-the-curb because it requires building fiber up and down streets in neighborhoods and then delivering the broadband wirelessly.

Until now, Verizon has been using millimeter-wave spectrum for the Home product. It seems evident that this product has not been everything that Verizon hoped for. The company first introduced the product three years ago and then paused to re-engineer it. My guess is that Verizon found that millimeter-wave spectrum is more unforgiving in the wild than the company had hoped.

Verizon plans to modify the Home product by layering on C-Band spectrum, which it recently purchased in an FCC auction. This spectrum sits at 3.7 GHz to 4.2 GHz, below the two WiFi bands. It’s an interesting choice for fiber-to-the-curb because the C-Band spectrum will travel farther from the curb but will carry a lot less bandwidth than millimeter-wave spectrum.

In the original trial, Verizon reported broadband speeds of 300 Mbps. The company touts that the latest iteration of the product is faster, but I haven’t yet seen any speeds reported by new customers. When asked about the speeds that will be available using C-Band, a Verizon spokesperson said speeds will be “competitive”. This tells us that speeds are still not approaching Verizon’s gigabit speed goal.

It’s an interesting product in several ways. First, Verizon seems to be building this in markets where it was not the incumbent telephone company. Many of the markets announced so far were legacy markets served by AT&T, and Verizon is stepping into the void that AT&T created by ceasing the sale of DSL. It will be interesting to see if Verizon uses the product in its legacy markets where it never built FiOS.

Verizon has been an interesting ISP since the advent of the FiOS fiber product. The company remained disciplined and only built FiOS where the construction costs met the company’s internal cost metrics. This resulted in a disjointed FiOS roll-out where FiOS was only built in the parts of markets that met the company’s metrics. I have to think they will do something similar again.

One of the reasons for the company to expand the Home product is to take advantage of its aggressive fiber builds over the last five years. Verizon has been building fiber all over the country to replace expensive backhaul leases for cell towers. The Home product is a way to take a second advantage of that construction, as well as a way to justify building fiber deeper into neighborhoods to reach small cell sites.

Verizon has said that it hopes to build to pass 25 million homes and businesses by 2025 with the technology. At the investor meeting, Verizon voiced a goal of reaching a 20% market penetration with the new service – meaning a target of 5 million new broadband customers. Part of the strategy for doing so is to bundle the product aggressively with Verizon cellular service.

In the long run, the success of this product is probably going to boil down to broadband speeds that will be achieved with the C-Band spectrum. We’ll have to wait to see how that spectrum behaves when transmitted at pole height in neighborhoods with trees, shrubs, and other impediments. I would think that any product north of 100 Mbps speeds will play well today if it’s priced low enough. The challenge for Verizon is likely to be a decade from now when the cable companies might have increased basic speeds to something like 500 Mbps. The good news for Verizon is that this product could someday be converted to fiber FiOS by building the fiber drops. Verizon might be using the wireless product to gain market share in new markets, with the full realization that eventually homes will want the fiber connection.

Broadband Usage Still Strong 1Q 2021

OpenVault just released its Broadband Insights Report for the 1st quarter of 2021. The report shows continued robust average household demand for broadband. The monthly average household usage at the end of the first quarter was 461.7 gigabytes. This is the combined upload and download usage for the average American home. To put that number into perspective, look at how it fits into the past trend of broadband usage from OpenVault:

1st quarter 2018     215 gigabytes
1st quarter 2019     274 gigabytes
1st quarter 2020     403 gigabytes
2nd quarter 2020     380 gigabytes
3rd quarter 2020     384 gigabytes
4th quarter 2020     483 gigabytes
1st quarter 2021     462 gigabytes

OpenVault observes that usage seems to be returning somewhat to seasonal patterns, where historically usage dropped in the first quarter each year compared to the preceding fourth quarter. The first-quarter usage is down 4% from year-end 2020 but is up 15% from the first quarter of the pandemic a year ago. Probably more importantly, usage is up 69% since 2019 and 115% since 2018. Average household bandwidth usage has more than doubled in three years – an extraordinary growth rate. It’s going to be really interesting later this year to see how households react to the end of the pandemic. The general expectation is that most classrooms will be back to normal by the fall, and a significant percentage of the workforce will start returning to the office.
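The growth percentages above fall straight out of the quarterly figures in the table. A quick sketch of the arithmetic:

```python
# Average monthly household usage in gigabytes, per OpenVault (from the table above).
usage = {
    "1Q2018": 215,
    "1Q2019": 274,
    "1Q2020": 403,
    "4Q2020": 483,
    "1Q2021": 462,
}

def pct_change(old, new):
    """Percent change from old to new, rounded to a whole percent."""
    return round((new - old) / old * 100)

print(pct_change(usage["4Q2020"], usage["1Q2021"]))  # -4  (seasonal dip)
print(pct_change(usage["1Q2020"], usage["1Q2021"]))  # 15  (year over year)
print(pct_change(usage["1Q2019"], usage["1Q2021"]))  # 69  (two years)
print(pct_change(usage["1Q2018"], usage["1Q2021"]))  # 115 (three years - more than doubled)
```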

The statistic that probably best defines the pandemic is the growth of average household upload usage each month. At the end of 2019, the average US home uploaded 19 gigabytes of data per month. By the end of 2020 that had grown to 31 gigabytes. In the first quarter that dipped a bit to an average of 30 gigabytes.

The first quarter saw a widening gap between the usage of homes with data caps at 440 gigabytes per month and homes with unlimited usage at 495 gigabytes per month. It appears that data caps cause homes to curtail usage by over 50 gigabytes per month.

Median usage dropped significantly in the quarter from 289 gigabytes at the end of the fourth quarter down to 269 gigabytes at the end of the first quarter. The median is the level at which 50% of homes use less broadband and 50% use more. A dropping median usage would indicate that a significant number of homes have reduced broadband usage – perhaps homes where students or adults went back to school or the office.

The number of households that are heavy users of broadband continues to be strong. At the end of the first quarter, 13% of homes consumed more than 1 terabyte of data per month (1,000 gigabytes), up from 7.3% of homes just a year earlier. OpenVault has also started counting what it calls extreme users, or homes using more than 2 terabytes per month – 1.6% of all homes were extreme users at the end of the first quarter, up from 1% only a year earlier.

Of all of the statistics gathered by OpenVault, the fastest-growing category is the number of homes subscribing to a gigabit-speed service. At the end of the first quarter that had grown to 9.8% of all households, three and a half times the 2.8% of homes that were buying gigabit products at the end of 2019. Perhaps the most amazing statistic from OpenVault is that 80% of households now subscribe to a broadband service that provides speeds of 100 Mbps download or faster. This one fact alone provides the justification to update the outdated FCC definition of broadband of 25/3 Mbps. The vast majority of American households obviously believe broadband means 100 Mbps or faster.
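The "three and a half times" figure is just the ratio of the two subscription shares quoted above:

```python
# Share of homes subscribing to gigabit-speed broadband, per OpenVault.
gigabit_share_2019 = 2.8   # percent of homes at end of 2019
gigabit_share_2021 = 9.8   # percent of homes at end of 1Q 2021

growth_multiple = round(gigabit_share_2021 / gigabit_share_2019, 1)
print(growth_multiple)  # 3.5
```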

10G – Really?

Earlier this year, at the CES show in January, the big cable companies discussed their vision for the future and introduced the concept that cable networks would be able to deliver 10-gigabit broadband in the future. They labeled the promotion at the show as 10G. I didn’t write about it at the time because I assumed this was a gimmick to give some buzz to a show held in the middle of the pandemic. But lately, I’ve seen that they are still talking about the 10G initiative.

For once, this is not just the big US cable companies. The US companies were joined in this big splash announcement by Rogers, Shaw Communications, Vodafone, Taiwan Broadband Communications, Telecom Argentina, Liberty Global, and smaller cable companies.

My first reaction to the name 10G was a chuckle because the cable companies are linking themselves to the deployment of 5G cellular, which has turned out to be no faster than 4G. But the cellular companies have hammered home the supposed advantages of 5G so relentlessly that I imagine the average person thinks 5G means faster speeds. I don’t think I would have chosen the cellular analogy as a symbol for faster speed.

The question I have to ask is why the companies want to talk about 10-gigabit broadband so early. It’s likely to be near the end of this decade before any of them can actually deliver that much speed to customers. The assertion is made because of the promise of the new DOCSIS 4.0 standard that was released by CableLabs a year ago. Comcast recently conducted the first lab trial of the technology and achieved speeds of about 4 gigabits.

But it’s a long way from the first breadboard lab trial to a working technology that will be deployed in cable networks. DOCSIS 4.0 is a fundamental change to cable networks and is going to be an expensive upgrade. It means changing most of the field electronics including cable modems. In many cases, it’s going to mean replacing old coaxial cable. And it might even mean having to convert to switched digital video to make this much bandwidth fit inside cable networks.

When DOCSIS 4.0 was first announced, the CTOs of most of the cable companies were cold to the technology, having just finished the upgrade to DOCSIS 3.1 (at least for download speeds). There was a lot of speculation in the industry that cable companies would consider converting to fiber rather than go through another big patch on aging copper networks – most cable systems are nearing fifty years in age.

It’s not hard to understand what prompted this. Fiber providers are now starting to routinely deploy XGS-PON which has the capability of 10 gigabit symmetrical broadband. That means 10-gigabit is already here today in some markets, a full decade before cable companies can respond.

The cable companies are also quietly worrying about their lousy upload performance on cable networks. While the companies all crowed about how they survived the pandemic, they all know that many of their customers struggled badly trying to handle work and school from home. They failed the people who needed them the most.

Two decades after Verizon launched FiOS, the big telcos are finally laying fiber like crazy. The cable companies know that once customers change to fiber, they are likely never coming back to a cable company network. While fiber has been built to only a relatively small portion of urban America, it is relentlessly coming. The cable companies have to fear that over the next decade, fiber providers will do to them what they’ve been able to do to DSL competitors. The cable industry’s success over the last decade comes from taking customers from old telephone copper.

I think a peek into the future where there is a lot more fiber is likely what prompted the 10G promotion – that and perhaps a few too-clever marketing folks. But talking about 10G is not the same thing as delivering broadband that homes need today – and the cable companies know this.