BEAD Grant Contracts

One of the steps in the BEAD grant program that isn’t being talked about is the contract that an ISP must sign with a broadband office before officially being awarded a grant. While the whole industry has been focused on creating a good grant application, the grant contract is the most important document in the grant process because it specifically defines what a grant winner must do to fulfill the grant and how they will be reimbursed.

The grant contract is going to define a lot of important things:

  • This is the document that will define the line of credit that must be provided. If an ISP has elected a line of credit that can be decreased over time, make sure that the contract defines the specific events that will allow for a reduction in the size of the line of credit.
  • The contract is going to define the specific environmental studies that are required, along with the timing of the environmental work. A lot of BEAD grant recipients are going to be disappointed if they are required to complete time-consuming environmental studies before starting any other work. Note that just like the rest of the industry, the folks who do environmental studies are likely to get quickly backlogged with BEAD work and may take a lot longer than normal.
  • The contract is going to define how the Broadband Office envisions implementing the many issues that were in the grant application. Regardless of what an ISP might have proposed in the grant application, the Broadband Office is going to try to use the contract to impose their will for items like setting rates. It’s important to note that an ISP doesn’t get what they proposed in the BEAD grant application – the real negotiation for how the grant is going to work happens in agreeing to a contract.
  • Perhaps the most important part of the contract is that it is going to define how the ISP will get reimbursed for completed work. Many States are talking about reimbursing ISPs based on meeting specific milestones. Be very careful to understand specifically what this means, because it might mean waiting many quarters, or even a year, before seeing a check from the grant office. The natural inclination of ISPs is to order all of the materials to build a network when the grant is awarded – but that is not a good idea if the payments for that material aren't coming for a long time. Note that payments tied to milestones likely mean an ISP must front all of the money for engineering and labor long before reimbursements are made. This is a use of cash that ISPs might not be expecting. The ideal reimbursement plan is one that pays invoices on a monthly or quarterly basis as grant work is completed.
  • The grant contract is going to define the terms of grant compliance. For example, the BEAD grants require a lot of details concerning the grant labor force that haven't been included in previous grants. The contract is going to define how the ISP proves to the Broadband Office that it is complying with the many BEAD requirements. In the case of labor, and many other requirements, documented full compliance is likely going to be required before a Broadband Office ever writes the first reimbursement check.
  • The contract is likely to have an expected contract completion date. The contract might require an ISP to finish the construction in the time that the ISP proposed in the grant application – while also imposing delays with things like environmental studies, compliance, and reimbursement rules that might make it hard for the ISP to meet that schedule.

It’s important to note that ISPs are not required to sign the contract first offered to them. A grant contract is like any contract, and the terms can be negotiated – with the caveat that a Broadband Office can’t negotiate away requirements that were included in the law that created the BEAD grants. Expect to be shocked by some of the requested contract terms included in the first draft of the contract.

Finally, note that a contract is binding even if you sign it with terms you don't like. There have been ISPs that have walked away from other grant programs when the offered contract was too harsh. Don't be in such a hurry to get started that you sign a contract you can't live with.

First Look at Broadband Labels

The FCC’s Broadband Labels were implemented by ISPs with more than 100,000 customers on or before April 10. Not surprisingly, many ISPs waited until the last day. I think the FCC hoped that the labels would create “clear, easy-to-understand, and accurate information about the cost and performance of high-speed internet services.” I looked at a lot of the labels this past week. As you might expect, the actual labels often fall far short of the FCC’s goal. I’m not going to use this single blog to try to rate and rank the various labels but will highlight a few of the things I found.

The first observation is that the labels are generally hard to find – they are not prominently displayed on ISP websites. This is because the FCC rules say that ISPs only have to display the labels at ‘points of sale’. ISPs have interpreted this to mean that a customer must first submit a valid address to the ISP website, and then typically navigate through several more links to find the labels. Even after entering an address, the links to broadband labels are often not clearly identified, and it was a challenge to find the labels for some ISPs. I thought one of the purposes of the labels was to make it easier for the public to comparison-shop between ISPs – but finding the labels usually takes a lot of work, especially for somebody who isn’t familiar with navigating ISP websites.

The one big benefit of the labels for most ISPs is that they make it easier to find broadband prices. Over the last few years, it's grown increasingly difficult to find the list price for broadband on big ISP websites – the price that customers pay after a special promotional rate ends. ISPs are now disclosing the full list price on the labels.

One exception to showing list prices is Comcast. The company is showing the promotional rates in bold for many broadband products and only shows the list price in fine print. Comcast is also deceptive about the cost of its broadband modem. All they say is that it’s optional, without mentioning that their price for a modem rental is $15. They also don’t mention that to get some features a Comcast modem is mandatory. I rate the Comcast labels as still being as deceptive as their website was before the labels. But Comcast isn’t the only one not being open and clear about the modem rental. I’m guessing that big ISPs are rationalizing that WiFi and the modem are not a broadband product as a way to keep them off the label. Any ISP not disclosing modem prices and policies is creating a hidden fee.

One of the features of the labels is that an ISP is supposed to provide a plain-English description of its technology and network practices. Most ISPs failed at this, and a customer trying to compare two competing ISPs is not going to understand the technology differences using the broadband labels.

Consider Verizon. It has a network management section of the label that mixes in descriptions of its wide range of different technologies rather than describing each separately. There are a few things that a shopper for FWA service ought to be told: 1) that the FWA product is delivered over the same network delivering bandwidth to cellphones, 2) that the key factor that determines the speed for a customer at a given tower is the distance between the customer and the tower, and 3) that broadband can be throttled if the cell site gets busy. They disclose the third item, but overall, they fail at describing how FWA works.

The labels are not going to tell the public much about speeds. A few ISPs, like Verizon FWA and T-Mobile FWA, are honest and report a range of speeds. Cox is relatively honest and says that speeds are ‘up-to’ the cited marketing speed for a given product. But most big ISPs are claiming they deliver speeds in excess of advertised rates. Charter says speeds are at the advertised speed or faster. Comcast, CenturyLink, Mediacom, and Sparklight all cite ‘typical speeds’ which are all faster than the advertised speed – some significantly faster. This is the first time I’ve seen the term ‘typical speed’, and I have no idea what ISPs mean by it.

Windstream took an interesting approach to broadband labels and only created labels for fiber customers and not for older DSL. I don’t know if that meets the FCC requirements, but Windstream is reporting 100 Mbps capability for DSL in some markets on the FCC map, and this feels like something that should have a label.

All of the labels must disclose latency, and many of the latency numbers cited seem suspiciously low. I think that the ISPs are citing the latency between their headend and the customer, not the latency that a customer can expect in getting to the Internet. If so, this also feels deceptive to me.

Overall, the Broadband Labels do not fulfill the FCC’s goals of making it easier for customers to understand broadband products. It is a relief to see most ISPs disclose prices – but if Comcast gets away with highlighting marketing promotional rates, the labels for other ISPs might change soon to match. Disclosures on speeds are mostly a joke – and most customers are going to be surprised to find that their ISP is bringing them faster speeds than what they are paying for (sarcasm alert). For the most part, the descriptions of network practices are not written in plain English to help a potential customer understand the technology being used. The carefully crafted lawyer language in these sections makes it hard for even experienced industry folks to understand network management policies.

The Future of Broadband

The earlier blogs in this series looked at the growing demand for broadband speeds and broadband usage. I then went on to look at what the likely future demand might mean for last-mile and middle-mile networks.

There were some interesting conclusions included in the four blogs:

  • The demand for broadband speed has grown at a rate of 21% per year since 1999, when the best broadband available to homes was 1 Mbps from DSL or cable modems. If that rate of growth holds up for the next 25 years, the definition of download broadband in 2049 will be 10 gigabits. That may sound outlandish, but 25 years is a long time, more than a third of households already subscribe to gigabit speeds, and we're already building last-mile networks capable of 10-gigabit speeds. Even if the demand growth curve slows down and doesn't reach that high level, the demand for speed 25 years from now is bound to be a lot higher than today's.
  • The demand for broadband usage has grown at a slower but steady pace. If the current rate of growth from 2022 to 2023 (11% to 12%) remains steady, the demand for broadband usage would be 12-15 times larger than today in 25 years. That would mean the average future home and business would use over 5 terabytes of broadband per month.
  • Only fiber technology will be able to satisfy the future demand for speed and usage in 25 years. Current fiber deployments will require upgrades of last-mile lasers to something like the 40G PON which is now being developed by vendors.
  • Coaxial networks will not be able to meet the demand of 25 years from now, and during that time will have to be upgraded to fiber.
  • Wireless technologies will not be able to meet future demands unless the FCC rearranges spectrum to provide larger channels.
  • Existing middle-mile networks cannot handle the expected future demands and will need to be upgraded over time to speeds greater than one terabit. Unfortunately, most existing middle-mile fiber cannot handle faster lasers and will have to be replaced. In fifteen or twenty years, we'll experience a middle-mile crisis when major investment will be needed to keep the networks functioning.
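
The compound arithmetic behind the speed prediction in the first bullet is easy to check. A minimal sketch in Python, assuming the 21% annual growth rate cited above applied to today's 100 Mbps broadband definition (the function name is illustrative):

```python
# Compound-growth arithmetic behind the speed prediction: 21%/year
# from today's 100 Mbps broadband definition, compounded for 25 years.

def project(start, annual_rate, years):
    """Value after compounding annual_rate for the given number of years."""
    return start * (1 + annual_rate) ** years

speed_2049 = project(100, 0.21, 25)   # Mbps
print(f"Download definition in 25 years: {speed_2049:,.0f} Mbps")  # ~11,700 Mbps, i.e. 10+ gigabits
```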

I’m the first to admit that I don’t have a crystal ball, and these predictions are not precise. But I’m positive that greater broadband demand is coming over time. My predicted time frame is not the important message – what matters is that increased demand is coming in the future. After thinking about everything discussed in the last four blogs, I reached the following conclusions:

  • We should start soon to develop a strategy to bolster middle-mile fiber routes. We’ll be facing a crisis in 15-20 years where most middle-mile fiber will have to be replaced to accommodate faster lasers.
  • Commercial companies are building some new middle-mile fiber, but not at a pace that’s needed. Any failure to upgrade secondary middle-mile fiber routes will mean that rural areas and small towns will become bandwidth starved in the foreseeable future, even if the last-mile technology in those places has been upgraded to fiber.
  • Regulators and policymakers should consider future demand before giving out money to build broadband infrastructure. Failure to consider future broadband needs has repeatedly resulted in the FCC and states providing grant funding for broadband infrastructure that can't meet predictable future demand. Any broadband infrastructure funded by grants should have the capacity to handle the expected demand during its expected useful life. To expect less means funding networks and technologies that will be obsolete too soon.
  • The expected future growth in demand means that every existing broadband network will have to be upgraded at some point in the next 25 years. For example, even an XGS-PON network built today will likely be obsolete in less than 25 years.
  • We need to invest in strategies that relieve broadband traffic from having to go to and from the major Internet POPs. That might include strategies to increase the use of edge-computing, caching more data locally, and creating many more peering points closer to local ISPs.

The Future of Middle-Mile Fiber

The earlier blogs in this week’s series looked at the increased demand over time for broadband speed and usage and concluded that the continued growth of broadband demand will ultimately put great stress on last-mile networks – to the point where, eventually, fiber becomes the only viable broadband technology.

But what about middle-mile fiber routes? Middle-mile networks have gotten faster over time. In the 1990s, when the predominant last-mile technology was dial-up Internet, the predominant middle-mile technology was a DS3, which delivers 45 Mbps. As millions of people started using broadband, middle-mile networks were upgraded to 1 Gbps lasers and 10 Gbps lasers. For the last decade, the predominant technology for middle-mile has been 100 Gbps lasers. Recently, middle-mile fiber construction and upgrades are using 400 Gbps lasers. Vendors have already developed and field-tested terabit lasers with a speed of 1,000 Gbps.

One of the primary conclusions in an earlier blog is that 25 years from now, our broadband networks will likely be carrying 12 to 15 times the volume of data that they carry today, and probably more.

Unfortunately, there is no easy path to upgrade middle-mile networks to keep up with the expected future demand. Consider the following chart showing the capacity of middle-mile lasers, starting with the 100 Gbps laser that is used on most middle-mile routes today.

  • 100 Gbps laser (today's predominant technology) – 1x today's capacity
  • 400 Gbps laser (recent construction and upgrades) – 4x
  • 1,000 Gbps terabit laser (field-tested) – 10x

This table implies that terabit lasers will not satisfy a future broadband demand that is 12-15 times larger than today. By 25 years from now, some of the big middle-mile routes will need to be upgraded to something faster than terabit lasers.
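
The gap can be illustrated with a quick calculation (a sketch; the laser speeds are the ones named above, and the 12-15x demand multiple is the projection from the earlier blog):

```python
# Compare middle-mile laser capacity against projected demand growth.
# Laser speeds and demand multiples are the figures cited in the text.

today_gbps = 100          # today's predominant middle-mile laser
lasers_gbps = {"400G": 400, "1T (terabit)": 1000}

for name, gbps in lasers_gbps.items():
    print(f"{name}: {gbps / today_gbps:.0f}x today's capacity")

for demand_multiple in (12, 15):
    shortfall = demand_multiple - 1000 / today_gbps
    print(f"At {demand_multiple}x demand, terabit lasers fall short by {shortfall:.0f}x")
```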

Unfortunately, faster lasers alone can't satisfy the future demand for increased middle-mile bandwidth. Achieving faster speeds on middle-mile routes is going to require replacing a lot of existing middle-mile fiber with fiber that is clearer and has fewer microscopic impediments. A lot of the fiber in use today can't handle terabit or faster lasers.

Just like coaxial cable networks, many of our middle-mile routes are aging. We're already seeing that some of the middle-mile routes built in the 1980s and 1990s are deteriorating. The fiber built in those decades was not as clear as today's fiber, plus we used construction techniques that stressed the fiber, which eventually results in microscopic cracks that impede the light signal. We've developed both clearer fiber and better construction techniques, but those improvements come too late for the older existing fiber routes.

We are probably going to have to use a combination of three strategies to handle the middle-mile demand over the next 25 years:

  • Rip and replace current fiber with higher-quality fiber that can handle terabit or faster lasers.
  • Build a lot of new fiber alongside existing fiber routes to handle the increased capacity.
  • Employ strategies for reducing the demand on middle-mile networks.

Realistically, it will take a combination of all of these strategies to handle the future expected demand. Some of the demand will be handled by replacing existing fiber routes. Some will be handled by building new fiber alongside existing middle-mile fiber routes. But a lot of the solution must come from reducing the strain on middle-mile networks.

There are several strategies that can reduce traffic on middle-mile fiber routes.

  • Edge Computing. Starting around 2010, there was a national trend to move data processing to large data centers. However, it's become clear that some computing functions are better handled at the edge, meaning close to customers. As an example, it's far more efficient to build a small data center at a smart factory so that the data processing can be done locally to give real-time instructions to manufacturing machinery. It's also smarter to put the data processing in the factory so that the factory doesn't shut down if there is a broadband outage in the network connected to the factory. If a factory changes to an edge computing network, there is a significant decrease in the amount of bandwidth being carried over a middle-mile network. ISPs don't have much power to drive this migration, other than perhaps convincing large broadband users that edge computing is a better configuration.
  • Caching. This means storing data close to users. The best example of this today is Netflix. The company has over 80 million customers in the U.S. and Canada today, and a large percentage of those folks watch the same set of popular shows. Netflix has been caching content closer to customers by placing servers with large ISPs in larger markets, where they store copies of the most popular shows on the network. This means that a lot of Netflix content is delivered locally from within an ISP's last-mile network and doesn't need to use middle-mile. Netflix sends one copy to an ISP rather than sending large numbers of individual video streams.
  • Peering. This means carriers swapping broadband traffic directly with each other rather than sending all traffic to the major Internet hubs. Most large ISPs already have direct connections to the biggest bandwidth users – companies like Google, Netflix, Microsoft, Amazon, and Facebook. But peering can go a lot further. Peering today is generally only done by the largest ISPs in the largest markets. The way to increase the use of peering is to establish many new peering sites around the country. Far too much traffic is exchanged today at the giant carrier hotels located in major cities.

The bottom line conclusion is that our current middle-mile routes are inadequate to meet a future demand that will increase broadband traffic by a factor of 15 times or more. While the increase in last-mile broadband demand can be handled by faster fiber lasers, the existing fiber on most middle-mile routes cannot handle terabit and faster lasers. This means we'll have to replace a lot of middle-mile fiber while also upgrading electronics. There are a lot of questions about who might fund such a massive upgrade. And even if the major middle-mile routes are replaced, most middle-mile fiber is regional and will need the same treatment. My prediction is that within 15 to 20 years we'll be having a lot of discussion about the impending collapse of middle-mile networks, and carriers will be begging the FCC or Congress to provide a huge subsidy program to upgrade networks.

The Future of the Last Mile

The last two blogs in this series looked at the broadband demand for speed and usage. The first blog predicted that demand in 25 years for broadband speeds could be as much as 100 times more than today's definition of broadband of 100 Mbps download. The second blog predicted that demand for broadband usage in 25 years could conservatively be 12 to 15 times more than today, and could be a lot more.

Today’s blog looks at what that kind of future demand means for last mile technologies. The fastest broadband technology today is fiber, and the most common fiber technology is passive optical network (PON). This technology brings broadband to local clusters of customers. The original PON technology deployed in the early 2000s was BPON, which had the capability to deliver 622 megabits of speed to share in a cluster of 32 homes.

The next PON technology, introduced widely around 2010, was GPON. This technology uses faster lasers that deliver 2.4 gigabits of speed to share in a cluster of 32 homes. The industry has pivoted in the last few years to XGS-PON, which can deliver 10 gigabits of bandwidth to a neighborhood cluster of homes. Vendors are already working on a PON technology that will deliver 40 gigabits to a cluster of homes. Cable Labs is working on a PON technology they have labeled as CPON that will deliver 100 gigabits of speed to a cluster of homes.

Consider the following table that shows the increase in last-mile fiber bandwidth that comes with PON technologies:

  • BPON (early 2000s) – 622 Mbps per cluster
  • GPON (circa 2010) – 2.4 Gbps per cluster
  • XGS-PON (current) – 10 Gbps per cluster
  • 40G PON (in development) – 40 Gbps per cluster
  • CPON (in development) – 100 Gbps per cluster

XGS-PON is a great upgrade, but has only 4 times the capacity of GPON. XGS-PON is not going to satisfy broadband needs in 25 years when demand is at least 12 to 15 times greater than today. By then, fiber ISPs will likely have upgraded to 40G PON, which has over 16 times the capacity of GPON. There will be a lot of talk in 25 years of upgrading to something like CPON, with a capacity of over 40 times that of GPON.
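
The capacity multiples in this comparison are straightforward to verify from the shared-cluster speeds of each PON generation (a quick sketch; the speeds are the ones cited in this blog):

```python
# Shared downstream capacity per neighborhood cluster for each PON
# generation named in the text, expressed as a multiple of GPON.

pon_gbps = {
    "BPON": 0.622,     # early 2000s
    "GPON": 2.4,       # circa 2010
    "XGS-PON": 10,     # current deployments
    "40G PON": 40,     # in development
    "CPON": 100,       # Cable Labs project
}

gpon = pon_gbps["GPON"]
for tech, gbps in pon_gbps.items():
    print(f"{tech}: {gbps:g} Gbps ({gbps / gpon:.1f}x GPON)")
```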

Something that cable executives all know but don’t want to say out loud is that cable networks will not be able to keep up with expected future demand over 25 years. The planned upgrade to DOCSIS 4.0 brings cable company technology close to the capability of XGS-PON. DOCSIS 4.0 will allow for multi-gigabit speeds over coax, but there is no planned or likely upgrade for coax to match the capabilities of 40G PON.

Any discussion about boosting the future capacity of cable networks is moot anyway. Most coaxial networks were built between the 1970s and 1990s, and in 25 years the copper will be between 60 and 80 years old. There is no question that the coaxial copper will be past its useful life by then.

A few cable companies have already acknowledged this reality. Altice announced a transition to fiber years ago but doesn’t seem to have the financial strength to complete the upgrades. Cox has quietly started to upgrade its largest markets to fiber. All big cable companies are using fiber for expansion. By 25 years from now, all cable companies will have made the transition to fiber. Executives at the other big cable companies all know this, but in a world that concentrates on quarterly earnings, they are in no rush to tell their shareholders about the eventual costly need for an expensive infrastructure upgrade.

There is no possibility for wireless technology to keep up with the increased demand that will be expected in 25 years. The only way to increase wireless speeds and capacity would be to greatly increase the size of wireless channels – which the FCC is unlikely to do – or use much higher frequencies. We've already learned that millimeter-wave and higher frequencies can deliver much faster speeds but don't play well in an outdoor environment in an end-to-end wireless network. This doesn't mean that wireless ISPs won't be delivering broadband for decades to come – but over time, wireless last-mile technologies will fall behind fiber in the same way that DSL slowly fell behind cable modems.

Unless satellite technology finds a way to get a lot faster, it won’t be a technology of choice except for folks in remote areas.

Mobile data is always going to be vital, but there will be major pressure on wireless companies to finally deliver on the promises of 5G to keep up with future demand for speed and bandwidth.

The Demand for Broadband Usage

My last blog looked at the long-term trajectory of broadband speed. It's clear that broadband speeds have grown at a steady rate of 21% per year – which equates to a 100-fold increase over the last 25 years. Today's blog looks at the trajectory of demand for broadband usage.

Consider the following statistics that come from OpenVault showing the nationwide average amount of monthly broadband consumed by businesses and residences over the past five years. These numbers combine download and upload usage.

Admittedly, the growth in 2020 was extraordinary due to the pandemic. But there is no question, just like with broadband speeds, that broadband usage has been growing for both homes and businesses. Over the last five years, business broadband usage grew to 311% of its 2018 level (25.5% per year), and home broadband usage grew to 236% (18.8% per year).

The challenge of looking into the future is predicting the future growth rate of usage demand. The following table looks at two different growth rates. The first two columns assume that future growth is at the same rate of growth from 2022 to 2023 – 11.7% for businesses and 9.4% for residences. The second set of columns looks at a 12.5% growth rate. This table might be one of the best ways to show the incredible impact of compound growth over many years. The residential growth rate of 12.5% is only 33% larger than the 9.4% growth rate, but over 25 years the higher growth rate more than doubles the predicted future demand.

To be conservative, I'll use the growth from the first two columns in my additional analysis below. But even those growth rates show a 16-fold increase in average business usage and a 9-fold increase in average residential usage over 25 years. Most folks probably buy the story that in a decade, the average business will use a terabyte of data per month while the average home will use 1.6 terabytes. However, I suspect most people are uncomfortable with a prediction that homes and businesses will use over 5 terabytes per month in 25 years – due to the difficulty of grasping the impact of compound growth.
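
The sensitivity of a 25-year projection to the assumed growth rate is easy to demonstrate (a sketch, using the annual rates cited above):

```python
# The impact of compound growth: a modestly higher annual rate roughly
# doubles the 25-year multiple. Rates are the ones cited in the text.

def multiple(rate, years=25):
    """Total growth multiple after compounding a rate for `years` years."""
    return (1 + rate) ** years

print(f"Business at 11.7%/yr:    {multiple(0.117):.1f}x")   # ~16x
print(f"Residential at 9.4%/yr:  {multiple(0.094):.1f}x")   # ~9.5x
print(f"Alternative at 12.5%/yr: {multiple(0.125):.1f}x")   # roughly double the 9.4% case
```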

What do these numbers mean in terms of the amount of bandwidth that our networks have to carry? The following table applies the predicted broadband usage to the number of homes and businesses in the country. The 2018 and 2023 count of homes and businesses comes from the Federal Reserve, and I’ve predicted future homes and businesses on a straight-line growth.

For those not familiar with the term exabytes, the calculation of 90.5 exabytes in 2023 starts with the number of monthly gigabytes being used nationwide, as shown below. Each successive unit in the table below is 1,024 times larger than the one before it. For example, 1 terabyte equals 1,024 gigabytes. The 90.5 exabytes for 2023 could also be stated as:

  • 92,672 petabytes
  • 94,896,128 terabytes
  • roughly 97 billion gigabytes
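
The unit conversion is simple arithmetic (a sketch; the 90.5 EB national monthly total is the figure from the text):

```python
# Restating 90.5 exabytes in smaller units; each step down is a factor of 1,024.

exabytes = 90.5
petabytes = exabytes * 1024
terabytes = petabytes * 1024
gigabytes = terabytes * 1024

print(f"{exabytes} EB = {petabytes:,.0f} PB")               # 92,672 PB
print(f"{exabytes} EB = {terabytes:,.0f} TB")               # 94,896,128 TB
print(f"{exabytes} EB = {gigabytes / 1e9:.1f} billion GB")  # 97.2 billion GB
```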

The table above predicts an overall 12-fold increase in broadband usage over 25 years. Don't forget that these numbers only come from residential and business broadband usage. There are a lot more sources of broadband traffic, which means that usage on networks will be higher than shown in the table above. Not included in that table are things like data generated from mobile devices, data generated by governments and universities, data generated by outdoor sensors and farming, data generated by self-driving cars and robots, data used to monitor and operate the electric grid and green energy production, and data used to monitor and operate broadband networks. There is good evidence that these other uses of broadband are growing faster than home and business use.

We understand the factors that have contributed to the growth in broadband over the last five years. This includes things like the following:

  • Most of the software that homes and businesses use is now located in the cloud. A good example is the Microsoft Office suite of software. Five years ago, users ran Excel, Word, Outlook, and PowerPoint on software loaded on home and business computers. Today, the majority of this software operates in the cloud – meaning the computing is done in Microsoft data centers using a broadband connection.
  • The delivery of online news and similar content has largely migrated to video.
  • A huge percentage of homes and businesses have Internet-connected devices that connect to the cloud. This can be a wide range of devices like computers, tablets, smartphones, TVs, appliances, gaming consoles, burglar alarms, cameras, smoke detectors, etc. A recent survey by Parks Associates reports that the average U.S. home has 17 connected devices. These devices connect to and communicate with the cloud without active intervention from users. This is machine-to-machine traffic – devices talking directly to the cloud – and it is the fastest-growing segment of broadband usage.
  • 30 million homes cut the cord and dropped traditional cable TV in the last five years. These homes now get all video entertainment from online sources. Additionally, the quality of that video has increased significantly, from standard definition to high definition to 4K.
  • Other entertainment markets like gaming have moved to the cloud.

Nobody has a crystal ball that can predict how we’ll use broadband 25 years from now. We’re always on the verge of new uses that consume more bandwidth. The migration to high-quality video continues, with web services starting to use 8K video. Spatial computing from devices like the Apple Vision Pro shows the potential for combining virtual and augmented reality with the real world. Probably the biggest change to bandwidth on the immediate horizon is the use of AI throughout the economy.

But it seems inevitable that the demand for broadband usage will continue to grow. It's hard to imagine a world where the growth in demand would stop. Even a conservative growth rate for broadband demand would increase overall broadband usage 12- to 15-fold over the next 25 years. It's not hard to imagine new technologies that could double that future predicted demand.

The Demand for Broadband Speed

This is the first in a series of blogs this week that will look at the long-term trajectory of the broadband industry.

The recent decision of the FCC to increase the definition of broadband from 25/3 Mbps to 100/20 Mbps got me thinking about the long-term trajectory of the demand for broadband speed. For many years, Cisco issued reports showing that the demand for speed was growing at roughly 21% per year for residential broadband, and a little faster for business broadband. Cisco and others noted that the curve of broadband speeds had been on a relatively straight line back to the early 1980s.

It’s not hard to test the Cisco long-term growth rate. The following table applies a 21% growth rate to the 25/3 Mbps definition of broadband that was established by the FCC in 2015.

  • 2015 – 25 Mbps
  • 2016 – 30 Mbps
  • 2017 – 37 Mbps
  • 2018 – 44 Mbps
  • 2019 – 54 Mbps
  • 2020 – 65 Mbps
  • 2021 – 78 Mbps
  • 2022 – 95 Mbps

This table is somewhat arbitrary since it assumes that broadband demand in 2015 was exactly 25 Mbps – but there was widespread praise of this definition at that time, other than from ISPs who wanted to stick with the 4/1 Mbps definition. This simple table accurately predicts that we would be talking about the need to increase the definition of broadband to 100 Mbps download around 2022 – which is exactly what happened. The FCC had to deal with political issues and wasn’t able to make the change until March 2024 – but in 2022, the FCC wanted to change the definition of broadband to a speed that was at a 21% compounded annual growth rate from the definition the FCC had established in 2015.
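
The backtest is easy to reproduce (a sketch of the compounding, using the 2015 baseline and Cisco's 21% rate):

```python
# Backtest: compound the FCC's 2015 definition (25 Mbps) forward at 21%/year.

speed = 25.0   # Mbps, FCC broadband definition in 2015
for year in range(2016, 2023):   # seven years of growth, 2016 through 2022
    speed *= 1.21
    print(f"{year}: {speed:.0f} Mbps")

# By 2022 the projection reaches ~95 Mbps - right at the 100 Mbps definition
# the FCC eventually adopted.
```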

I can’t think of any fundamental changes that suggest this same growth in demand won’t continue in the near future. Consider a projection that starts with the assumption that 100 Mbps was the right definition of broadband in 2022 and grows that number by the same 21% per year. That projection says that by 2030 we should be having the conversation about increasing the definition of download broadband to 500 Mbps. This prediction seems very reasonable to me.
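The compounding behind these projections is easy to check with a few lines of Python (the 2015 and 2022 starting points and the 21% rate come from the discussion above; the helper name is mine):

```python
def projected_speed(start_mbps: float, rate: float, years: int) -> float:
    """Starting speed compounded annually for the given number of years."""
    return start_mbps * (1 + rate) ** years

# The 2015 definition of 25 Mbps, grown at 21% per year to 2022 (7 years):
print(round(projected_speed(25, 0.21, 2022 - 2015)))   # ~95 Mbps, i.e. roughly 100 Mbps

# The 2022 definition of 100 Mbps, grown at 21% per year to 2030 (8 years):
print(round(projected_speed(100, 0.21, 2030 - 2022)))  # ~459 Mbps, i.e. roughly 500 Mbps
```

The same one-line formula generates every row of the tables described above.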

However, 2030 is only six years from now, and today’s topic is looking further into the future. One way to think about future demand is to look back at broadband speeds 25 years ago. In 1999, telcos and cable companies offered broadband connections of around 1 Mbps as an upgrade to dial-up – and 1 Mbps became the de facto definition of broadband at the time. Twenty-five years later, the definition of broadband was increased to 100 Mbps, a 100-fold increase. This tracks directly with Cisco’s reported growth rate – a 100-fold increase over 25 years works out to a compound growth rate of just over 20% per year.

There are a lot of reasons to think that the demand for faster speeds will keep growing – every year we find more uses for fast broadband. If we plot the demand for broadband speeds out for 25 more years at the historical rate of growth, demand would be roughly 100 times higher than it is today. That would put the right definition of broadband in 25 years at 10 gigabits.
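The same arithmetic supports both the look backward and the look forward (the dates, speeds, and growth rate are the ones used above; the variable names are mine):

```python
# A 100-fold increase over 25 years implies this compound annual growth rate:
implied_rate = 100 ** (1 / 25) - 1
print(f"{implied_rate:.1%}")  # 20.2% per year, in line with Cisco's ~21%

# Projecting today's 100 Mbps definition forward 25 years at 21% per year:
future_mbps = 100 * 1.21 ** 25
print(f"{future_mbps:,.0f} Mbps")  # roughly 11,700 Mbps, i.e. on the order of 10 Gbps
```

Note that 21% compounding for 25 years actually yields slightly more than a 100-fold increase, which is why the 10-gigabit figure is, if anything, conservative.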

I know that a lot of people will jump all over this prediction and say it’s ludicrous and unrealistic. But consider the last 25 years. You would have been hard-pressed to find anybody in 1999 who would have predicted that the definition of download speed in 2022 would be 100 Mbps. This is partially because the human mind has a hard time accepting the results of compounded growth – the result after many years of growth always feels too large. I was already running my consulting company in 1999, and I don’t recall anybody who was visionary enough to predict a hundred-fold increase in broadband speeds over twenty-five years. Anybody saying that would have been laughed out of most industry forums – it would have sounded like fantasy. Yet here we are – the demand for download speed really has increased 100-fold since 1999.

There is one weakness in my argument – it’s very hard to pin down a concrete number for the demand for broadband speed. In the context I’ve been using (and the way the FCC looks at speed), broadband speed demand is a composite number representing the average across all broadband users. There is a wide range of opinions on the right definition of broadband speed. ISPs operating older and slower technologies still swear that 25 Mbps is all the speed anybody needs. Fiber ISPs think the definition should be a gigabit, since one-third of households now subscribe to gigabit speeds. The fact that the FCC set the definition of broadband at 100 Mbps is an interesting data point – but the FCC definition is little more than a conservative compromise among the many opinions from around the industry.

There are more concrete data points to consider, and the next blog in the series will look at the demand for broadband usage.

WiFi Sensing

It’s incredibly hard to keep things private in the new digital age. There are far too many stories circulating about people who talked to a friend on the phone or texted about something and almost instantly got hit with ads for the subject of the conversation. And that happens without malware – no telling what information you’re giving out if your devices have been infected with malicious software that is spying on you.

There is a new way for folks to track and spy on you. A recent article in the MIT Technology Review described how WiFi sensing has become a usable technology. WiFi sensing measures the way that WiFi signals bounce off objects in an environment, in a way that can be reverse-engineered to reveal the shapes of those objects.

Scientists and vendors have been working for a decade to create a viable product using WiFi sensing. The two most likely applications investigated so far are monitoring breathing and detecting falls in the home. Breath monitoring is a useful tool for diagnosing problems like sleep apnea or identifying emergencies like a stroke or heart attack. Fall monitoring is useful for seniors who remain in their homes, since falls are one of the most common sources of injuries for seniors.

However, both of those use cases have already been addressed with other technologies. Breath monitors use other wireless frequencies or, more commonly, microphones. It’s also becoming common to use ultra-wideband radar for detection, since its shorter wavelengths can better resolve objects. There are already devices in use in places like hospitals and prisons to monitor that patients and prisoners are where they are supposed to be and are not experiencing breathing issues.

For a WiFi system to produce the same resolution as ultra-wideband radar requires interpreting readings from multiple WiFi devices within a home. WiFi sensing is still an attractive idea since almost every home now has multiple WiFi transmitters in place. There are folks working to build the technology into smartphone and WiFi router chips, which would make WiFi sensing an everyday technology.

It’s easy to see the appeal of the technology. While a motion detector can tell you that a stranger is in your home, WiFi sensing has the potential to pin down exactly where they are and even give some indication of what they are doing. WiFi sensing is already good enough to reliably detect human presence in a home.

One of Verizon’s new Fios routers includes a human presence detector powered by Origin Wireless. All of the wireless devices in a home – smart plugs, speakers, burglar alarms, and the myriad other WiFi-enabled devices – contribute to making such detection effective. Cognitive Systems announced that its smart plugs will contain this capability, and it’s hard to think it won’t soon be built into smart speakers, TVs, and other devices.

The ability to detect people has security experts worried, because hackers could turn the technology around to spy on what people are doing inside their homes by revealing which room they are in and even what they are doing. It would be a good way for a burglar to verify that nobody is home or that residents are asleep.

Even scarier, a good WiFi detector might not need hacking at all – somebody standing outside your home might be able to see what people are doing through the walls. In 2023, researchers at Carnegie Mellon were able to use an AI engine called DensePose to generate body shapes from WiFi signals.

The best use of WiFi sensing will probably come by working in conjunction with other technologies. Old technologies like motion detectors will be replaced with more sophisticated monitoring systems that provide homes and businesses with much richer data. But with better sensors and monitors comes an increased security risk of your data being used by outsiders.

Another G Generation

I’ve read several articles coming out of the Mobile World Congress trade show in Barcelona, Spain, and one of the common threads is that there was a lot of talk about 5.5G (or 5G Advanced) – the next iteration of 5G.

My first question on reading this was to ask what new features are being discussed that were not part of the original promises for 5G. I went back and read a few of my blogs and other articles written when 5G was first announced. The claims made at that time were that 5G would greatly increase download speeds, improve latency, and enable functions like driverless cars and remote surgery.

Since the original 5G announcements five years ago, carriers have poured billions of dollars into improving cellular networks. Speeds are definitely faster – the speeds on my cellphone are easily six or eight times faster – but are nowhere near the gigabit speeds that were originally bandied around the industry. Most of the claimed whiz-bang applications never came to pass. But what matters most is that the cellular carriers never found a good way to monetize the big investments and the improved speeds.

There are some new features of 5.5G that were not incorporated into the earlier 5G specifications, and the industry is nearly finished with a new standard for Advanced 5G. The most recent 5G specification was 3GPP Release 17, and the new features will be in 3GPP Release 18, which is mostly complete and locked and will be released in a few months. Vendors have already been experimenting with the new capabilities.

This article from Ericsson describes the technical details of Release 18. The new standard covers things like increasing the energy efficiency of cell sites, increasing upload speeds, incorporating machine learning and AI into cell sites, and supporting functions like cloud gaming.

As you would expect, there is a lot of discussion about how the new capabilities will suddenly result in new ways to monetize Advanced 5G. For example, the new specification will better support mixed reality headsets like the Apple Vision Pro and the upgraded Meta Quest Pro. There are fresh claims that the new standard will better support self-driving cars and smart manufacturing. But mostly, the expectation for monetizing 5G comes from a handful of ideas like private 5G networks and smart factories – things that have already been in the pipeline for 5G.

In case the introduction of Advanced 5G won’t be confusing enough for customers, the vendors and carriers are also talking about releasing 5G standalone. This is different from Advanced 5G and refers to a network that uses the original 5G features independently of 4G LTE. So far, U.S. 5G deployments have mostly been 4G LTE delivered using different spectrum than original 4G.

As usual in this part of the industry, the claims made for Advanced 5G are likely overstated. However, it’s important that 5G technology continues to evolve to match the way that we use mobile broadband. There is a lot more content being created on cellphones than was imagined five years ago, which means a lot of pressure for fast and reliable upload speeds. The new mixed reality headsets will put pressure on upload and download simultaneously. And cellular networks continue to get busier – the original premise of 5G was to use various techniques to use the network more efficiently.

BEAD Grants and ACP

Another chance to fund the Affordable Connectivity Program (ACP) just went by when Congress finally passed legislation to approve the budgets for the current fiscal year. There was a lot of lobbying to get an ACP extension included in one of the two budget bills that were recently enacted.

The FCC has already taken steps to end the program, which had 23 million participants. As of February 7, the program stopped accepting new participants. The FCC required ISPs to notify ACP recipients that the last full month of funding would be in April, with a possible partial discount in May if enough funds remain. Without Congressional action, the program will cease to exist when the funds run dry.

In October 2023, the White House asked Congress to approve an additional $6 billion to continue to fund the ACP. In a rare show of bipartisanship these days, a group of senators and representatives introduced the Affordable Connectivity Program Extension Act that would have provided $7 billion to extend ACP from unspent Covid-19 funds. Support for ACP poured in from all corners of the country from governors to local politicians. Just in my own neighborhood, the Land of Sky Regional Council Board of Delegates unanimously approved a resolution in support of ACP.

Most of the support and lobbying effort was aimed at getting ACP renewal included in the new budget bills. Now that that effort has failed, the chances of funding ACP look slimmer by the day.

The consequences of the end of ACP are still playing out. The BEAD legislation required ISPs requesting BEAD funds to participate in ACP, and State Broadband Offices were counting on ACP participation as a key part of the IIJA directive to offer affordable rates. ISPs are being pressured to self-fund and continue the ACP discounts, but without a mandate, very few of them will do so. I’ve heard from a number of ISPs that will extend the discounts for a few months past the end of May to see if Congress renews ACP, but it’s hard to think that many ISPs will continue the discounts for long after that.

It was not unexpected that we would end up in this situation. Social programs that don’t have a permanent source of funding routinely expire when the temporary funding runs dry. The expanded child tax credit that was part of the Covid relief funding also expired. The House passed a renewed expansion of the child tax credit, but it stalled in the Senate and also failed to make it into the newly passed budget bills.

I’ve heard rumors for years that the policymakers in DC never expected the ACP program to be permanent. The expectation of the original architects of the plan was that ISPs would bow to public pressure to fill the void when ACP ran dry. However, the giant ISPs are not likely to self-fund the discounts and smaller ISPs can’t afford to do so.

I’ve seen some recent articles arguing that the FCC could fund at least some of the ACP obligation out of the Universal Service Fund. Even if the FCC is willing to consider this, its normal processes are slow and cumbersome, and it’s hard to think this could happen much before the end of the year. But there doesn’t seem to be any sign that the Commissioners are willing to tackle this.

Even if ACP gets renewed later this year, it will be a mess. The process of onboarding customers to ACP is cumbersome, and it seems likely that every customer will need to start with a fresh application. A lot of customers are not going to jump through the hoops a second time to get the discount.