
Can the FCC Fund the ACP?

A lot of folks have been pleading with the FCC to pick up the tab to continue the Affordable Connectivity Program (ACP). They assume that the FCC has the ability to take on the ACP program inside the Universal Service Fund (USF). To make that work, the FCC would have to apply a monthly assessment against all broadband users – something the FCC should have the authority to do if it votes to reinstate Title II authority over broadband at its April meeting.

What might it look like for the FCC to absorb the dying ACP program? FCC Chairwoman Jessica Rosenworcel told Congress that rolling the ACP into the USF could add $9.00 to monthly broadband and telephone bills. She also cited an internal FCC report that found that broadband bills could increase between $5.28 and $17.96 per month. I decided to kick the tires on the FCC’s estimates.

Taking over the Existing ACP. The existing ACP has 23.3 million recipients. That includes 13 million cellular customers, with the rest using landline or fixed wireless broadband. It’s not easy to pin down the number of U.S. broadband customers that a fee might be assessed against. For example, there are numerous wholesale arrangements that would have to be defined – like assessing the fee on a landlord who includes broadband in the rent. Using a variety of sources, I assumed there are about 121 million total broadband customers that could be assessed a fee to support the ACP.

Funding the current ACP through an assessment on all broadband users equates to a monthly fee of $5.78. However, monthly ACP fund disbursements grew 28% over the last year, so an initial fee would have to be set higher to prepare for growth over the next year. That means the starting USF fee might have to be something like $7.50 per month, and there would have to be additional future increases until the ACP fund reached equilibrium. It’s not hard to envision the broadband fee growing significantly beyond $10 per month in a few years.

This also raises the uncomfortable question of giving low-income households a $30 monthly discount and then charging the same folks a fee to fund the program. If low-income households are excused from the USF fee, the fee for everybody else would increase by another 20% or more.
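
To check the math, here’s a minimal back-of-the-envelope sketch in Python. The recipient and subscriber counts are the estimates from above; the exemption scenario works out to an increase in the low-20% range, the same ballpark as the figure above:

```python
# Back-of-the-envelope ACP fee math, using the estimates cited above.
ACP_DISCOUNT = 30.00           # monthly ACP benefit per household ($)
recipients = 23_300_000        # current ACP enrollees
subscribers = 121_000_000      # estimated assessable broadband customers

monthly_cost = recipients * ACP_DISCOUNT    # ~$699 million per month
fee_all = monthly_cost / subscribers        # fee spread across everybody

# If low-income (ACP) households are exempted from the fee, the same
# cost is spread across fewer payers.
fee_exempt = monthly_cost / (subscribers - recipients)

print(f"fee on all subscribers:      ${fee_all:.2f}")        # $5.78
print(f"fee if ACP homes are exempt: ${fee_exempt:.2f} "
      f"(+{fee_exempt / fee_all - 1:.0%})")                  # $7.15 (+24%)
```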

Exclude Cellular from ACP. There is a lot of controversy about giving the ACP discount to cellular customers. Almost all of the cellular companies involved in the program are cellular resellers, and most of the suspected ACP fraud involves cellular ACP claims.

If ACP is limited to landline (and fixed wireless) customers, the broadband fee would be a lot smaller. With the current number of ACP enrollees, the FCC broadband fee would be roughly $2.54 per month. However, it seems likely that a lot of ACP recipients receiving the discount on cellphones would convert that to a home broadband connection, which would quickly boost the fee.

The most common qualification for the ACP is participation in SNAP, the program that provides food subsidies for low-income households. There are currently 21.6 million households that get SNAP benefits, and if all of them applied for the ACP discount, the monthly fee needed to fund the ACP would equate to $5.36. The current economy has historically low unemployment rates, and a future dip in the economy could quickly add to the households eligible for SNAP and the ACP.
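
The same arithmetic, sketched for the two alternative scenarios above. The 13 million cellular recipients and the 21.6 million SNAP households come from the figures cited in this post; the landline-only result lands within a penny of the $2.54 figure, with the small difference presumably coming from rounding the enrollment counts:

```python
# Variations on the fee math, reusing the 121 million subscriber base.
ACP_DISCOUNT = 30.00
subscribers = 121_000_000

# Scenario 1: limit ACP to landline/fixed-wireless recipients.
landline_recipients = 23_300_000 - 13_000_000    # exclude cellular
fee_landline = landline_recipients * ACP_DISCOUNT / subscribers

# Scenario 2: every SNAP household signs up for the discount.
snap_households = 21_600_000
fee_snap = snap_households * ACP_DISCOUNT / subscribers

print(f"landline-only ACP: ${fee_landline:.2f}")   # ~$2.55
print(f"full SNAP uptake:  ${fee_snap:.2f}")       # ~$5.36
```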

Assessing a Fee on Broadband Isn’t Easy. It’s more challenging than you might think to assess a fee on every broadband customer. A fee on single-family homes and standalone businesses is fairly straightforward. But there are a lot of complicated broadband billing arrangements. Landlords for both residential and business tenants often build broadband into the rent – and landlords might drop broadband rather than pay a fee for every tenant. There are many arrangements providing free broadband to public housing. And there are many varieties of wholesale broadband relationships that would have to be figured out.

Impact of Raising Rates. It’s not hard to imagine the furor that would ensue if people dropped their broadband connections as unaffordable because of the extra fee. One of Chairwoman Rosenworcel’s fears is that funding broadband this way would push a lot of broadband rates to an unaffordable level.

Conclusion. I think Chairwoman Rosenworcel is in the right range with her estimate if you trend the current ACP recipients to grow for a few more years. However, the FCC has alternatives. If ACP recovery was limited to home broadband and not cellphones, it looks like the fee might top out at $6 or $7 – lower than her $9 projection. If cellphones remain eligible for the ACP, it’s not hard to envision the USF fee growing far past her cited $9 fee – that might be how the FCC predicted a $17 fee.

But the real issue isn’t the size of the monthly fee – it’s whether the FCC is willing to take on the responsibility. If the FCC were to assess a $5 – $7 fee on every broadband user, the agency would be in the crosshairs of both sides of the political spectrum. Realistically, it also seems likely that an attempt by the FCC to implement such a fee would be challenged and end up in court for years – which wouldn’t help anybody.

The FCC is obviously being cautious, and it might be right to be. Tackling such a controversial and high-visibility solution would put the FCC under a lot of scrutiny, which might even bring the entire Universal Service Fund under attack. I know it’s not the answer that people want to hear, but the best solution is for Congress to fix the ACP – unfortunately, nobody is feeling very hopeful about that.


FCC to Reimpose Broadband Regulation

The FCC will vote on reimposing Title II authority over broadband at its April 25 meeting. It seems likely that the proposal will pass since three Commissioners have already expressed support for the idea. The proposed order is 434 pages long and includes 2,921 footnotes. Hopefully, this summary will suffice for anybody but full regulatory nerds like me.

The press is largely going to label this as the FCC putting net neutrality back in place. However, net neutrality is only a small portion of the regulatory changes that accompany reimposing Title II authority over broadband. The national conversation would be more useful if the question asked was whether people think broadband should be regulated – and it’s likely that a large percentage of folks don’t like a world where giant ISPs set the rules and prices.

Anybody who follows telecom regulation knows that regulating broadband at the federal level has been on a roller-coaster ride that follows the party that wins the White House. Chairman Tom Wheeler, who led the FCC under President Obama, implemented net neutrality rules tied to the existing Title II regulation. Chairman Ajit Pai, who led the FCC under President Trump, canceled both Title II authority and the net neutrality rules to try to make it harder for future FCCs to reinstate broadband regulation. The Pai FCC went so far as to wash its hands of remaining broadband regulation and defaulted to the Federal Trade Commission as the final say on some broadband issues. The current move to reimpose Title II regulation was only enabled after a Democratic president nominated, and Congress finally approved, a fifth Commissioner to replace Chairman Pai. It almost seems inevitable that if the White House changes parties again, the roller-coaster ride will repeat.

As a backdrop, while Chairman Pai was killing Title II authority, a federal court ruled on a previous challenge to Chairman Wheeler’s net neutrality order and concluded that the FCC has the regulatory authority to implement net neutrality as long as Title II regulations are in place. This should mean that any challenge to the actions of the current FCC will need a different tactic to attack the new Title II authority.

The current proposal from the FCC differs in some areas from the Tom Wheeler set of rules. In addition to reimposing net neutrality, the new rules will enable the FCC to monitor broadband outages, give the FCC more authority over network security issues, and increase the protection of consumer data. The new rules will also establish national net neutrality rules that preempt state rules like the ones created in California – although the FCC said it will tread lightly in these areas and let the state rules serve as an experiment.

It’s natural to ask why we need Title II regulation, since the press rarely talks about broadband regulation in terms that consumers can understand. Here are just a few of the things that can happen after the FCC reintroduces Title II regulation:

  • The FCC used to have a broadband complaint process where the agency would intervene in cases of bad behavior by ISPs. Consumers could plead for relief from particularly egregious ISP behavior, and the FCC often required ISPs to set things right. The FCC also had the authority to dictate policies related to broadband customer service.
  • While it has never exercised this power, the FCC has the ability to regulate rates under Title II. This is the big bogeyman that worries ISPs. The FCC has in the past used this power to coax ISPs to cut back on practices like data caps.
  • The FCC used to have the authority to make ISPs refund money to customers when ISPs overbilled or otherwise cheated customers.
  • The FCC used to intervene and mediate disputes between ISPs over network practices. That ability died when Title II authority was killed.
  • The FCC had the authority to fine ISPs that engaged in bad behavior with customers – that largely died when Title II authority was killed.
  • The FCC had more authority to act against hacking and other behavior by bad actors.

Anybody who has been reading my blog knows that I am a huge fan of some basic level of broadband regulation. It seems irresponsible for the government not to have any authority over the actions of what can be argued to be the most important industry in the country. It’s an industry that is largely dominated by a handful of duopoly players who serve the large majority of customers in the country. Broadband is vital to both the economy and to people’s everyday lives, and it’s almost unfathomable that the FCC hasn’t been looking out for the public in the six years since Title II authority was killed.

Reimposing Title II authority is far from ideal since it won’t stop the roller-coaster ride if there is a future change of parties. A much better solution has always been for Congress to give the FCC specific authority to regulate broadband. That would also cut back on lawsuits that challenge the FCC’s authority to create regulations. But Congress hasn’t done anything major along these lines since the Telecom Act of 1996, passed during the early days of dial-up access. It doesn’t seem like a big ask to give the FCC permanent authority over broadband, and the failure of Congress to do so is evidence of the stranglehold that ISP lobbyists have on Capitol Hill. I’ve been hoping for Congressional action for over twenty years – and maybe they will surprise me one of these years and do the responsible thing.


BEAD Grant Contracts

One of the steps in the BEAD grant program that isn’t being talked about is the contract that an ISP must sign with a Broadband Office before officially being awarded a grant. While the whole industry has been focused on creating good grant applications, the grant contract is the most important document in the grant process because it specifically defines what a grant winner must do to fulfill the grant and how it will be reimbursed.

The grant contract is going to define a lot of important things:

  • This is the document that will define the line of credit that must be provided. If an ISP has elected a line of credit that can be decreased over time, make sure the contract defines the specific events that will allow for a reduction in the size of the line of credit.
  • The contract is going to define the specific environmental studies that are required, along with the timing of the environmental work. A lot of BEAD grant recipients are going to be disappointed if they are required to complete time-consuming environmental studies before starting any other work. Note that just like the rest of the industry, the folks who do environmental studies are likely to get quickly backlogged with BEAD work and may take a lot longer than normal.
  • The contract is going to define how the Broadband Office envisions implementing the many issues that were in the grant application. Regardless of what an ISP might have proposed in the grant application, the Broadband Office is going to try to use the contract to impose their will for items like setting rates. It’s important to note that an ISP doesn’t get what they proposed in the BEAD grant application – the real negotiation for how the grant is going to work happens in agreeing to a contract.
  • Perhaps the most important part of the contract is that it is going to define how the ISP will get reimbursed for completed work. Many states are talking about reimbursing ISPs based on meeting specific milestones. Be very careful to understand specifically what this means, because it might mean waiting many quarters, or even a year, before seeing a check from the grant office. The natural inclination of ISPs is to order all of the materials to build a network when the grant is awarded – but that is not a good idea if the payments for those materials aren’t coming for a long time. Note that payments tied to milestones likely mean an ISP must front all of the money for engineering and labor long before reimbursements are made. This is a use of cash that ISPs might not be expecting. The ideal reimbursement plan is one that pays invoices on a monthly or quarterly basis as grant work is completed.
  • The grant contract is going to define the terms of grant compliance. For example, the BEAD grants require a lot of details concerning the grant labor force that haven’t been included in previous grants. The contract is going to define how the ISP proves to the Broadband Office that it is complying with the many BEAD requirements. In the case of labor, and many other requirements, documented full compliance is likely going to be required before a Broadband Office ever writes the first reimbursement check.
  • The contract is likely to have an expected completion date. The contract might require an ISP to finish the construction in the time that the ISP proposed in the grant application – while also imposing delays through things like environmental studies, compliance, and reimbursement rules that might make it hard for the ISP to meet that schedule.

It’s important to note that ISPs are not required to sign the contract first offered to them. A grant contract is like any contract, and the terms can be negotiated – with the caveat that a Broadband Office can’t negotiate away requirements that were included in the law that created the BEAD grants. Expect to be shocked by some of the requested contract terms included in the first draft of the contract.

Finally, note that a contract is binding even if you sign it with terms you don’t like. There have been ISPs that walked away from other grant programs when the offered contract was too harsh. Don’t be in such a hurry to get started that you sign a contract you can’t live with.


First Look at Broadband Labels

The FCC’s Broadband Labels had to be implemented by ISPs with more than 100,000 customers on or before April 10 – and not surprisingly, many ISPs waited until the last day. The FCC hoped that the labels would provide “clear, easy-to-understand, and accurate information about the cost and performance of high-speed internet services.” I looked at a lot of the labels this past week, and as you might expect, the actual labels often fall far short of the FCC’s goal. I’m not going to use this single blog to try to rate and rank the various labels, but I will highlight a few of the things I found.

The first observation is that the labels are generally hard to find – they are not prominently displayed on ISP websites. This is because the FCC rules say that ISPs only have to display the labels at ‘points of sale’. ISPs have interpreted this to mean that a customer must first submit a valid address to the ISP website, and then typically navigate through several more links to find the labels. Even after entering an address, the links to broadband labels are often not clearly identified, and it was a challenge to find the labels for some ISPs. I thought one of the purposes of the labels was to make it easier for the public to comparison-shop between ISPs – but finding the labels usually takes a lot of work, especially for somebody who isn’t familiar with navigating ISP websites.

The one big benefit of the labels for most ISPs is that they make it easier to find broadband prices. Over the last few years, it’s grown increasingly difficult to find the list price for broadband on big ISP websites – the price that customers pay at the end of a special promotion rate. ISPs are now disclosing the full list price on the labels.

One exception to showing list prices is Comcast. The company shows the promotional rates in bold for many broadband products and only shows the list price in fine print. Comcast is also deceptive about the cost of its broadband modem. All it says is that the modem is optional, without mentioning that the modem rental costs $15 – or that a Comcast modem is mandatory to get some features. I rate the Comcast labels as being just as deceptive as the company’s website was before the labels. But Comcast isn’t the only ISP that isn’t open and clear about the modem rental. I’m guessing that big ISPs are rationalizing that WiFi and the modem are not part of the broadband product as a way to keep them off the label. Any ISP not disclosing modem prices and policies is creating a hidden fee.

One of the features of the labels is that an ISP is supposed to provide a plain-English description of its technology and network practices. Most ISPs failed at this, and a customer trying to compare two competing ISPs is not going to understand the difference in their technologies by using the broadband labels.

Consider Verizon. The network management section of its label mixes together descriptions of its wide range of technologies rather than describing each separately. There are a few things that a shopper for FWA service ought to be told: 1) the FWA product is delivered over the same network that delivers bandwidth to cellphones, 2) the key factor that determines the speed for a customer at a given tower is the distance between the customer and the tower, and 3) broadband can be throttled if the cell site gets busy. Verizon discloses the third item but overall fails at describing how FWA works.

The labels are not going to tell the public much about speeds. A few ISPs, like Verizon FWA and T-Mobile FWA, are honest and report a range of speeds. Cox is relatively honest and says that speeds are ‘up-to’ the cited marketing speed for a given product. But most big ISPs are claiming they deliver speeds in excess of advertised rates. Charter says speeds are at the advertised speed or faster. Comcast, CenturyLink, Mediacom, and Sparklight all cite ‘typical speeds’ which are all faster than the advertised speed – some significantly faster. This is the first time I’ve seen the term ‘typical speed’, and I have no idea what ISPs mean by it.

Windstream took an interesting approach to the broadband labels and only created labels for fiber customers, not for older DSL. I don’t know if that meets the FCC requirements, but Windstream reports 100 Mbps capability for DSL in some markets on the FCC map, and that feels like something that should have a label.

All of the labels must disclose latency, and many of the latency numbers cited seem unrealistically low. I think the ISPs are citing the latency between their headend and the customer, not the latency that a customer can expect in getting to the Internet. If so, this also feels deceptive to me.

Overall, the Broadband Labels do not fulfill the FCC’s goals of making it easier for customers to understand broadband products. It is a relief to see most ISPs disclose prices – but if Comcast gets away with highlighting marketing promotional rates, the labels for other ISPs might change soon to match. Disclosures on speeds are mostly a joke – and most customers are going to be surprised to find that their ISP is bringing them faster speeds than what they are paying for (sarcasm alert). For the most part, the descriptions of network practices are not written in plain English to help a potential customer understand the technology being used. The carefully crafted lawyer language in these sections makes it hard for even experienced industry folks to understand network management policies.


The Future of Broadband

The earlier blogs in this series looked at the growing demand for broadband speeds and broadband usage. I then went on to look at what the likely future demand might mean for last-mile and middle-mile networks.

There were some interesting conclusions included in the four blogs:

  • The demand for broadband speed has grown at a rate of 21% per year since 1999, when the best broadband available to homes was 1 Mbps from DSL or a cable modem. If that rate of growth holds up for the next 25 years, the definition of download broadband in 2049 will be 10 gigabits. That may sound outlandish, but 25 years is a long time – more than a third of households already subscribe to gigabit speeds, and we’re already building last-mile networks capable of 10-gigabit speeds. Even if the demand growth curve slows down and doesn’t reach that level, the demand for speed in 25 years is bound to be a lot higher than today’s.
  • The demand for broadband usage has grown at a slower but steady pace. If the current rate of growth from 2022 to 2023 (11% to 12%) remains steady, the demand for broadband usage in 25 years would be 12-15 times larger than today (the short sketch after this list reproduces the compounding). That would mean the average future home and business would use over 5 terabytes of broadband per month.
  • Only fiber technology will be able to satisfy the demand for speed and usage in 25 years. Current fiber deployments will require upgrades of last-mile lasers to something like 40G PON, which is now being developed by vendors.
  • Coaxial networks will not be able to meet the demand of 25 years from now, and during that time will have to be upgraded to fiber.
  • Wireless technologies will not be able to meet future demands unless the FCC rearranges spectrum to provide larger channels.
  • Existing middle-mile networks cannot handle the expected future demands and will need to be upgraded over time to speeds greater than one terabit. Unfortunately, most existing middle-mile fiber cannot handle faster lasers and will have to be replaced. In fifteen or twenty years, we’ll experience a middle-mile crisis when major investment will be needed to keep the networks functioning.

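Both of the first two bullets are straight compound growth. Here is a minimal Python sketch of that arithmetic, using the rates cited above; at 11% to 12%, the 25-year usage multiple lands between roughly 13x and 17x, overlapping the 12-15 times used in this series:

```python
# Compound growth behind the speed and usage predictions above.
YEARS = 25

# Speed: 21% per year from today's 100 Mbps definition of broadband.
speed = 100 * 1.21 ** YEARS                   # in Mbps
print(f"download definition in 25 years: {speed / 1000:.1f} Gbps")  # ~11.7

# Usage: 11% to 12% per year compounds to a low-to-mid-teens multiple.
for rate in (0.11, 0.12):
    print(f"usage multiple at {rate:.0%}/year: {(1 + rate) ** YEARS:.1f}x")
```
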
I’m the first to admit that I don’t have a crystal ball, and these predictions are not precise. But I’m positive that greater broadband demand is coming over time. My predicted time frame is not the important message – what matters is that increased demand is coming in the future. After thinking about everything discussed in the last four blogs, I reached the following conclusions:

  • We should start soon to develop a strategy to bolster middle-mile fiber routes. We’ll be facing a crisis in 15-20 years where most middle-mile fiber will have to be replaced to accommodate faster lasers.
  • Commercial companies are building some new middle-mile fiber, but not at a pace that’s needed. Any failure to upgrade secondary middle-mile fiber routes will mean that rural areas and small towns will become bandwidth starved in the foreseeable future, even if the last-mile technology in those places has been upgraded to fiber.
  • Regulators and policymakers should consider future demand before giving out money to build broadband infrastructure. The failure to consider future broadband needs has repeatedly resulted in the FCC and states providing grant funding for broadband infrastructure that can’t meet predictable future demand. Any broadband infrastructure funded by grants should have the capacity to handle the expected demand during its expected useful life. To expect less means funding networks and technologies that will become obsolete too soon.
  • The expected future growth in demand means that every existing broadband network will have to be upgraded at some point in the next 25 years. For example, even an XGS-PON network built today will likely be obsolete in less than 25 years.
  • We need to invest in strategies that relieve broadband traffic from having to go to and from the major Internet POPs. That might include strategies to increase the use of edge-computing, caching more data locally, and creating many more peering points closer to local ISPs.

The Future of Middle-Mile Fiber

The earlier blogs in this week’s series looked at the increased demand over time for broadband speed and usage and concluded that the continued growth of broadband demand will ultimately put great stress on last-mile networks – to the point where, eventually, fiber becomes the only viable broadband technology.

But what about middle-mile fiber routes? Middle-mile networks have gotten faster over time. In the 1990s, when the predominant last-mile technology was dial-up Internet, the predominant middle-mile technology was the DS3, which delivers 45 Mbps. As millions of people started using broadband, middle-mile networks were upgraded to 1 Gbps and then 10 Gbps lasers. For the last decade, the predominant middle-mile technology has been 100 Gbps lasers. Recent middle-mile construction and upgrades use 400 Gbps lasers, and vendors have already developed and field-tested terabit lasers with a speed of 1,000 Gbps.

One of the primary conclusions in an earlier blog is that 25 years from now, our broadband networks will likely be carrying 12 to 15 times the volume of data that they carry today, and probably more.

Unfortunately, there is no easy path to upgrade middle-mile networks to keep up with the expected future demand. Consider the following table showing the capacity of middle-mile lasers, starting with the 100 Gbps laser that is used on most middle-mile routes today.

Laser speed                      Multiple of current capacity
100 Gbps (most routes today)     1x
400 Gbps (new construction)      4x
1 Tbps (field-tested)            10x

This table implies that terabit lasers alone will not satisfy a future broadband demand that is 12-15 times larger than today’s. Within 25 years, some of the big middle-mile routes will need to be upgraded to something faster than terabit lasers.

Unfortunately, faster lasers alone can’t satisfy the future demand for increased middle-mile bandwidth. Achieving faster speeds on middle-mile routes is going to require replacing a lot of existing middle-mile fiber with fiber that is clearer and has fewer microscopic impediments. A lot of the fiber in use today can’t handle terabit or faster lasers.

Just like coaxial networks, many of our middle-mile routes are aging. We’re already seeing that some of the middle-mile routes built in the 1980s and 1990s are deteriorating. The fiber built in those decades was not as clear as today’s fiber, and we used construction techniques that stressed the fiber, which eventually results in microscopic cracks that impede the light signal. We’ve since developed both clearer fiber and better construction techniques, but those improvements come too late for the older existing fiber routes.

We are probably going to have to use a combination of three strategies to handle the middle-mile demand over the next 25 years:

  • Rip and replace current fiber with higher-quality fiber that can handle terabit or faster lasers.
  • Build a lot of new fiber alongside existing fiber routes to handle the increased capacity.
  • Employ strategies for reducing the demand on middle-mile networks.

Realistically, it will take a combination of all of these strategies to handle the future expected demand. Some of the demand will be handled by replacing existing fiber routes. Some will be handled by building new fiber alongside existing middle-mile fiber routes. But a lot of the solution must come from reducing the strain on middle-mile networks.

There are several strategies that can reduce traffic on middle-mile fiber routes.

  • Edge Computing. Starting around 2010, there was a national trend to move data processing to large data centers. However, it’s become clear that some computing functions are better handled at the edge, meaning close to customers. As an example, it’s far more efficient to build a small data center at a smart factory so that the data processing can be done locally to give real-time instructions to manufacturing machinery. It’s also smarter to put the data processing in the factory so that the factory doesn’t shut down if there is a broadband outage in the network connected to the factory. When a factory converts to an edge-computing configuration, there is a significant decrease in the amount of bandwidth being carried over a middle-mile network. ISPs don’t have much power to move data processing to the edge, other than perhaps persuading large broadband users that this is a better configuration.
  • Caching. This means storing data close to users. The best example of this today is Netflix. The company has over 80 million customers in the U.S. and Canada, and a large percentage of those folks watch the same set of popular shows. Netflix has been caching content closer to customers by placing servers with large ISPs in larger markets, where it stores copies of the most popular shows on the network. This means that a lot of Netflix content is served locally from an ISP’s last-mile network and doesn’t need to use the middle-mile. Netflix sends one copy to an ISP rather than sending large numbers of individual video streams (the sketch after this list puts rough numbers on the savings).
  • Peering. This means carriers swapping broadband traffic to each other rather than sending all traffic to the major Internet hubs. Most large ISPs already have direct connections to the biggest bandwidth users – companies like Google, Netflix, Microsoft, Amazon, and Facebook. But peering can go a lot further. Peering today is generally only done by the largest ISPs in the largest markets. The way to increase the use of peering is to establish many new peering sites around the country. Far too much traffic is exchanged today at the giant carrier hotels located in major cities.

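To put rough numbers on the caching example: the viewer count, stream bitrate, and file size below are purely illustrative assumptions, not Netflix or ISP figures, but they show the scale of the middle-mile savings:

```python
# Illustrative (assumed) numbers for one market served by a single
# middle-mile route -- not actual Netflix or ISP figures.
concurrent_viewers = 20_000    # assumed peak simultaneous streams
stream_mbps = 15               # assumed HD/4K stream bitrate
episode_gb = 10                # assumed size of one cached episode

# Without a local cache, every stream crosses the middle-mile route.
uncached_gbps = concurrent_viewers * stream_mbps / 1000
print(f"middle-mile load without caching: {uncached_gbps:,.0f} Gbps")  # 300

# With a local cache, the middle-mile carries one copy of each title,
# and all 20,000 streams are served from the ISP's last-mile network.
print(f"middle-mile load with caching: one {episode_gb} GB transfer per title")
```
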
The bottom-line conclusion is that our current middle-mile routes are inadequate to meet a future demand that will increase broadband traffic by a factor of 15 or more. While the increase in last-mile broadband demand can be handled by faster fiber lasers, the existing fiber on most middle-mile routes cannot handle terabit and faster lasers. This means we’ll have to replace a lot of middle-mile fiber while also upgrading electronics. There are a lot of questions about who might fund such a massive upgrade. And even if the major middle-mile routes are replaced, most middle-mile fiber is regional and will need the same treatment. My prediction is that within 15 to 20 years, we’ll be having a lot of discussions about the impending collapse of middle-mile networks, and carriers will be begging the FCC or Congress to provide a huge subsidy program to upgrade networks.


The Future of the Last Mile

The last two blogs in this series looked at the broadband demand for speed and usage. The first blog predicted that the demand in 25 years for broadband speeds could be as much as 100 times greater than today’s definition of broadband of 100 Mbps download. The second blog predicted that the demand for broadband usage in 25 years could conservatively be 12 to 15 times more than today, and could be a lot more.

Today’s blog looks at what that kind of future demand means for last-mile technologies. The fastest broadband technology today is fiber, and the most common fiber technology is the passive optical network (PON), which brings broadband to local clusters of customers. The original PON technology deployed in the early 2000s was BPON, which had the capability to deliver 622 megabits of speed to share in a cluster of 32 homes.

The next PON technology, introduced widely around 2010, was GPON. This technology uses faster lasers that deliver 2.4 gigabits of speed to share in a cluster of 32 homes. The industry has pivoted in the last few years to XGS-PON, which can deliver 10 gigabits of bandwidth to a neighborhood cluster of homes. Vendors are already working on a PON technology that will deliver 40 gigabits to a cluster of homes. CableLabs is working on a PON technology it has labeled CPON that will deliver 100 gigabits of speed to a cluster of homes.

Consider the following table that shows the increase in last-mile fiber bandwidth that comes with PON technologies:

Technology    Shared bandwidth    Multiple of GPON
BPON          622 Mbps            0.26x
GPON          2.4 Gbps            1x
XGS-PON       10 Gbps             4.2x
40G PON       40 Gbps             16.7x
CPON          100 Gbps            41.7x

XGS-PON is a great upgrade but has only about 4 times the capacity of GPON. XGS-PON is not going to satisfy broadband needs in 25 years, when demand will be at least 12 to 15 times greater than today. By then, fiber ISPs will likely have upgraded to 40G PON, which has over 16 times the capacity of GPON. And there will be a lot of talk in 25 years of upgrading to something like CPON, with over 40 times the capacity of GPON.
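
One way to make these multiples concrete is to divide each generation’s shared bandwidth across the standard 32-home cluster. This is a simplification – real deployments vary cluster sizes, and nobody expects every home to draw its full share at once – but it shows the headroom each generation adds:

```python
# Shared PON bandwidth divided across a 32-home cluster (simplified).
CLUSTER_HOMES = 32
pon_gbps = {
    "BPON":    0.622,   # early 2000s
    "GPON":    2.4,     # ~2010
    "XGS-PON": 10.0,    # current upgrades
    "40G PON": 40.0,    # in development
    "CPON":    100.0,   # CableLabs proposal
}

for tech, gbps in pon_gbps.items():
    per_home = gbps * 1000 / CLUSTER_HOMES    # average Mbps per home
    multiple = gbps / pon_gbps["GPON"]
    print(f"{tech:8} {gbps:6.1f} Gbps shared | "
          f"{per_home:6,.0f} Mbps/home | {multiple:4.1f}x GPON")
```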

Something that cable executives all know but don’t want to say out loud is that cable networks will not be able to keep up with expected future demand over 25 years. The planned upgrade to DOCSIS 4.0 brings cable company technology close to the capability of XGS-PON. DOCSIS 4.0 will allow for multi-gigabit speeds over coax, but there is no planned or likely upgrade for coax to match the capabilities of 40G PON.

Any discussion about boosting the future capacity of cable networks is moot anyway. Most coaxial networks were built between the 1970s and 1990s, so in 25 years the copper will be between 60 and 80 years old. There is no question that the coaxial copper will be past its useful life by then.

A few cable companies have already acknowledged this reality. Altice announced a transition to fiber years ago but doesn’t seem to have the financial strength to complete the upgrades. Cox has quietly started to upgrade its largest markets to fiber. And all of the big cable companies are using fiber for expansion. Twenty-five years from now, all cable companies will have made the transition to fiber. Executives at the big cable companies know this, but in a world that concentrates on quarterly earnings, they are in no rush to tell shareholders about the eventual need for a costly infrastructure upgrade.

There is no possibility of wireless technology keeping up with the demand expected in 25 years. The only ways to increase wireless speeds and capacity are to greatly increase the size of wireless channels – which the FCC is unlikely to do – or to use much higher frequencies. We’ve already learned that millimeter-wave and higher frequencies can deliver much faster speeds but don’t play well in an outdoor environment in an end-to-end wireless network. This doesn’t mean that wireless ISPs won’t be delivering broadband for decades to come – but over time, wireless last-mile technologies will fall behind fiber in the same way that DSL slowly fell behind cable modems.

Unless satellite technology finds a way to get a lot faster, it won’t be a technology of choice except for folks in remote areas.

Mobile data is always going to be vital, but there will be major pressure on wireless companies to finally deliver on the promises of 5G to keep up with future demand for speed and bandwidth.


The Demand for Broadband Usage

My last blog looked at the long-term trajectory of broadband speed and showed that broadband speeds have grown at a continuous rate of 21% per year – which equates to a 100-fold increase over the last 25 years. Today’s blog looks at the trajectory of the demand for broadband usage.

The statistics that follow come from OpenVault, which tracks the nationwide average amount of monthly broadband consumed by businesses and residences over the past five years. These numbers combine download and upload usage.

Admittedly, the growth in 2020 was extraordinary due to the pandemic. But there is no question that, just as with broadband speeds, broadband usage has been growing steadily for both homes and businesses. Over the last five years, business broadband usage grew to 311% of its starting level (25.5% per year), and home broadband usage grew to 236% (18.8% per year).

The challenge in looking into the future is predicting the future growth rate of usage demand. Consider two different growth scenarios. The first assumes that future growth continues at the rate seen from 2022 to 2023 – 11.7% for businesses and 9.4% for residences. The second assumes a 12.5% growth rate. The comparison might be one of the best ways to show the incredible impact of compound growth over many years: the 12.5% growth rate is only 33% larger than the residential 9.4% growth rate, but over 25 years the higher rate more than doubles the predicted future demand.

To be conservative, I’ll use the first scenario’s growth rates in the additional analysis below. But even those rates show a 16-fold increase in average business usage and a 9-fold increase in average residential usage over 25 years. Most folks can probably buy the story that in a decade, the average business will use a terabyte of data per month while the average home will use 1.6 terabytes. However, I suspect most people are uncomfortable with a prediction that homes and businesses will use over 5 terabytes per month in 25 years – due to the difficulty of grasping the impact of compound growth.
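
A short sketch of the compounding, using the growth rates cited above. The 12.5% case ends up at roughly 19 times today’s usage versus roughly 9 times at 9.4% – the doubling described above:

```python
# Compounding of the usage growth rates discussed above.
def multiple(rate: float, years: int) -> float:
    return (1 + rate) ** years

for label, rate in [("business (11.7%)", 0.117),
                    ("residential (9.4%)", 0.094),
                    ("aggressive (12.5%)", 0.125)]:
    print(f"{label:20} 10 years: {multiple(rate, 10):4.1f}x | "
          f"25 years: {multiple(rate, 25):4.1f}x")
```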

What do these numbers mean in terms of the amount of bandwidth that our networks will have to carry? Applying the predicted broadband usage to the number of homes and businesses in the country yields the total nationwide demand. The 2018 and 2023 counts of homes and businesses come from the Federal Reserve, and I’ve projected the future number of homes and businesses using straight-line growth.

For those not familiar with the term exabyte, the calculation of 90.5 exabytes in 2023 starts with the number of monthly gigabytes being used nationwide. Each successive unit of measurement reflects a factor of 1,024 – for example, 1 terabyte equals 1,024 gigabytes. The 90.5 exabytes for 2023 could also be stated as:

  • 97,173,635,072 gigabytes
  • 94,896,128 terabytes
  • 92,672 petabytes

This analysis predicts an overall 12-fold increase in broadband usage over 25 years. Don’t forget that these numbers only cover residential and business broadband usage. There are a lot of other sources of broadband traffic, which means that the usage on networks will be higher than predicted here. Not included are things like data generated from mobile devices, data generated by governments and universities, data generated by outdoor sensors and farming, data generated by self-driving cars and robots, data used to monitor and operate the electric grid and green energy production, and data used to monitor and operate broadband networks. There is good evidence that these other uses of broadband are growing faster than home and business use.

We understand the factors that have contributed to the growth in broadband usage over the last five years. These include things like the following:

  • Most of the software that homes and businesses use is now located in the cloud. A good example is the Microsoft Office suite of software. Five years ago, users ran Excel, Word, Outlook, and PowerPoint on software loaded on home and business computers. Today, the majority of this software operates in the cloud – meaning the computing is done in Microsoft data centers over a broadband connection.
  • The delivery of online news and similar content has largely migrated to video.
  • A huge percentage of homes and businesses have Internet-connected devices that connect to the cloud. This can be a wide range of devices like computers, tablets, smartphones, TVs, appliances, gaming consoles, burglar alarms, cameras, smoke detectors, etc. A recent survey by Parks Associates reports that the average U.S. home has 17 connected devices. These devices connect to and communicate with the cloud without active intervention from users – what is defined as machine-to-machine traffic – and this is the fastest-growing segment of broadband usage.
  • 30 million homes cut the cord and dropped traditional cable TV in the last five years. These homes now get all of their video entertainment from online sources. Additionally, the quality of video has improved significantly, from standard definition to high definition to 4K.
  • Other entertainment markets like gaming have moved to the cloud.

Nobody has a crystal ball that can predict how we’ll use broadband 25 years from now. We’re always on the verge of new uses that consume more bandwidth. The migration to high-quality video continues, with web services starting to use 8K video. Spatial computing from devices like the Apple Vision Pro shows the potential for combining virtual and augmented reality with the real world. Probably the biggest change to bandwidth on the immediate horizon is the use of AI throughout the economy.

But it seems inevitable that the demand for broadband usage will continue to grow. It’s hard to imagine a world where the growth in demand would stop. Using a conservative growth rate for broadband demand would increase overall broadband usage by 12- to 15-fold over the next 25 years. It’s not hard to imagine new technologies that could double that future predicted demand.


The Demand for Broadband Speed

This is the first in a series of blogs this week that will look at the long-term trajectory of the broadband industry.

The recent decision of the FCC to increase the definition of broadband from 25/3 Mbps to 100/20 Mbps got me thinking about the long-term trajectory of the demand for broadband speed. For many years, Cisco issued reports showing that the demand for speed was growing at roughly 21% per year for residential broadband, and a little faster for business broadband. Cisco and others noted that the curve of broadband speeds had been on a relatively straight line back to the early 1980s.

It’s not hard to test the Cisco long-term growth rate. The following table applies a 21% growth rate to the 25/3 Mbps definition of broadband that was established by the FCC in 2015.
Year    Definition at 21% annual growth
2015    25 Mbps
2016    30 Mbps
2017    37 Mbps
2018    44 Mbps
2019    54 Mbps
2020    65 Mbps
2021    78 Mbps
2022    95 Mbps
This table is somewhat arbitrary since it assumes that broadband demand in 2015 was exactly 25 Mbps – but there was widespread praise for that definition at the time, other than from ISPs who wanted to stick with the 4/1 Mbps definition. This simple table accurately predicts that we would be talking about the need to increase the definition of broadband to 100 Mbps download around 2022 – which is exactly what happened. The FCC had to deal with political issues and wasn’t able to make the change until March 2024, but the 100 Mbps definition the FCC wanted in 2022 sits at a 21% compounded annual growth rate from the definition the FCC had established in 2015.

I can’t think of any fundamental changes that would keep this same growth in demand from happening in the near future. Consider a projection that starts with the assumption that 100 Mbps is the right definition of broadband in 2022. Growing that number over time by the same 21% results in the following table, which predicts that by 2030 we should be having the conversation about increasing the definition of download broadband to 500 Mbps. That prediction seems very reasonable to me.
Year    Definition at 21% annual growth
2022    100 Mbps
2023    121 Mbps
2024    146 Mbps
2025    177 Mbps
2026    214 Mbps
2027    259 Mbps
2028    314 Mbps
2029    380 Mbps
2030    459 Mbps
However, 2030 is only six years from now, and today’s topic is looking further into the future. One way to think about future demand is to look back at broadband speeds 25 years ago. In 1999, telcos and cable companies offered roughly 1 Mbps connections – DSL and early cable modems – as an upgrade to dial-up, and 1 Mbps became the de facto definition of broadband at the time. Twenty-five years later, the definition of broadband was increased to 100 Mbps, a 100-fold increase. This tracks directly with Cisco’s reported growth rate – the growth rate of download speed between 1999 and the new definition works out to be 21.2% per year.
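
Working backward, the implied annual rate depends slightly on how you count the years. A quick sketch, assuming 24 annual compounding steps between the 1 Mbps and 100 Mbps definitions, reproduces the figure:

```python
# Solve for the annual growth rate implied by 1 Mbps -> 100 Mbps.
start_mbps, end_mbps = 1.0, 100.0
years = 24                     # assumed number of annual compounding steps
rate = (end_mbps / start_mbps) ** (1 / years) - 1
print(f"implied growth rate: {rate:.1%} per year")   # ~21.2%
```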

There are a lot of reasons to think that the demand for faster speeds will keep growing. Every year we find more uses for fast broadband. If we plot the demand for broadband speeds forward at the historical rate of growth, demand in 25 years would be 100 times higher than it is today. That would mean the right definition of broadband in 25 years would be 10 gigabits.

I know that a lot of people will jump all over this prediction and say it’s ludicrous and unrealistic. But consider the last 25 years. You would have been hard-pressed to find anybody in 1999 who would have predicted that the definition of download speed in 2022 would be 100 Mbps. This is partially because the human mind has a hard time accepting the results of compounded growth – the result after many years of growth always feels too large. I was already running my consulting company in 1999, and I don’t recall anybody who was visionary enough to predict a hundred-fold increase in broadband speeds over twenty-five years. Anybody saying so would have been laughed out of most industry forums – it would have sounded like fantasy. Yet here we are – the demand for download speed really did increase 100-fold since 1999.

There is one weakness in my argument – it’s very hard to pin down a concrete number for the demand for broadband speed. In the context I’ve been using (and the way the FCC looks at speed), broadband speed demand is a composite number encompassing the average of all broadband users. There is a wide range of opinions on the right definition of broadband speed. ISPs operating older and slower technologies still swear that 25 Mbps is all the speed anybody needs. Fiber ISPs think the definition should be gigabit since one-third of households are now subscribing to gigabit speeds. The fact that the FCC set the definition of broadband to 100 Mbps is an interesting data point – but the FCC definition of speed doesn’t mean much more than that it’s a conservative compromise of the many opinions from around the industry.

There are more concrete data points to consider, and the next blog in the series will look at the demand for broadband usage.


WiFi Sensing

It’s incredibly hard to keep things private in the new digital age. There are far too many stories circulating about people who talked to a friend on the phone or texted about something and almost instantly got hit with ads for the subject of the conversation. And that happens without malware – there is no telling what information you’re giving out if your devices have been infected with malicious software that is spying on you.

There is now a new way for folks to track and spy on you. A recent article in the MIT Technology Review described how WiFi sensing has become a usable technology. WiFi sensing measures the way WiFi signals bounce off objects in the environment, and those reflections can be reverse-engineered to show the shapes of the objects.

Scientists and vendors have been working for a decade to create a viable product using WiFi sensing. The two most investigated applications are monitoring breathing and detecting falls in the home. Breath monitoring is a useful tool for diagnosing problems like sleep apnea or identifying emergencies like a stroke or heart attack. Fall monitors are useful for seniors who remain in their homes since falls are one of the most common sources of injury for seniors.

However, both of those use cases have already been solved using other technologies. Breath monitors use other wireless frequencies or, more commonly, microphones. It’s also becoming common to use ultra-wideband radar to detect things since it has smaller wavelengths and can better define objects. There are already devices in places like hospitals and prisons that monitor whether patients and prisoners are where they are supposed to be and whether they are experiencing breathing issues.

For a WiFi system to produce the same resolution as ultra-wideband radar requires interpreting readings from multiple WiFi devices within a home. WiFi sensing is still an attractive idea since almost every home now has multiple WiFi transmitters in place. There are folks working to build the technology into smartphone and WiFi router chips, which would make WiFi sensing an everyday technology.

It’s easy to see the appeal of the technology. While a motion detector can tell you that a stranger is in your home, WiFi sensing has the potential to pin down exactly where they are and even give some indication of what they are doing. WiFi sensing is already good enough to detect human presence in a home with near-100% reliability.

One of Verizon’s new Fios routers includes a human presence detector powered by Origin Wireless. All of the WiFi devices in a home – smart plugs, speakers, burglar alarms, and the myriad other WiFi-enabled devices – contribute to making the detection effective. Cognitive Systems announced that its smart plugs will contain this new capability. It’s hard to think it won’t also soon be built into smart speakers, TVs, and other devices.

The ability to detect people has security experts worried because hackers could turn the technology around to spy on what people are doing inside their homes, showing the room they are in and even what they are doing. It would be a good way for a burglar to verify that nobody is home or that everyone is asleep.

Even scarier, a good WiFi detector might not need hacking at all – somebody standing outside your home might be able to see what people are doing through the walls. In 2023, researchers at Carnegie Mellon were able to use an AI engine called DensePose to generate body shapes from WiFi signals.

The best use of WiFi sensing will probably come by working in conjunction with other technologies. Old technologies like motion detectors will be replaced with more sophisticated monitoring systems that provide homes and businesses with much richer data. But with better sensors and monitors comes an increased security risk of your data being used by outsiders.
