Only Twenty Years

I’ve written several blogs arguing that we should award broadband grants based only on future-looking broadband demand. I think it is bad policy to provide federal grant funding for any technology that delivers speeds slower than what is already available to most broadband customers in the country.

The current BEAD grants use a definition of 100/20 Mbps to determine which households aren’t considered to have broadband today. But inexplicably, the BEAD grants then allow grant winners to build technologies that deliver that same 100/20 Mbps speed. The policymakers who designed the grants would allow federal funding to go to a new network that, by definition, sits at the nexus between served and unserved today. That is a bad policy for so many reasons that I don’t even know where to begin lambasting it.

One way to demonstrate the shortsightedness of that decision is a history lesson. Almost everybody in the industry tosses out a statistic that a fiber network built today should be good for at least thirty years. I think that number is incredibly low and that modern fiber ought to easily last for twice that time. But for the sake of argument, let’s accept a thirty-year life of fiber.

Just over twenty years ago, I lived inside the D.C. Beltway, and I was able to buy 1 Mbps DSL from Verizon or a 1 Mbps cable modem from Comcast. I remember a lot of discussion at the time that there wouldn’t be a need for upgrades in broadband speeds for a while. The 1 Mbps speed from the telco and cable company was an 18-fold increase over dial-up, and that seemed to provide a future-proof cushion against homes needing more broadband. That conclusion was quickly shattered when AOL and other online content providers took advantage of the faster broadband speeds to flood the Internet with picture files that used all of the speed. It took only a few years for 1 Mbps to feel slow.

By 2004, I had changed to a 6 Mbps download offering from Comcast – they never mentioned the upload speed. That was a great upgrade over the 1 Mbps DSL. Verizon made a huge leap forward in 2004 and introduced Verizon FiOS on fiber. That product didn’t make it to my neighborhood until 2006, at which time I bought a 30 Mbps symmetrical connection on fiber. In 2006 I was buying broadband that was thirty times faster than my DSL from 2000. Over time, the two ISPs got into a speed battle. Comcast made numerous upgrades that increased speeds to 12 Mbps, then 30 Mbps, 60 Mbps, 100 Mbps, 200 Mbps, and most recently 1.2 Gbps. Verizon always stayed a little ahead of cable download speeds and continued to offer much faster upload speeds.
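Here’s a quick back-of-the-envelope sketch of those jumps. The speeds and years are the ones from my own history above; the dial-up figure assumes a 56 kbps modem. A simple Python script, nothing rigorous:

```python
# Rough speed milestones from the broadband history above (Mbps).
milestones = [
    ("dial-up, ~2000", 0.056),    # assumes a 56 kbps modem
    ("Verizon DSL, ~2000", 1.0),
    ("Comcast cable, 2004", 6.0),
    ("Verizon FiOS, 2006", 30.0),
    ("Comcast today", 1200.0),
]

# Print each step as a multiple of the step before it.
for (prev_name, prev_mbps), (name, mbps) in zip(milestones, milestones[1:]):
    print(f"{prev_name} -> {name}: {mbps / prev_mbps:.0f}x faster")
```

Each generation looked like a comfortable cushion at the time, and each one was used up anyway.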

The explosion of broadband demand after the introduction of new technology should be a lesson for us. An 18-fold speed increase from dial-up to DSL seemed like a huge technology leap, but public demand for faster broadband quickly swamped that upgrade, and 1 Mbps DSL felt obsolete almost as soon as it was deployed. Every time there has been a technology upgrade, the public has found a way to use the greater capacity.

In 2010, Google rocked the Internet world by announcing gigabit speeds. That was a 33-fold increase over the 30 Mbps download speeds offered at the time by the cable companies. The cable companies and telcos said at the time that nobody needed speeds that fast and that it was a marketing gimmick (but they all went furiously to work to match the faster fiber speeds).

I know homes and businesses today that are using most of the gigabit capacity. That is still a relatively small percentage of homes, but the number is growing. Over twenty years, the broadband use by the average home has skyrocketed, and the average U.S. home now uses almost 600 gigabytes of broadband per month – a number that would have been unthinkable in the early 2000s.
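For perspective, it’s worth translating that monthly number into a continuous rate. A minimal sketch, assuming a 30-day month and decimal gigabytes:

```python
# Convert monthly usage into the implied 24/7 average rate.
gb_per_month = 600
seconds_per_month = 30 * 24 * 3600            # assume a 30-day month

bits = gb_per_month * 8e9                     # decimal GB -> bits
avg_mbps = bits / seconds_per_month / 1e6
print(f"{gb_per_month} GB/month is an average of {avg_mbps:.2f} Mbps, around the clock")
```

That works out to under 2 Mbps on average – which sounds small until you remember that usage is bursty, and what a household buys is the headroom for everybody’s peak moments to happen at once.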

I look at this history, and I marvel that anybody would think that it’s wise to use federal funds to build a 100/20 Mbps network today. Already today, something like 80% of homes in the country can buy a gigabit broadband product. The latest OpenVault report says that over a quarter of homes are already subscribing to gigabit speeds. Why would we contemplate using federal grants to build a network with a tenth of the download capacity that is already available to most American homes today?

The answer is obvious. Choosing the technologies that are eligible for grant funding is a political decision, not a technical or economic one. There are vocal constituencies that want some of the federal grant money, and they have obviously convinced the folks who wrote the grant rules that they should have that chance. The biggest constituency lobbying for 100/20 Mbps was the cable companies, which feared that grants could be used to compete against their slow upload speeds. But just as cable companies responded to Verizon FiOS and Google Fiber, the cable companies are now planning for a huge leap upward in upload speeds. WISPs and Starlink also lobbied for the 100/20 Mbps grant threshold, although most WISPs seeking grant funding are now also claiming much faster speed capabilities.

If we learn anything from looking back twenty years, it’s that broadband demand will continue to grow, and that homes in twenty years will use an immensely greater amount of broadband than they do today. I can only groan and moan that the federal rules allow grants to be awarded to technologies that can deliver only 100/20 Mbps. But I hope that state broadband grant offices will ignore that measly, obsolete, politically absurd option and only award grant funding to networks that might still be serving folks in twenty years.

2 thoughts on “Only Twenty Years”

  1. Please let me constructively disagree with portions of your argument. 1) Could you please read the Broadband Internet Technical Advisory Group (BITAG)’s “Understanding latency” paper, and try to deeply understand and explain its conclusions to your audience?

    https://www.bitag.org/documents/BITAG_latency_explained.pdf

    To summarize (and I would prefer that you and your readership buckle down and wade through it – at least the first 5 pages): more bandwidth than about 10 Mbps does not benefit web traffic at all, 25 Mbps covers a 4K movie, and despite wild extrapolations as to what else might require extreme bandwidth, we came up empty. Looking at the statistics I have for 50 Mbps service vs. gigabit service, I see roughly the same average bandwidth usage.

    What is needed more, moving forward, is lower and more consistent latency and jitter for the interactive applications of the future. Remarkably, most of these do not need more bandwidth!
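    A toy model makes the point. A page load is mostly a chain of sequential round trips (DNS, TCP, TLS, dependent requests) plus a raw transfer time; the numbers below are illustrative assumptions, not figures from the BITAG paper:

    ```python
    # Toy page-load model: sequential round trips plus transfer time.
    def load_time_ms(rtt_ms, mbps, page_mb=2.0, round_trips=20):
        transfer_ms = page_mb * 8 / mbps * 1000    # time to move the bytes
        return round_trips * rtt_ms + transfer_ms  # plus the RTT chain

    for mbps in (10, 100, 1000):
        for rtt_ms in (10, 50):
            print(f"{mbps:>4} Mbps at {rtt_ms:>2} ms RTT -> "
                  f"{load_time_ms(rtt_ms, mbps):5.0f} ms")
    ```

    Past roughly 100 Mbps the transfer term all but vanishes and the RTT term is everything, which is exactly why cutting latency helps where another 10x of bandwidth does not.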

    As for more bandwidth than even 25 Mbps, we have reached very real limits as to what the human eye can perceive (you cannot watch a separate Netflix movie with each eye!), and we are long past what the human ear can hear.

    All of that is backed up by easily repeatable, actual benchmarks, code, and field data about how the network actually works. That paper advocates that, in addition to improving bandwidth, we focus far, far more deeply on latency and jitter, and that focusing on bandwidth improvements at the cost of all else can lead, and has led, to a much worse Internet than we otherwise could have had.

    We have designed and built a network optimized for “speedtest”, which very little of our day-to-day traffic resembles.

    Reducing latency can come from a variety of technologies – beating bufferbloat universally is my principal bugaboo, but I will save that for another day – GPON fiber is slightly better than DOCSIS 4.0-LL, and active Ethernet fiber can have 10,000 (not kidding!) times less serialization delay than GPON does. However, many forms of wireless are quite competitive with DOCSIS, and close to what GPON can do.
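    Raw serialization delay is just frame size divided by line rate. A small sketch with nominal line rates (the much larger gap cited above comes from GPON’s shared TDMA grant scheduling, which this raw figure doesn’t capture):

    ```python
    # Serialization delay: the time to clock one frame onto the wire.
    frame_bytes = 1500   # a full-size Ethernet frame

    for name, gbps in [("DOCSIS (~1 Gbps channel)", 1.0),
                       ("GPON downstream (2.488 Gbps)", 2.488),
                       ("10G active Ethernet", 10.0)]:
        usec = frame_bytes * 8 / (gbps * 1e9) * 1e6
        print(f"{name:<30} {usec:5.2f} microseconds per frame")
    ```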

    An even bigger place to reduce latency is in moving data closer to the user – more towns should gain Internet exchanges so a call across town doesn’t have to go across the country and back, and so CDNs can sit nearby. That will do more to improve perceived “bandwidth” than a gigabit with higher RTTs.
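    One way to see why: a single TCP flow can’t move faster than its window divided by the RTT, so putting the content nearby raises achievable throughput even when the link speed stays the same. A sketch with an assumed 256 KB receive window:

    ```python
    # Per-flow TCP ceiling: throughput <= window / RTT.
    window_bytes = 256 * 1024   # an assumed receive window

    for rtt_ms in (5, 20, 80):  # nearby IX / regional / cross-country
        mbps = window_bytes * 8 / (rtt_ms / 1000) / 1e6
        print(f"RTT {rtt_ms:>2} ms -> at most {mbps:6.1f} Mbps per flow")
    ```

    At 80 ms a gigabit link still behaves like a ~26 Mbps link for a single flow; at 5 ms the same window sustains over 400 Mbps.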

    I otherwise agree with you, in that running 100/20 service after you’ve run fiber is kind of silly. It actually costs more money to rate limit and shape a gigabit fiber connection than to just let that much bandwidth run free. But then you run smack dab into bufferbloat on the Wi-Fi, which is how 97% of users cross the last few feet of the last mile. But that rant is for another day.
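    To make the shaping point concrete, here is a minimal token-bucket sketch – the per-packet bookkeeping a rate-limited port pays and an unshaped gigabit port simply skips (illustrative, not any vendor’s implementation):

    ```python
    import time

    class TokenBucket:
        """Shape traffic to a target rate with a burst allowance."""
        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8            # bytes per second
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, packet_bytes):
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes
                return True                     # send now
            return False                        # queue or drop -> added latency

    # Shape a gigabit port down to 100 Mbps with a 64 KB burst.
    bucket = TokenBucket(rate_bps=100e6, burst_bytes=64 * 1024)
    print(bucket.allow(1500))                   # True until the bucket drains
    ```

    Every packet that fails that check sits in a queue, and that queue is where the added latency – and the bufferbloat – comes from.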
