Grants Should Look Forward

State Broadband Offices went through a process this year of deciding which technologies qualify for grant purposes as priority projects. A priority technology must meet the following requirement: provide broadband service that meets speed, latency, reliability, consistency in quality of service, and related criteria as the Assistant Secretary shall determine; and ensure that the network built by the project can easily scale speeds over time to meet the evolving connectivity needs of households and businesses and support the deployment of 5G, successor wireless technologies, and other advanced services.

NTIA chose a speed of 100/20 Mbps as the metric for the current-speed test of a priority technology. This is convenient, since 100/20 Mbps is the speed the legislation said a BEAD-funded technology must be able to deliver. Today’s blog asks if that definition is adequate.

One way to consider what the current speed of broadband should be is to look at historical trends. For many years, Cisco issued reports showing that demand for speed was growing at roughly 21% per year for residential broadband, and a little faster for business broadband. Cisco and others noted that the demand for broadband speeds had followed a relatively straight line back to the early 1980s.

It’s not hard to test the Cisco long-term growth rate. The following table applies a 21% growth rate to the 25/3 Mbps definition of broadband established by the FCC in 2015 (values rounded):

2015: 25/3 Mbps
2016: 30/4 Mbps
2017: 37/4 Mbps
2018: 44/5 Mbps
2019: 54/6 Mbps
2020: 65/8 Mbps
2021: 78/9 Mbps
2022: 95/11 Mbps

This table is somewhat arbitrary, since it assumes that broadband demand in 2015 was exactly 25 Mbps – but there was widespread praise of the new definition at that time, other than from ISPs who wanted to stick with the 4/1 Mbps definition. This simple table accurately predicted that we would be talking about the need to increase the definition of broadband to 100 Mbps download around 2022, which is exactly what happened. In 2022, the FCC wanted to change the definition of broadband to 100 Mbps download – a 21% compounded annual growth rate from the definition the FCC had established in 2015 – but the agency did not have a fifth Commissioner at the time and wasn’t able to make the change until March 2024.

I can’t think of any fundamental industry changes that would alter the historical growth rate in the near future. We’ve certainly seen a big demand to buy faster broadband products. The following table starts with the assumption that 100 Mbps was the right definition of broadband in 2022 and grows that number over time by the same 21% (values rounded):

2022: 100/20 Mbps
2023: 121/24 Mbps
2024: 146/29 Mbps
2025: 177/35 Mbps
2026: 214/43 Mbps
2027: 259/52 Mbps
2028: 314/63 Mbps
2029: 380/76 Mbps
…
2035: 1191/238 Mbps

What does this table suggest for BEAD and other grants? Consider the evaluation of Starlink, which is the technology closest to the line between meeting and not meeting the needed speed. Ookla released a report in the first quarter of 2025 showing that the median speed on Starlink was 104.71 Mbps download and 14.84 Mbps upload, and that only 17% of Starlink customers in the first quarter fully met the 100/20 Mbps speed threshold.

The table above suggests that the definition of broadband in 2025 should be something like 177/35 Mbps. It’s debatable whether Starlink meets the 100/20 Mbps test today, but it clearly doesn’t meet a test based on 2025 speed demand.

The BEAD forward-looking test is challenging because nobody defined what forward-looking means. I can think of two definitions that might make sense. One is to judge by the speeds that should be delivered when the grant project has been constructed, which for most BEAD projects will be the end of 2029. The growth table suggests that the speed defining broadband in 2029 will be around 380/76 Mbps.

I think a better forward-looking test for a government-sponsored grant is that the funded network should still be relevant a decade after the grant is awarded. The table suggests the desired speed in 2035 would be 1191/238 Mbps.

Naysayers will argue that the 21% growth in speed demand can’t be sustained. Consider a more conservative approach that roughly halves the historical growth rate, to 10% per year. That conservative approach would put the target speed for a grant-funded project at 195/39 Mbps in 2029 and 345/69 Mbps in 2035. I have nothing to go on except my gut, which tells me that 345/69 Mbps will feel inadequate in 2035.
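The projections above are simple compound annual growth and are easy to reproduce. Here is a minimal sketch, assuming the post’s 100/20 Mbps baseline in 2022 and comparing the historical 21% rate against the conservative 10% rate:

```python
# Sketch: project broadband speed demand by compound annual growth.
# Baseline assumption from the post: 100/20 Mbps was the right definition in 2022.

def project(base_mbps: float, annual_rate: float, years: int) -> float:
    """Compound a starting speed forward by `years` at `annual_rate`."""
    return base_mbps * (1 + annual_rate) ** years

BASE_YEAR = 2022
for year in (2025, 2029, 2035):
    n = year - BASE_YEAR
    # Historical 21% rate vs. the conservative 10% rate.
    fast = (project(100, 0.21, n), project(20, 0.21, n))
    slow = (project(100, 0.10, n), project(20, 0.10, n))
    print(f"{year}: 21% -> {fast[0]:.0f}/{fast[1]:.0f} Mbps, "
          f"10% -> {slow[0]:.0f}/{slow[1]:.0f} Mbps")
```

At 21% this reproduces the 177/35 (2025) and 380/76 (2029) figures; at 10% it gives roughly 195 Mbps (2029) and 345 Mbps (2035) download targets.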

8 thoughts on “Grants Should Look Forward”

  1. The calculation just uses the wrong numbers. There is a continuous increase in use, mostly from the increase in the number of devices in a home or business that use a cloud resource. You can cherry-pick a chart into looking like a linear increase, but it’s certainly not, and any broadband operator can show you true charts with the plateaus and jumps.

    The big increases come with sea changes in services.
    -streaming was a huge change with the biggest demand bump
    -streaming moved from lower bitrate 720p to higher bitrate 1080p, another decent sized bump
    -work from home was a small download bump but a bigger upload bump
    -4k streaming is having some impact, but the demand for 4k is nowhere near the previous bumps. Most of this bump comes from service plans passing 30Mbps, at which point devices will negotiate 4k. You might say the 4k demand sort of came with the 1080p demand, but a lot of people didn’t have fast enough service to see it.
    -game downloads, or rather tying up a connection for minutes to hours for multi-hundred-GB downloads, certainly move the average. We have users who use more data for game downloads in a day than they use for everything else in an entire month.

    what’s missing here is the next sea change.
    -VR? seems unlikely anytime soon
    -8k video? virtually no demand for the TVs
    -AI? Not really, not in residential or small business connections. Talking to an AI or using a cloud AI resource is small volumes of data.

    I know these new use cases come out of nowhere, so who knows what 2026-2030 will bring. But for broadband use I would expect a relatively small linear increase, primarily from more devices, with most of the increase on the upload side as more devices, cameras, and work-from-home demands drive uploads. However, I think you’ll see charts show more than that because of people switching to >30Mbps capable service plans and getting that 4k video bump.

    Trying to bring this long post back on topic: 100/20 is perfectly adequate until 2030 for at least 80% of people. That’s probably 95%, but I’m being conservative. Gamers will not be happy with it because game downloads take many hours, and the 20Mbps upload will get pushed a bit by some work-from-home people. Is the goal to fix 100% of problems, or 95%, or 80%?

    Broken record here, I know, but if grants are looking forward, then the ENTIRE grant process should be about building in at least 2 long haul carriers with connections to 2-4 IXs. That creates local competition because the capacity is available, and even if you don’t like capitalism, it makes the next round of grants much easier because there’s high-capacity long haul fiber to the area. That’s how you plan for the future, not by spending billions on short-term LEO services.

    • As an operator for over a decade, I couldn’t have said it better myself. The question of what will bring the next higher tier of bandwidth usage is very valid. What is it? We go into home after home doing installs and ask if people are streaming 4k. I don’t log the responses, but my guess is that fewer than 25% are streaming 4k; the rest are on 1080p. 8k is a myth. AI for the home user is going to reduce bandwidth slightly, not increase it. There are many new sensors in the home sending data, but that data is very small. It adds up, but not against a TV streaming 24/7, and we are supporting that 24/7 stream just fine at current speeds.

      Upload speeds have seen the biggest jump. We just moved our 25×10 to 50×50 on a trial basis, but I’m pretty sure it’s going to stick. If you want some bogus numbers, I can now show a chart that proves 73% of our client base “upgraded” from 25×10 to 50×50 in 2025. By that graph, in a few years we’ll need 1Gx1G… As Daniel said, it’s easy to cherry-pick data and graph it to tell the story that wants to be told.

      I was recently in a discussion with 2 other ISPs comparing peak usage numbers and got some very interesting info. Just a simple calculation: peak usage on the ISP divided by subscribed customer count.

      ISP #1 1430 subs — 6.7 Mbps peak avg
      ISP #2 4000 subs — 3.5 Mbps peak avg
      ISP #3 880 subs — 4.5 Mbps peak avg

      These are long-term, successful ISPs, not ones headed out of business. So what does it tell us? There is a huge variance in areas and requirements. What is good for one area is bogus for another.
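      The simple calculation above also works in reverse: the per-subscriber peak averages imply each ISP’s aggregate load at peak. A minimal sketch using the three ISPs quoted above:

```python
# Sketch: aggregate peak load implied by per-subscriber peak averages.
# Subscriber counts and per-sub peak Mbps are the figures quoted in the comment.

isps = {
    "ISP #1": (1430, 6.7),
    "ISP #2": (4000, 3.5),
    "ISP #3": (880, 4.5),
}

# Aggregate peak in Gbps = subscribers * per-sub peak Mbps / 1000.
peak_gbps = {name: subs * mbps / 1000 for name, (subs, mbps) in isps.items()}

for name, gbps in peak_gbps.items():
    print(f"{name}: ~{gbps:.1f} Gbps aggregate at peak")
```

      That works out to roughly 9.6, 14.0, and 4.0 Gbps respectively: very different uplink requirements for three successful networks, which underlines the commenter’s point about variance between areas.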

  2. Lawmakers have no technical abilities, for the most part. Vendors shape their thoughts, along with the occasional campaign contribution. Lobbyists, like locusts, are thick on Capitol Hill. The already-obsolete 100/20 gives legacy copper purveyors new life. I’m doubtful they can achieve these speeds for the most part, but they will claim they can. But Elon benefits most.

    Satellite is lowest on the totem pole when it comes to broadband. While it does a decent job, it is difficult to maintain bandwidth as customers sign up. Fiber is the only long-term answer. Our wireless friends will cry and scream, but nothing in the marketplace is better. Our floor should be 1Gig x 1Gig. Striving for mediocrity is either stupidity or the obvious, lobbying.

    Sadly, the US is still around 17th in the world for delivering broadband.
    Bobby Vassallo
    Dallas

    • There’s a completely unvalidated presumption that MOAR speed = more economic prosperity.

      I’ve yet to see any evidence at all, much less compelling evidence, that 1G to a rural home provides any economic benefit over 100M to that home. However, I can do very basic napkin math showing that the taxes to fund the 1G decreased the bank account of the family in that home.

      So why not 10G as the minimum? Or 40G? Or 100G? Where’s the data suggesting that being 17th in the world has any positive or negative effect at all? What revs the economic engine more: 101Mbps, 327Mbps, 785Mbps, or 1Gbps?

      We’re borrowing huge sums of money to keep up with the Joneses?

      • Thank you for that heady response, Dan. This discussion took an ominous turn when you reduced old Doug Dawson’s discussion to economics. I’m sure your 40Gig/100Gig was an exaggeration for emphasis, but point taken. I must agree that huge throughput should have limits, but the promised 100×20 is already obsolete. What happens in the near future when AI sucks up capacity? This is already starting.

        As I’m sure you are aware, the largest increase in internet usage is upload. Increased demand is probably due to the impact of Zoom/Teams video-conferencing. Regardless, it’s a thing.

        Fiber’s probably good for another 50 years. No builder wants to rinse and repeat every couple of years simply because no one saw usage increases coming. I say, play the long game here. Since Al Gore invented the internet, it has been an American creation. We should be at least competitive with the Third World. A young Einstein is out on a tractor somewhere in East Texas. With internet, he might cure cancer. Alas, his cell phone lets him operate at only 8×1 Megs. He won’t be joining on Zoom or Teams.

      • Yes, somewhat exaggerated, but not as much as you might think.

        I argue that 100×20 isn’t obsolete in rural places. If anything, the 20 should move towards 50, but that’s just to cover rural small businesses where multiple people are using Zoom/Teams calls. An HD Zoom call is under 4Mbps bidirectional, so a small business can comfortably run 4 Zoom calls simultaneously on a 100×20 connection (actual throughput, not ‘up to’). That’s an exceptionally unlikely scenario for a work-from-home user or a rural business. I would guess that 99.9% of locations run 2 or fewer simultaneous calls. This just isn't the data hog that necessitates gigabit to the home.

        AI has yet to start using heavy capacity to the home or small business, and from the position of someone using it daily, it's not going to, ever. It's not streaming video; it's mostly text exchanges. AI is blowing up datacenters and the interconnects between them for sure, but 'AI' workloads are LOWER throughput than even browsing the web/Facebook. This is also not the use case that necessitates gigabit to the home.

        Fiber only good for another 50 years? That's beyond speculation. Even streaming a 3D holo environment is under 50Mbps. We don't know what the future brings, but human eyes aren't improving. 1080p is already very comfortable for people, and in 50 years 4k is still going to look great because eyes aren't getting better. If anything, a higher % of users becoming today's 'power users' is likely, and so those long haul fiber paths are going to be stressed.

        As to the hidden Einsteins: there was no broadband in Einstein's life. If he has 8×1 on a cell phone, he can definitely join a meeting with 720p video today. And 100×20 is available to 98%+ of Americans terrestrially and 99.9%+ via LEO.

        Again, I'd challenge you to show how that 101st or 1000th Mbps improves outcomes for education, business, anything.

        America became great without broadband. Certainly broadband is a key component today and in the future, but show me a trend indicating that a 100Mbps threshold doesn't provide what's needed today and for the next 5-10 years. And as a counterpoint: are the countries with faster broadband to the home doing better? You might consider that there's a whole lot more that makes an economy than that 101st Mbps. Spending debt to chase those numbers instead of building roads or ports or factories is, I would argue, MEASURABLY harmful, while the negative effect of having only 100×20 vs 500×100 probably can't even be put on paper.

  3. Arguing about speed is largely about WISPs – all other terrestrial technologies can easily support gigabit download speeds (and, with a bit of work, 100 Mbps upload). The higher speed tiers aren’t needed for Netflix and Zoom, but they are sure nice if you need to upload or download a large file. Since it doesn’t cost cable and fiber providers more to offer a gig, why not offer it?

    Speed is less of an issue than capacity – both download and upload volume (GB/month) keep increasing at a fairly consistent pace of at least 20% annually. If a provider can’t keep up with that, peak-period performance quickly suffers. See Viasat for an object lesson in what happens to customer satisfaction when that occurs.

    WISPs may well get caught in an uncomfortable space between fiber and LEO – with the $40/month option for 100 Mbps for Starlink in some areas, many WISP offerings will be both slower and more expensive. (And reliability may be at least as good.)

    • I don’t think it really is a ‘WISP’ issue. WISPs have access to tech that can do multi-gigabit. We do it; we have multiple vendors delivering such products (Ubiquiti Wave, Cambium 4k & cnWave, Tachyon, Tarana, Ketson, Telrad, Intracom), as well as 5G NR options from Nokia and Baicells.

      Many don’t deploy these for economic reasons, but that’s the same reason the fiber isn’t in the ground. Wireless tech is dramatically faster to deploy and is capable. We deploy a number of the items on this list and sell gigabit services today. We also sell very rural products and have customers who are very happy and will not upgrade to higher speeds. They tend to be very loyal because we brought them affordable service when no one else did, we keep upgrading them, and we have a local, personal touch that the big corps just don’t.

      And on the other side, having 2.5-10Gbps capacity on the *PON last mile implies nothing about the uplink size. We buy ‘carrier’ fiber and feed our PoPs with that. fttx companies *can* do that, but they tend to mid-haul from a central point and build a tree topology, so they quite often have less uplink capacity – i.e., a fttx provider will often have only 10-20Mbps ‘dedicated’ to a customer through oversell on their head-end links. A WISP often has a bottleneck at the AP or, with modern last-hop tech, at the backhaul. A full Wave site can handle >100 subscribers and push 8-10Gbps pretty easily, but it is often behind a 1.2-2.5Gbps backhaul, so that’s the constraint. A typical WISP will struggle to fill their upstream connections once they grow beyond 1G uplinks, because backhaul tech prices get high fast.
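      The oversell arithmetic described here is easy to sketch. A minimal example, where the link capacities and subscriber counts are hypothetical illustrations (not vendor specs) of the head-end vs. backhaul bottleneck:

```python
# Sketch: effective "dedicated" per-subscriber bandwidth behind a shared link.
# All capacities and subscriber counts below are hypothetical illustrations.

def dedicated_mbps(link_capacity_mbps: float, subscribers: int) -> float:
    """Bandwidth each subscriber would get if all transmitted at once."""
    return link_capacity_mbps / subscribers

# Hypothetical fttx head-end: a 10 Gbps uplink shared by 600 homes.
fttx = dedicated_mbps(10_000, 600)

# Hypothetical WISP site: the AP can push 8 Gbps but sits behind a
# 2 Gbps backhaul, so the backhaul is the binding constraint for 100 subs.
wisp = dedicated_mbps(min(8_000, 2_000), 100)

print(f"fttx: ~{fttx:.0f} Mbps per home; WISP: ~{wisp:.0f} Mbps per sub")
```

      In both cases the shared-link division, not the last-mile technology, sets the floor – which matches the point that the real constraint is funding for uplink and backhaul capacity.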

      The point being that this is really about funding differences. WISPs are very often self-funded and therefore limited by that funding. fttx is mixed, but a LOT of fiber networks are built with government funding. The comparison between a WISP and a FISP is at least 80% funding. The WISP can deliver the last hop to low-density areas at a fraction of the cost of fiber and in a tiny fraction of the timeline, AND be comparatively fast if the funding supports it. The other 20% of the gap for a WISP is the long haul fiber, i.e., the ability to buy 10-100G services in the market they serve and not have to shoot it long distances wirelessly.

      Fiber isn’t cost-competitive in rural areas, else WISPs simply wouldn’t exist at all. This goes for LEO as well: the government is funding these models either directly or indirectly while doing its best to avoid funding WISPs, yet WISPs still remain competitive in the market. When you consider the ‘hostile’ environment WISPs face and still hang around, it might change your perspective on their true value. Take away all the funding and the WISP is the likely winner because of time to market. WISPs are facing hard times because they are losing a % of low-density customers to LEO, but I expect many to hang on and get through the subsidized period of the other offerings.
