Let's Stop Talking About Technology Neutral

A few weeks ago, I wrote a blog about the misuse of the term overbuilding. Big ISPs use the term to give politicians a phrase they can use to shield those companies from competition. The argument is always framed as saying that federal funds shouldn't be used to overbuild in places where an ISP is already providing fast broadband. What the big ISPs really mean is that they don't want competition anywhere, even where they still offer outdated technologies or have neglected their networks.

Today I want to take on the phrase 'technology neutral'. This phrase is being used to justify building technologies that are clearly not as good as fiber. The argument has been made repeatedly in recent years that grants should be technology neutral so as not to favor only fiber. The phrase was used to justify allowing Starlink into the RDOF reverse auction, it has been used to justify allowing fixed wireless technology to win grants, and lately it's being used more specifically to argue that fixed wireless using unlicensed spectrum should be eligible for BEAD grants.

The argument is that technologies like satellite or fixed wireless using unlicensed spectrum should qualify for grants since the technologies are 'good enough' when measured against the speed requirements in the grant rules.

I have two arguments to counter that justification. The first is that the only reason the technology neutral argument can be raised at all is that politicians set the speed requirements for grants at ridiculously low levels. Consider all of the current grants that set the speed requirement at 100/20 Mbps. The 100 Mbps download requirement is an example of what I've recently called underbuilding – it allows for building a technology that is already too slow today. At least 80% of folks in the country today can buy broadband from a cable company or a fiber company. Almost all of the cable companies offer download speeds as fast as a gigabit, and even in older cable systems the maximum speeds are faster than 100 Mbps. Setting a grant speed requirement of only 100 Mbps download tells rural folks that they don't deserve broadband as good as what is available to the large majority of people in the country.

The upload speed requirement of 20 Mbps was a total political sellout. It was set to appease the cable companies, many of which struggle to beat that speed. Interestingly, the big cable companies all recognize that their biggest market weakness is slow upload speeds, and most of them are working on plans to implement a mid-split upgrade or an early version of DOCSIS 4.0 to significantly improve upload speeds. Within just a few years, the 20 Mbps upload requirement is going to feel like ancient history.

The BEAD requirement to provide only 20 Mbps upload is ironic for two reasons. First, in cities, the cable companies will have implemented much faster upload speeds by the time anybody builds a BEAD network. Second, the cable companies that are pursuing grants are almost universally using fiber to satisfy them; cable companies rarely build coaxial copper plant for new construction. This means the 20 Mbps speed was set to protect cable companies against overbuilding, not as a forward-looking, technology neutral speed.

The second argument against technology neutrality is that some technologies are clearly not good enough to justify receiving grant dollars. Consider Starlink satellite broadband. It's a godsend for folks who have no alternatives, and many people rave about how it has solved their broadband problems. But the overall speeds are far slower than what was promised before the technology was launched. I've seen a huge number of Starlink speed tests that don't come close to the 100/20 Mbps speed required by the BEAD grants.

The same can be said for FWA using cellular spectrum. It's pretty decent broadband for folks who live within a mile or two of a tower, and I've talked to customers who are seeing speeds significantly in excess of 100/20 Mbps. But customers just a mile farther from a tower tell a different story, with download speeds far under 100 Mbps. A technology with such a small coverage area does not meet the technology neutral test unless a cellular company promises to pepper an area with new cell towers.

Finally, and this is a point that always gets pushback from WISPs, fixed wireless technology using unlicensed spectrum has plainly not been adequate in most places. Interference from the many users of unlicensed spectrum means that broadband speeds vary depending on whatever is happening with the spectrum at a given moment. That interference also means higher latency and much higher packet loss than landline technologies.

I've argued until I am blue in the face that grant speed requirements should be set for the speeds we expect a decade from now, not for the bare minimum that makes sense today. It's ludicrous to award grant funding to a technology that barely meets the 100/20 Mbps requirement when that network probably won't be built until 2025. The real test for the right technology for grant funding is what the average urban customer will be able to buy in 2032. It's hard to think that speed won't be something like 2 Gbps/200 Mbps. If that's what will be available to the large majority of households in a decade, it ought to be the technology neutral definition of speed to qualify for grants.
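To put a rough number behind that projection, here is a minimal back-of-the-envelope sketch. The starting speeds (roughly a gigabit download and 100 Mbps upload as a typical urban tier today) and the 7% annual growth rate are illustrative assumptions rather than data from any source; the point is only that even modest, steady growth lands near the 2 Gbps/200 Mbps figure.

```python
# Minimal compound-growth sketch for "what will urban customers buy in 2032?"
# The base speeds and the growth rate are illustrative assumptions, not measurements.

def projected_speed(base_mbps: float, annual_growth: float, years: int) -> float:
    """Project a broadband speed forward with simple compound growth."""
    return base_mbps * (1 + annual_growth) ** years

if __name__ == "__main__":
    years = 10        # roughly 2022 to 2032
    growth = 0.07     # assumed annual growth in the typical purchased speed
    download_2032 = projected_speed(1000, growth, years)  # ~1 Gbps download today
    upload_2032 = projected_speed(100, growth, years)     # ~100 Mbps upload today
    print(f"Projected 2032 tier: {download_2032:.0f}/{upload_2032:.0f} Mbps")
    # With these assumptions the result lands near 2,000/200 Mbps, in line with
    # the 2 Gbps/200 Mbps figure above; a faster assumed growth rate pushes it higher.
```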

8 thoughts on "Let's Stop Talking About Technology Neutral"

  1. Although I am enjoying your blog tremendously, and have in fact linked back to it from my blog, we disagree on a few points here.

    1) The most important thing about fiber, in my mind, is not the actual "capacity" – I don't care for how we use the word speed here – but the inherently lower latencies it offers, even compared to DOCSIS-4-LL. Yes, reduced RTTs between users are barely on the table, and reduced RTTs to the internet are somewhat there, but both are obscured by the "bandwidth" discussion. Unfortunately, many forms of fiber being promoted by fiber ISPs use request/grant protocols rather than separate tx/rx pairs (or waves), which makes for worse fiber than what is otherwise possible vis-à-vis the more bog-standard (and cheaper AND interoperable) fiber ethernet links we use in data centers.

    Similarly, more cross-connects within cities at internet exchanges, for VoIP/videoconferencing traffic, would lead to vastly reduced latencies for people talking to each other rather than to the internet.

    I would like it very much if fiber advocates started publishing typical RTTs (especially under load) of their particular favorite version of it.

    2) It has been pretty conclusively shown in the last decade that bandwidth greater than 25 Mbit does not help web page downloads much, due to the RTT problem (a rough arithmetic sketch of this appears at the end of this comment). I do not expect this to change much in the future.

    Somewhat related is that the human eye cannot consume much more than 4K video, and even if your typical poverty-stricken family of four had 100 Mbit down, they cannot afford four 4K TVs to watch it all.

    Average usage of a gigabit link today is not much higher than the average usage of even a 25 Mbit link. As a predictor of bandwidth needs, it is certainly nice to have a large game update finish quickly, but day to day, more bandwidth does not help most existing traffic. Consistently low latency *does*.

    3) Certainly upload speeds MUST be improved to better handle videoconferencing traffic, but even a mere 10 Mbit of upload would in general be enough to sustain a decent frame rate for multiple people in a household, if the queues are well managed.

    I will go around citing the BITAG latency report until I am blue in the face, and ask more fiber folk to think about real-world traffic, and about what traffic might, in the future, actually require more than what we have today.

    Where does this leave the grand plans for a massive, uneconomic fiber rollout to the entire USA? If I could somehow shift the focus to first putting better routers with better WiFi out there, we could then start rationally thinking about how much more bandwidth is needed for everyone, and how to roll it out.

    I would definitely like to see fiber connecting every city from at least 2, preferably 3, directions, and every passing from that fiber, every few miles across long stretches of the USA, made available to local ISPs and wireless/cellular providers, but that is a narrower, and far more achievable, goal than what is often being proposed. We can do a great deal more to improve *latency* across the USA with fiber and the BEAD program.
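    A rough arithmetic sketch of the RTT point in (2). The page size, round-trip count, and RTT below are purely illustrative assumptions, not measurements; the point is that once the raw transfer time is small, round trips dominate page load time, and only lower RTT helps.

    ```python
    # Toy model of page load time: sequential round trips plus raw transfer time.
    # All inputs are illustrative assumptions, not measurements.

    def page_load_seconds(page_mb: float, round_trips: int, rtt_ms: float, mbps: float) -> float:
        transfer = page_mb * 8 / mbps             # seconds spent moving bytes
        waiting = round_trips * rtt_ms / 1000     # seconds spent waiting on round trips
        return transfer + waiting

    if __name__ == "__main__":
        for mbps in (25, 100, 1000):
            t = page_load_seconds(page_mb=2.0, round_trips=20, rtt_ms=50, mbps=mbps)
            print(f"{mbps:>5} Mbps -> {t:.2f} s")
        # With these assumptions: roughly 1.64 s at 25 Mbps, 1.16 s at 100 Mbps, and
        # 1.02 s at 1 Gbps; most of the remaining time is round trips, not transfer.
    ```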

  2. To tackle two of your main points about what "technology neutral" means: I agree, it is an attempt to make the point that FWA and cellular are often good enough. It is the conflation of *latency* with "bandwidth" that gets my goat. Cellular in particular has incredibly high jitter and latency, and that is the real problem with it. I note that in my team's efforts with "cake-autorate" we have made those problems much less observable, and have made both cellular and Starlink scale to more users and traffic types.

    Yet the cable companies’ 100/20 services often have really terrible latency also, and the too-common policer mechanisms in many fiber deployments can really mess with interactive traffic as well.

    Starlink no longer needs RDOF funding IMHO and will cover the most remote areas of the planet handily, and enterprising users will find ways to help their neighbors and cover their towns. Their network continues to improve.

    A goodly percentage of non-cellular FWA providers (why do you lump these together?) are providing much better service, latency-wise, leveraging tools like those from Preseem, Cambium, and my own LibreQoS. They are also able to deliver 50/50 service quite easily nowadays (one hilarious example is how well PhillyWisper is doing in Comcast's home town), all the way up to a gigabit to an MDU, and I do see a lot of 200/20 and better as well. I wish more fiber folk were paying attention.

    I wish, on the whole, that some of the debate would shift to IPv4 address scarcity and to reliability in terms of MTBF and MTTR, and that a balanced strategy for making a better internet overall would emerge.

  3. Mr. Dawson, I think you may be missing the point. The IIJA – which provided the funding for NTIA's BEAD program – is technology neutral (oops, sorry, there I said it), and like it or not, nothing in it calls for a fiber-only approach. BEAD has $42B+ available for its stated purpose of closing the digital divide. A recent analysis by economist Bill Lehr (and commissioned by WISPA) shows that won't be enough – not by a large margin – if NTIA continues to insist on a fiber-only approach.

    So we've got a decision to make: if our national priority is to close the digital divide as quickly as possible, using available BEAD funding, then ISPs need the flexibility to choose the right tool for the right job – including fiber, of course – which was precisely what Congress intended in the IIJA. So let's call the question: is our priority to close the digital divide as soon as possible? Or is BEAD really nothing more than a billboard for a single technology, no matter what Congress intended, or what the Treasury can now afford?

    BTW, usage metrics show that our actual bandwidth demand – especially residential – is well within the reach of non-fiber networks, so any argument that we all somehow need a gig just isn't borne out by reality. If the metrics said otherwise I might be closer to your camp. But there's a reason all infrastructure – power, water, sewer, telephone, roadway – is traffic engineered. We don't need, and won't use, interstate highways as our residential driveways, so we build interstates where they're actually needed. That's why "the right tool for the right job" – and technology neutrality – belongs in this discussion.

    • What's important to bear in mind is that BEAD is looking at the long term as well as the short term, as it should. The digital divide was formed because of policy and planning shortcomings that delayed modernizing the legacy analog copper twisted pair that reached nearly every doorstep for voice telephone service into fiber for a new era of Internet protocol-based telecommunications. This is the infrastructure challenge America has failed to meet in a timely way, and must meet if it is to have world-class advanced telecommunications infrastructure. Public dollars should be invested with this goal in mind, and in infrastructure that has the capacity to accommodate future carrying needs and won't quickly become obsolete as DSL over copper did.

  4. Almost six months after the initial filing of the new map data, we have yet to receive a single challenge to our data. The most likely reason? We have happy clients in our area who aren't desperately looking for good internet service. 90% of them are on 25×10 Mbps plans and 100% of them are on unlicensed fixed wireless. Engineered correctly and well maintained is where it's at, plus an eye on the future for the slow increase in actual real bandwidth needs. We went from 12×5 to 25×10 in 2018; now we're starting to see a few people asking for our 50×25, and we are currently building to support 100×25, but I expect that curve to flatten. The people who legitimately needed to go from 25 to 50 are hitting around 30 Mbps now.

    The people in our area have lives to live and good internet to use; I don't think anyone has a clue that there is a new FCC mapping system they could be weighing in on, because we provided the blood, sweat, tears, and dollars to create a solid infrastructure. But yeah, step aside, the big fiber company is bringing "real" internet to the poor, bandwidth-starved people.

  5. The U.S. got off on the wrong track here in the 1996 Telecommunications Act, where it defined advanced telecommunications capability as "without regard to any transmission media or technology, as high-speed, switched, broadband telecommunications capability that enables users to originate and receive high-quality voice, data, graphics, and video telecommunications using any technology." 47 U.S.C. § 1302(d)(1)

    This was the definitive policy moment when it was decided not to set the goal of rapidly modernizing the existing twisted pair copper delivery infrastructure to fiber for the next century. Instead of an FTTP infrastructure standard, the United States chose to adopt a service standard supported by "any technology." It has been paying the price ever since, debating what technology can adequately provide that standard of service, a debate that has delayed the needed fiber modernization that should have been largely completed by 2010.
