
The Fantasy of Measuring Speeds

The FCC issued an order on January 19 that takes the next step towards implementing better broadband data collection and mapping. I'll discuss some of the details of that order in an upcoming blog. But today's blog asks a more fundamental question about basing broadband policies on broadband speeds. For most of the broadband technologies widely deployed in the US, it's challenging or impossible to accurately measure broadband speed.

The two most challenging technologies to measure are DSL and fixed wireless. Consider the following issues that impact the speed of DSL at a given customer:

  • A telco might be using multiple vintages and types of DSL in the same market. How do you report speeds in a market when some types of DSL are five times faster than others?
  • DSL signal strength decreases with distance. A home at the end of a long city block might have significantly slower speeds than a home at the start of the block (see the loop-length sketch after this list).
  • The size (gauge) of the copper wire in the network influences speed. In a city there is likely to be a mix of copper ranging from 16-gauge to 24-gauge, and thinner wire attenuates the signal more quickly.
  • The age and quality of the copper matter since copper wire slowly degrades over time, particularly if the copper comes into contact with the elements. This is a local issue, house by house and block by block.
  • The backhaul network used to bring broadband to a neighborhood can be undersized. If there are too many customers being served in a node (oversubscription), then speed suffers.
  • Telcos don’t deploy technology consistently. Two adjoining neighborhoods might be using the same vintage of DSL, but one has newer and faster cards in the neighborhood cabinet.
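To make the distance point concrete, here is a minimal sketch of how loop length alone can swing achievable DSL speed. The breakpoints are rough, illustrative ADSL2+ figures chosen for discussion, not measurements from any particular network, and real results shift further with wire gauge, copper condition, DSL vintage, and backhaul:

```python
# Rough, illustrative ADSL2+ downstream estimate by copper loop length.
# These breakpoints are approximations for discussion only.

def rough_adsl2plus_speed_mbps(loop_length_km: float) -> float:
    """Return an approximate downstream speed for a given loop length."""
    breakpoints = [
        (0.3, 24.0),   # very short loop: near the technology maximum
        (1.0, 18.0),
        (2.0, 10.0),
        (3.0, 5.0),
        (4.0, 2.0),
        (5.0, 1.0),    # long rural loop: barely usable
    ]
    for max_km, mbps in breakpoints:
        if loop_length_km <= max_km:
            return mbps
    return 0.5  # beyond ~5 km the signal is marginal

if __name__ == "__main__":
    for km in (0.2, 1.5, 3.5, 5.5):
        print(f"{km:>4} km loop -> ~{rough_adsl2plus_speed_mbps(km)} Mbps")
```

Two homes served from the same cabinet can land on very different rows of that table, which is exactly why a single reported speed for a neighborhood is misleading.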

You can make a similar list of issues that affect the speeds delivered to a customer using wireless technologies:

  • The specific spectrum being used matters because each band of spectrum carries a different amount of data depending on its frequency (see the path-loss sketch after this list).
  • Environmental factors like foliage or being blocked by a neighboring home have a huge impact on data speeds at a given customer. Speed also varies by outside temperatures, humidity, and weather events like rain.
  • Distance is important, just like with DSL. A customer who is further away from the transmitter will experience a slower speed.
  • Wireless technology is subject to varying degrees of interference that can vary widely during the day.
  • Lack of adequate backhaul and oversubscription can be deadly to wireless speeds.
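As a rough illustration of why both the band and the distance matter, the sketch below computes free-space path loss, the textbook best-case signal loss in open air; real deployments lose considerably more to foliage, buildings, and weather. The bands and distances shown are just examples:

```python
import math

def free_space_path_loss_db(distance_km: float, freq_mhz: float) -> float:
    """Best-case (free-space) signal loss in dB; foliage, walls, and
    weather add more loss on top of this."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

if __name__ == "__main__":
    # Example bands (MHz) and distances (km) -- illustrative only.
    for freq in (900, 2400, 3550, 5800):
        row = "  ".join(f"{free_space_path_loss_db(d, freq):5.1f} dB"
                        for d in (1, 3, 8))
        print(f"{freq:>4} MHz at 1/3/8 km: {row}")
```

Every added decibel of loss pushes the radio to a less aggressive modulation, so the same customer can see very different speeds depending on band, distance, and what is in the way.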

We tend to think of cable company networks as being more homogeneous, but oftentimes they are not. We've done speed tests in cities and found some neighborhoods where customers get more than 100% of advertised speeds and other neighborhoods where homes are getting less than a quarter of the advertised speeds. There are a variety of network issues that might cause a big difference in speeds. Are cable companies going to be honest about network inadequacies in FCC reporting and report both the slow neighborhoods and the fast ones? They aren't that honest today.

All networks, including fiber, can be negatively impacted during times of heavy neighborhood usage. What’s the right speed on a broadband network? The speed that can be obtained at 4:00 in the morning when the network is empty or the speed at 8:30 in the evening when the network is bogged down with the heaviest neighborhood usage?
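As a back-of-the-envelope illustration of the peak-hour question, the sketch below divides a shared node's capacity among the subscribers who are actively using it at the busy hour. The node size, plan speed, and activity fraction are made-up numbers for discussion, not figures from the post or any real network:

```python
# Illustrative oversubscription arithmetic -- all numbers are made up.

def peak_speed_per_active_user(node_capacity_mbps: float,
                               subscribers: int,
                               plan_mbps: float,
                               peak_active_fraction: float) -> float:
    """Bandwidth available to each active subscriber at the busy hour."""
    active_users = max(1, round(subscribers * peak_active_fraction))
    return min(plan_mbps, node_capacity_mbps / active_users)

if __name__ == "__main__":
    # A hypothetical 1 Gbps node selling 100 Mbps plans, with 10% of
    # subscribers pulling hard on the network at the same time.
    for subs in (100, 300, 600):
        speed = peak_speed_per_active_user(1000, subs, 100, 0.10)
        print(f"{subs:>3} subscribers -> ~{speed:.0f} Mbps each at peak")
```

The same network that delivers the full advertised speed at 4:00 AM can deliver a fraction of it at 8:30 PM purely because of how many neighbors are online.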

If you've never done it, I suggest you run multiple speed tests, back-to-back. I am on a Charter cable network, and I recently ran speed tests for an hour and saw the reported speed vary by as much as 50%. What speed is my broadband connection? The cable company will claim it's the fastest possible speed (or might even claim a marketing speed that is faster than my fastest measured speed). But is that really the speed? There is an argument that the slowest speed I encounter during the day defines a limit on how I can use broadband.
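If you want to reproduce that experiment, here is a minimal sketch using the third-party speedtest-cli Python package (my choice of tooling, not something the post prescribes) to run several back-to-back tests and report the spread:

```python
# Minimal sketch: run several back-to-back speed tests and report the spread.
# Assumes the third-party `speedtest-cli` package (pip install speedtest-cli).
import statistics
import speedtest

def run_tests(count=5):
    results = []
    for _ in range(count):
        st = speedtest.Speedtest()
        st.get_best_server()                  # picks a nearby test server
        results.append(st.download() / 1e6)   # bits/sec -> Mbps
    return results

if __name__ == "__main__":
    speeds = run_tests()
    low, high = min(speeds), max(speeds)
    print(f"min {low:.1f} Mbps, median {statistics.median(speeds):.1f} Mbps, "
          f"max {high:.1f} Mbps, swing {100 * (high - low) / high:.0f}%")
```

Even a handful of samples taken minutes apart will usually show a spread, which is the point: there is no single number that is "the" speed.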

It's absolutely impossible to define a speed, other than perhaps for a customer that has a dedicated fiber connection where the ISP removes all of the factors that might decrease speeds (and such a connection is expensive). The speed on all other broadband products varies – some a little, such as a GPON fiber connection, and some a lot, like DSL.

The FCC is about to embark on a grand new scheme to force ISPs to better define and report broadband speeds. It’s bound to fail. If I can’t figure out the speed on my cable modem connection, then the FCC is on a fool’s mission.

The trouble with the FCC’s approach is that the agency wants an ISP to report actual speed by clusters of homes – today it’s by Census block and soon it will be polygons. But this is a waste of everybody’s time when nobody can even define the speed for an individual home. Further, speed is not the only issue that affects broadband performance. The FCC is ignoring that latency and jitter can have more to do with a bad broadband experience than broadband speed. No matter what the FCC tries to do to improve reporting, any speeds reported by ISPs are going to mostly be pure fantasy – and that’s true even if ISPs strive for honesty, which nobody expects. We need to find a better way to define broadband because we are basing policies and grants on an imaginary set of reported broadband speeds.
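To illustrate the latency and jitter point, here is a minimal sketch that estimates round-trip latency by timing TCP connections and reports jitter as the average change between successive samples. The target host, port, and sample count are arbitrary examples, not anything the post specifies:

```python
# Minimal sketch: estimate latency via TCP connect times and report jitter
# as the mean absolute difference between successive samples.
import socket
import statistics
import time

def connect_time_ms(host, port=443, timeout=2.0):
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

def measure(host="example.com", samples=20):
    times = [connect_time_ms(host) for _ in range(samples)]
    jitter = statistics.mean(abs(a - b) for a, b in zip(times, times[1:]))
    print(f"latency: median {statistics.median(times):.1f} ms, "
          f"jitter ~{jitter:.1f} ms over {samples} samples")

if __name__ == "__main__":
    measure()
```

A connection with decent raw speed but high latency or jitter can still make video calls and gaming miserable, which is why speed alone is a poor proxy for the broadband experience.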

4 replies on “The Fantasy of Measuring Speeds”

Before the network is deployed, any speed claim is very much based on assumptions about the last-mile technology and network architecture. Once it is deployed, though, there are ways to measure performance. We are not a CAF II recipient, but there is testing equipment out there to measure the performance of the production network at specified times. We use this tool in our Calix WiFi routers, but there are other systems out there.

What are your thoughts on the CAF II testing mechanisms and inputting that data into the process? To an extent it seems like a burden to report this, and it would definitely impact smaller providers more than larger ones, but it seems like it would provide some clarity to the topic. If anything were to be implemented, it would really need to be a light-touch or automated reporting process versus some massive undertaking requiring a lot of man-hours or a dedicated person to assemble.

https://www.calix.com/content/dam/calix/marketing-documents/public/connexions_2019/CAF_%20Are-you-Ready.pdf

Hey Doug, do you have a favorite measurement tool? It’s long been rumored that providers (especially cable) treat speed test traffic specially. I can attest that the xfinity speed tools *always* show better numbers. (You could imagine them being better at getting echoes from nearby servers, but…I’m skeptical that’s all there is to it…)

Also, we need to get some new term to describe oversubscription that causes degradation for end users, since oversubscription is generally an expected technique to avoid poor equipment utilization. Maybe “toxic oversubscription?” 🙂 🙂

If the ISP offers its own speed tools, then it could make sure to test to a server that is perfectly situated within the network so as to give the best possible speeds. I consider that an "OK, if you tell me that's what you're doing" kind of thing.

It is also possible to give special consideration to certain types of traffic. Cell companies prioritize traffic associated with cellular voice calls. It is possible to prioritize traffic based on other criteria…such as source or destination IP address…doing this would ensure that the test traffic would sail right through any congestion…and to me, that would not be honest.

To me, this is not really an engineering problem. I think it’s a definition, granularity and choices problem.

I did this for a living for a cell company. The engineering issues were relatively easy. Since the test results were for internal use only (network troubleshooting), the definition, granularity, and choices issues were not terribly difficult either. It just took time to develop a data set large enough to be useful in decision making.

I can’t imagine what it would be like with marketing, political and ethical issues added in. 🙂
