
Packet Loss and Broadband Performance

In a recent FierceWireless article, Joe Madden looked at the various wireless technologies he has used at his home in rural central California. Over time he has subscribed to a fixed wireless network using WiFi spectrum, cellular LTE broadband, Starlink, and a fixed wireless provider using CBRS spectrum. A lot of rural folks can describe a similar path, having tried every broadband technology available to them.

Since Joe is a wireless expert who works at Mobile Experts, he was able to analyze his broadband performance in ways that are not easily understood by the average subscriber. Joe came to an interesting conclusion – the difference in performance between various broadband technologies has less to do with speed than with the consistency of the broadband signal.

The average speed tests on the various products ranged from 10/2 Mbps on fixed wireless using WiFi spectrum to 117/13 Mbps on Starlink. But what Joe found was a huge difference in consistency as measured by packet loss. Fixed wireless on WiFi spectrum had packet loss of 8.5%, while packet loss on fixed wireless using CBRS spectrum dropped to 0.1%. The difference is stark and comes down to the interference that plagues unlicensed spectrum compared to the cleaner signal on licensed spectrum.

But just measuring packet loss is not enough to describe the difference in the performance of the various broadband connections. Joe also looked at the share of lost packets that took longer than 250 milliseconds to be re-delivered. That will require some explanation. Packet loss in general describes the percentage of data packets that are not delivered on time. In any Internet transmission, some packets are always lost somewhere in the routing to customers, although most packets are lost due to the local technology at the user end.

When a packet doesn’t show up as expected, the Internet protocols ask for that packet to be sent again. If the re-sent packet gets to the user quickly enough, it’s the same, from a user perspective, as if the packet had been delivered on time. Joe says that re-sent packets that don’t arrive until after 250 milliseconds are worthless, because by then the rest of the stream has already been delivered to the user. The easiest way to visualize this is to look at the performance of Zoom calls for folks using rural technologies. Packets that don’t make it on time leave a gap in the video stream that shows up as fuzziness and poor resolution in the picture.
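
To make the two measurements concrete, here is a small sketch of how they relate. The numbers are made up for illustration; this is not Joe’s data or his measurement method.

```python
# Illustrative numbers only - not Joe's data or his measurement method.
LATE_THRESHOLD_MS = 250   # re-sent packets slower than this are useless for real-time video

packets_sent = 1000       # total packets in the sample window

# For each lost packet: None = never re-delivered, otherwise how many
# milliseconds late the re-sent copy finally arrived.
resend_delays_ms = [None, 40, 120, 600, 30, 310, None, 90, 275, 55]

packets_lost = len(resend_delays_ms)
loss_pct = 100 * packets_lost / packets_sent

resent = [d for d in resend_delays_ms if d is not None]
late = [d for d in resent if d > LATE_THRESHOLD_MS]
late_pct = 100 * len(late) / len(resent) if resent else 0

print(f"Packet loss: {loss_pct:.1f}%")
print(f"Re-sent packets later than {LATE_THRESHOLD_MS} ms: {late_pct:.0f}%")
```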

Packet loss is the primary culprit for poor Zoom calls. Not receiving all of the video packets on time is why somebody on a Zoom call looks fuzzy or pixelated. If the packet loss is high enough, the user is booted from the Zoom call.

The difference in the percentage of packets that are delivered late between the different technologies is eye-opening. On the fixed wireless network using WiFi spectrum, an astounding 65% of re-sent packets took longer than 250 ms. Cellular LTE broadband was almost as bad at 57%. Starlink was better at 14%, while fixed wireless using CBRS was lowest at 5%.

Joe is careful to point out that these figures only represent his home and not the technologies as deployed everywhere. But with that said, there are easily explainable technology reasons for the different levels of packet delay. General interference plays havoc with broadband networks using unlicensed spectrum. Starlink adds delay simply from the extra time it takes signals to travel between the ground and the satellite in each direction. The low packet loss on the CBRS network might be due to having very few neighbors using the new service yet.
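
Just the physics of the satellite hop sets a floor on that delay. Here is a quick back-of-the-envelope sketch, assuming a Starlink-like low-earth-orbit altitude of roughly 550 km, which is my assumption rather than a figure from Joe’s article:

```python
# Back-of-the-envelope only: assumes a Starlink-like LEO altitude of ~550 km
# and straight-overhead passes; real slant paths and processing add more.
C_KM_PER_MS = 299_792.458 / 1000   # speed of light in km per millisecond
ALTITUDE_KM = 550

one_hop_ms = ALTITUDE_KM / C_KM_PER_MS   # user (or gateway) up to the satellite
one_way_ms = 2 * one_hop_ms              # up to the satellite, down to the gateway
round_trip_ms = 2 * one_way_ms           # and back again for the reply

print(f"Minimum added round-trip from the satellite path: ~{round_trip_ms:.1f} ms")
```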

Joe’s comparison doesn’t include other major broadband technologies. I’ve seen some cable networks with high packet loss due to years of accumulated repairs and unresolved issues in the network. The winner of the packet loss comparison is fiber, which typically has incredibly low packet loss and also a quick recovery time for lost packets.

The bottom line from the article is that speed isn’t everything. It’s just one of the characteristics that define a good broadband connection, but we’ve unfortunately locked onto speed as the only important characteristic.

2 replies on “Packet Loss and Broadband Performance”

Packet loss due to bad wiring is one problem.

However, packet loss is a natural and required feature for the Internet to do proper congestion control and to keep buffer sizes short enough for interactivity. Your Zoom call stretching out for ages, or glitching, when you do a big upload or download is a symptom of bufferbloat.

Zoom and other video conferencing systems can withstand a high degree of random packet loss by design, but not bursty packet loss (which does distort the screen) or widely variable delays (jitter). Without good queue management, the delays in Zoom can unfortunately stretch out to seconds. I use things like galene + fq_codel (RFC 8290) or cake on the bottleneck links, leveraging “smart queue management” (SQM) to manage them, and Zoom is always good due to the FQ and the relatively low bandwidth required.
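
As a rough sketch of what that looks like on a Linux router with iproute2 (the interface name and the shaping rate below are placeholders for your own bottleneck link, and the commands need root):

```python
# Rough sketch, not a turnkey config: assumes a Linux router with iproute2 and
# the cake/fq_codel qdiscs available; must be run as root.
import subprocess

IFACE = "eth0"          # placeholder: your bottleneck-facing interface
SHAPE_RATE = "18mbit"   # placeholder: a bit below the real link rate

def tc(cmd: str) -> None:
    """Print and run a tc command so the configuration is visible."""
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

# Shape slightly below the link rate with cake so the queue builds here,
# where it is actively managed, instead of in the modem or radio.
tc(f"tc qdisc replace dev {IFACE} root cake bandwidth {SHAPE_RATE}")

# Alternative without shaping: plain fq_codel still gives per-flow queuing
# and keeps a latency-sensitive flow like Zoom from waiting behind a big upload.
# tc(f"tc qdisc replace dev {IFACE} root fq_codel")
```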

Some links:

Broadband Internet Technical Advisory Group's recent latency report: https://www.bitag.org/documents/BITAG_latency_explained.pdf

TTI/Vanguard talk using jugglers to explain VoIP vs congestion control issues:

It’s nice to have this spelled out. And, it’s totally not a surprise.

Latency, jitter, retransmit times, buffering, server response… these are the things that make a difference in actual performance, and, excepting server response time and buffering, they are all going to tend to be worse with wireless connections, which are subject to additional, unpredictable congestion and interference problems. And there are very complicated interactions with things like TCP window size dynamics, QoS, or more bespoke UDP policies that absolutely impact overall performance.
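
To put a rough number on jitter, a toy calculation over a handful of made-up ping times is enough to show the idea:

```python
# Toy example: a few made-up round-trip times (ms), not a real measurement.
rtts_ms = [24.1, 25.3, 23.8, 61.0, 24.6, 88.2, 25.0, 24.4]

mean_latency = sum(rtts_ms) / len(rtts_ms)

# Jitter as the average change between consecutive samples, in the spirit of
# the RFC 3550 interarrival-jitter estimate (without the smoothing filter).
diffs = [abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])]
jitter = sum(diffs) / len(diffs)

print(f"Mean latency: {mean_latency:.1f} ms")
print(f"Jitter:       {jitter:.1f} ms")
```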

Give me fiber or give me death.
