A new networking technology that can significantly lower latency is starting to make its way into networks. The new standard, L4S (Low Latency, Low Loss, Scalable Throughput), was published by the IETF in January 2023.
You might ask why we need better latency. Most of you have taken speed tests that measure latency with a ping test. That's a measure of the round-trip time for a single small packet sent between your connection and a nearby test server. The FCC says any latency below 100 ms (milliseconds) is acceptable, and unless you're using high-orbit satellite broadband, you'll likely never see a ping latency over 100 ms.
However, a ping test doesn't tell you anything about latency while you're actually using your broadband. A lot of the problems users have with broadband come from latency when the network is under load, a problem often referred to as bufferbloat. Measuring latency under load means looking at the accumulated latency from all components of an Internet connection. Every component in the network – the switches and routers at both ends of a connection, plus your modem – has a limit on the volume of data it can carry in any given second. If everything is working right, latency under load stays low, since packets are being delivered to your computer as intended.
Connections are rarely perfect, and that's when troubles begin. Let's say that your home router gets temporarily busy because the folks in your home are doing multiple tasks at the same time. If your home connection gets busy, packets can pile up, and many get dropped. This prompts the originating ISP to resend the lost packets. Your home router has a buffer that is supposed to compensate by temporarily holding packets, but that often doesn't work as planned, particularly for real-time transmissions like a Teams video conference. Every resent packet adds more time to the latency of a given connection, and the more of the incoming traffic that consists of resent packets, the greater the chance of an even bigger backlog.
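The queue buildup described above can be sketched in a few lines. This is a toy simulation with made-up, illustrative numbers (link rate, arrival rate, and buffer size are my own assumptions, not measurements from any real network) showing how an oversized, unmanaged buffer converts congestion into added latency:

```python
# Toy model of a bottleneck link with a large FIFO buffer.
# All numbers are illustrative assumptions.

LINK_RATE_PPS = 1000     # bottleneck drains 1,000 packets per second
ARRIVAL_RATE_PPS = 1200  # under load, senders push 1,200 packets per second
BUFFER_PKTS = 500        # a big, unmanaged buffer

queue_len = 0
for second in range(1, 6):
    # Each second, 200 more packets arrive than can drain, until the
    # buffer caps out -- after that, the excess packets are dropped.
    queue_len = min(BUFFER_PKTS, queue_len + ARRIVAL_RATE_PPS - LINK_RATE_PPS)
    delay_ms = queue_len / LINK_RATE_PPS * 1000  # wait time for a new arrival
    print(f"after {second}s: {queue_len} packets queued, ~{delay_ms:.0f} ms added latency")
```

A newly arriving packet has to wait behind everything already queued, so once the buffer fills, every packet carries a half-second penalty even though nothing is "broken" – that's bufferbloat in miniature.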
You may not have noticed, but the Ookla speed test also tells you about your latency under load. Immediately to the right of the ping latency are the average download and upload latencies measured during the speed test. These two readings are a better indicator of your network performance.
If you really want to understand your latency, watch those numbers during the speed test. In writing this blog, I took speed tests on my computer and cellphone. My ping on Charter was 34 ms – a little slower than what I normally see. The average download latency was 147 ms, and during the test, I saw one reading over 400 ms. The average upload latency was 202 ms, with the highest reading I saw at 695 ms (seven tenths of a second). My AT&T cellphone latencies were higher (which is normal). The ping time was 46 ms. The average download latency was 585 ms, with the highest reading I saw at over 1 second. The average upload latency was 102 ms, with the highest reading over 300 ms. The high readings of latency under load explain why I often struggle with real-time activities.
How does L4S fix this problem? First, the various components of your network have to enable L4S. The most important components are the originating server, the switches at your ISP, and your home router. When these network components have enabled L4S, the goal is to reduce the time that packets spend waiting in queue. L4S adds a marking to packets that reports the congestion they experienced moving through the Internet. L4S doesn't react if everything is working fine. But if there are delays, the originator of the transmission is asked to slow down its rate of sending packets (as are other enabled components in the network). This quick, temporary slowdown stops packets from building up and can drastically reduce the percentage of dropped packets.
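The packet marking mentioned above lives in the two ECN bits of the IP header's traffic-class byte. As a rough sketch, a sender can request the L4S codepoint ECT(1) on a socket like this – with the caveat that whether the operating system and the network path actually honor the marking depends on the platform, and real L4S transports do far more than set one bit:

```python
import socket

# ECN codepoints (low two bits of the IP TOS / traffic-class byte):
#   0b00 = Not-ECT, 0b10 = ECT(0), 0b01 = ECT(1), 0b11 = CE
ECT_1 = 0b01  # ECT(1): "this flow uses an L4S-style scalable transport"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, ECT_1)

# An L4S-aware bottleneck that sees its queue delay rising rewrites
# ECT(1) to CE (0b11) instead of dropping the packet; the receiver
# echoes that signal back so the sender eases off almost immediately.
tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(f"ECN bits on outgoing datagrams: {tos & 0b11:#04b}")
sock.close()
```

The key design point is that marking is cheap and immediate: the bottleneck signals "queue building" without throwing anything away, which is why L4S can keep queues short without the packet loss and resends described earlier.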
Comcast has started to work L4S into its network in some of its major markets. It reports that the technology can cut latency under load at least in half, in some cases bringing it close to the ping latency.
The real key to making this work is to have the largest content providers build L4S into their networks. For example, a gaming provider would need to make sure L4S is enabled at its serving data centers to take advantage of the improved latency. If the Comcast trials are successful, it seems likely that much of the industry will adopt L4S, and savvy users will avoid applications that don't use it.
There will be an interesting shift in the industry if use of L4S becomes widespread. A lot of customers have upgraded broadband speeds to get better performance but found that they didn't see a big improvement. In a lot of cases, the real culprit in bad performance is bufferbloat. If L4S gets introduced everywhere, customers might find they are satisfied with slower broadband speeds.
If you want to dig deeper into the new standard, you can find it here.
I have sometimes pointed out that SQM (Smart Queue Management), available now for over 12 years, also solves bufferbloat thoroughly, without requiring the network-wide changes that L4S does. Some variant of it is already available from many vendors like eero, MikroTik, and bunt, in addition to third-party router firmwares like OpenWrt.
Various vendors make it (fq_codel (RFC 8290) or CAKE) available for ISPs – Preseem, LibreQoS, Bequant, Paraqum. We'll probably get around to supporting L4S at some point too, but it is really the fair queuing that makes it better than the L4S baseline.
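The fair-queuing half of fq_codel and CAKE is easy to show in miniature. This is a toy sketch (flow names and packet counts are invented for illustration, and the per-queue CoDel delay-based dropping that the real qdisc adds is omitted): each flow gets its own queue, and the queues are served round-robin, so a bulk transfer can't bury anyone else's packets behind its own.

```python
import collections

queues = {}  # flow id -> that flow's own FIFO queue

def enqueue(flow_id, packet):
    """Place each flow's packets in a queue dedicated to that flow."""
    queues.setdefault(flow_id, collections.deque()).append(packet)

def drain_round_robin():
    """Visit each flow's queue in turn, sending one packet per visit."""
    sent = []
    while any(queues.values()):
        for flow_id, q in queues.items():
            if q:
                sent.append((flow_id, q.popleft()))
    return sent

# A bulk download queues 6 packets before a video call queues 2.
for i in range(6):
    enqueue("bulk-download", f"data-{i}")
for i in range(2):
    enqueue("video-call", f"frame-{i}")

sent = drain_round_robin()
print([flow for flow, _ in sent])
# In a plain FIFO the video frames would leave 7th and 8th, behind the
# whole download; with fair queuing they leave 2nd and 4th.
```

This is why commenters here argue fair queuing beats the L4S baseline: the latency-sensitive flow gets low delay even when a neighbor flow doesn't slow down at all.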
Thank you for detailing how to *see* bloat at such length above. I’ve ranted quite a lot about the flaws of speed tests elsewhere: https://blog.cerowrt.org/post/speedtests/
For those of us using products like LibreQoS ( https://libreqos.io/ ), we've been on this for years already, offering our clients A+ results on this test: https://www.waveform.com/tools/bufferbloat
Worth pointing out that L4S essentially requires end-to-end support and a comprehensive reform of all links between content provider and consumer. This is somewhat easier for a company like Comcast that has peering relationships with big vendors, but not so much for services with more providers in the path.
As others have stated, we have really good technology available today in SQM (fq_codel, CAKE) and a number of fantastic software products in the ISP space that massively improve this for pennies.
Just having ISPs implement SQM makes a massive improvement. Add it in at the customer side, especially to handle their upload traffic, and the network is dramatically better without any substantial overhaul.
Doug — as mentioned by Dan, L4S requires end-to-end support — at the endpoints (to mark the packets correctly) and in forwarding nodes (to honor the markings with the desired queuing behavior). It is pretty remarkable that Comcast has gotten traction on this. Kudos to them! Apparently Apple's iOS now supports it. Let's see when YouTube, Netflix, etc. also support it.
As Trendal has also pointed out, LibreQoS does a great job with fq_codel & CAKE, and does not require such changes.
I speculate that L4S is easier to implement in hardware. LibreQoS does up to (not sure, but maybe) 40 Gbps on commodity servers and smart NICs. That's great, but backbone links these days are 100+ Gbps, so I can see a place for both approaches.