Latency and Broadband Performance

The industry always talks about latency as one of the two factors (along with download speed) that define a good broadband connection. I thought today I’d talk about latency.

As a reference, latency is a measure of the time it takes for a data packet to travel from its point of origin to its point of destination.

There are a lot of underlying causes of the delays that add up to latency – the following are the primary kinds of delay:

  • Transmission Delay. This is the time required to push packets out the door at the originating end of a transmission. This is mostly a function of the kind of router and software used at the originating server. It is also influenced by packet length – it generally takes longer to push a long packet onto the network than a short one. These delays are caused by the originator of an Internet transmission.
  • Processing Delay. This is the time required to process a packet header, check for bit-level errors, and figure out where the packet should be sent. These delays are caused by the ISP of the originating party. There are additional processing delays along the way every time a transmission has to ‘hop’ between ISPs or networks.
  • Propagation Delay. This is the delay due to the distance a signal travels. It takes a lot longer for a signal to travel from Tokyo to Baltimore than it takes to travel from Washington DC to Baltimore. This is why speed tests try to find a nearby router to ping so that they can eliminate latency due to distance. These delays are mostly a function of physics and the speed at which signals can be carried through cables.
  • Queueing Delay. This measures the amount of time that a packet waits at the terminating end to be processed. This is a function of both the terminating ISP and also of the customer’s computer and software.

Total latency is the combination of all of these delays. You can see by looking at these simple definitions that poor latency can be introduced at multiple points along an Internet transmission, from beginning to end.
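
To make those definitions concrete, here is a minimal sketch (in Python) that adds the four delays together. All of the numbers in it – the link speed, packet size, distance, and the processing and queueing allowances – are hypothetical figures chosen for illustration, not measurements.

```python
# A minimal sketch of total one-way latency as the sum of the four delays above.
# All figures are illustrative assumptions, not measurements.

SIGNAL_SPEED_KM_PER_MS = 200  # rough speed of light in fiber (about 2/3 of c)

def propagation_delay_ms(distance_km: float) -> float:
    """Delay due purely to the distance the signal travels."""
    return distance_km / SIGNAL_SPEED_KM_PER_MS

def transmission_delay_ms(packet_bits: int, link_mbps: float) -> float:
    """Time to push one packet out the door at the originating end."""
    return packet_bits / (link_mbps * 1000)  # 1 Mbps = 1,000 bits per millisecond

def total_latency_ms(distance_km, packet_bits, link_mbps,
                     processing_ms=1.0, queueing_ms=2.0):
    """Transmission + processing + propagation + queueing delay."""
    return (transmission_delay_ms(packet_bits, link_mbps)
            + processing_ms
            + propagation_delay_ms(distance_km)
            + queueing_ms)

# Example: a 1,500-byte packet over a 10 Mbps uplink traveling 1,000 km.
print(round(total_latency_ms(1000, 1500 * 8, 10), 2), "ms")  # -> 9.2 ms
```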

The technology of the last mile is generally the largest factor influencing latency. A few years ago the FCC did a study of the various last-mile technologies and measured the following ranges of last-mile latency: fiber (10-20 ms), coaxial cable (15-40 ms), and DSL (30-65 ms). These are measures of latency between a home and the first node in the ISP network. It is these latency differences that cause people to prefer fiber. The experience on a 30 Mbps download connection over fiber “feels” faster than the same speed over DSL or cable due to the reduced latency.
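
As a rough illustration of why the same 30 Mbps speed “feels” faster on fiber, the sketch below estimates the time to fetch a small web object as a handful of round trips plus the transfer itself. The latency values are taken from roughly the middle of the FCC ranges above; the object size and the number of round trips are assumptions.

```python
# Rough model of fetching one small web object: a few round trips
# (DNS, connection setup, request) plus the transfer itself.
# Object size and round-trip count are hypothetical.

def fetch_time_ms(object_kb: float, speed_mbps: float,
                  latency_ms: float, round_trips: int = 4) -> float:
    transfer_ms = (object_kb * 8) / speed_mbps  # kilobits / (kilobits per ms)
    return round_trips * latency_ms + transfer_ms

# Same 30 Mbps download speed, different last-mile latency.
for tech, latency in [("fiber", 15), ("cable", 30), ("DSL", 50)]:
    print(f"{tech:>5}: {fetch_time_ms(100, 30, latency):.0f} ms")
```

Even though the transfer portion is identical in all three cases, the lower-latency connection finishes the whole fetch noticeably sooner, which is why the fiber connection “feels” faster.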

It is this last-mile technology latency that makes wireless connections seem slow. Cellular latencies vary widely depending upon the exact generation of equipment at any given cell site, but 4G latency can be as high as 100 ms. In the same FCC test that produced the latencies shown above, satellite was almost off the chart, with latencies measured as high as 650 ms.

The next biggest factor influencing latency is the network path between the originating and terminating ends of a signal. Every time a signal hits a network node, the new router must examine the packet header to determine the route and may run other checks on the data. Each of these transitions between routers or networks is referred to in the industry as a hop, and each hop adds latency.
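
Here is a sketch of how hops add up, assuming a hypothetical four-hop path; the link distances and per-router processing times are invented for illustration.

```python
# Hypothetical four-hop path; each hop adds propagation time for the link
# plus processing time at the next router.

SIGNAL_SPEED_KM_PER_MS = 200  # rough speed of light in fiber

hops = [
    # (link distance in km, processing delay at the next router in ms)
    (50, 0.5),     # home to the ISP's first node
    (400, 0.5),    # regional backbone
    (1200, 1.0),   # handoff to another network
    (30, 0.5),     # terminating ISP to the destination server
]

total_ms = sum(distance / SIGNAL_SPEED_KM_PER_MS + processing
               for distance, processing in hops)
print(f"{total_ms:.2f} ms one way across {len(hops)} hops")  # -> 10.90 ms
```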

There are techniques and routing schemes that can reduce the latency that comes from extra hops. For example, most large ISPs peer with each other, meaning they pass traffic directly between their networks and avoid the open Internet, reducing the number of hops needed to move a signal between them. Companies like Netflix also use caching, storing content closer to users so that the signal doesn’t have to originate from their core servers.
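
As a back-of-the-envelope illustration of why caching helps, here is a comparison of one-way propagation delay to a distant core server versus a nearby cache; both distances are hypothetical.

```python
SIGNAL_SPEED_KM_PER_MS = 200  # rough speed of light in fiber

def propagation_ms(distance_km: float) -> float:
    return distance_km / SIGNAL_SPEED_KM_PER_MS

# Hypothetical distances: a core server across the country vs. a cache nearby.
print("distant origin server:", propagation_ms(4000), "ms one way")  # 20.0 ms
print("nearby cache:         ", propagation_ms(200), "ms one way")   #  1.0 ms
```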

Internet speeds also come into play. The transmission delay is heavily influenced by the upload speed at the originating end of a transmission, and the queueing delay is influenced by the download speed at the terminating end. A simple example illustrates the impact of speed: a 10-megabit file takes one-tenth of a second to download on a 100 Mbps connection and ten seconds on a 1 Mbps connection.
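
Spelling out that arithmetic (file size in megabits divided by connection speed in megabits per second):

```python
# Transfer time for the 10-megabit file in the example above.
file_megabits = 10

for speed_mbps in [1, 10, 100]:
    seconds = file_megabits / speed_mbps
    print(f"{speed_mbps:>3} Mbps: {seconds:g} s")  # 10 s, 1 s, 0.1 s
```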

A lot of complaints about Internet performance are actually due to latency issues. It’s something that’s hard to diagnose since latency issues can come and go as Internet traffic between two points takes different routes. But the one thing that is clear is that the lower the latency, the better.
