Standards for 5G

Despite all of the hype that 5G is right around the corner, it’s important to remember that there is not yet a complete standard for the new technology.

The industry just took a big step on February 22 when the ITU released a draft of what it hopes will be the final specification for 5G. The document is heavy on engineering detail and is not written for the layman. You will see that the draft talks about a specification for ‘IMT-2020’, which is the official name for 5G. The goal is for this draft to be accepted at a meeting of the ITU-R Study Group in November.

This latest version of the standard defines 13 metrics that are the ultimate goals for 5G. A full 5G deployment would include all of these metrics. What we know we will see instead is commercial deployments from vendors claiming to have 5G, but which actually meet only some parts of a few of these metrics. We saw this before with 4G, where the recent deployment of LTE-Advanced is the first 4G product that actually meets most of the original 4G standard. We probably won’t see a cellular deployment that meets any of the 13 5G metrics until at least 2020, and it might be five to seven more years after that until fully compliant 5G cellular is deployed.

The metric that is probably the most interesting is the one that establishes the goal for cellular speeds. The goals of the standard are 100 Mbps download and 50 Mbps upload. Hopefully this puts to bed the exaggerated press articles that keep talking about gigabit cellphones. And even should the technology meet these target speeds, in a real-life deployment the average user will probably receive only half those speeds, because cellular speeds decrease rapidly with distance from a cell tower. Somebody standing right next to a cell tower might get 100 Mbps, but even as close as a mile away the speeds will be considerably less.
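To put some rough numbers on why speed falls with distance, here’s a minimal sketch in Python that applies the standard Shannon capacity formula (C = B·log2(1 + SNR)) to a textbook log-distance path-loss model. Every parameter here is a made-up illustration, not anything from the 5G standard; the point is only the downward trend as the signal-to-noise ratio drops with distance.

```python
import math

def shannon_capacity_mbps(bandwidth_hz: float, snr_db: float) -> float:
    """Theoretical ceiling C = B * log2(1 + SNR), returned in Mbps."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear) / 1e6

# Hypothetical link budget, for illustration only.
TX_EIRP_DBM = 60          # transmit power plus antenna gain
BANDWIDTH_HZ = 20e6       # a 20 MHz channel
NOISE_FIGURE_DB = 7
PATH_LOSS_EXPONENT = 3.5  # typical urban value; free space would be 2.0
PL_AT_100M_DB = 80        # assumed reference path loss at 100 meters

noise_floor_dbm = -174 + 10 * math.log10(BANDWIDTH_HZ) + NOISE_FIGURE_DB

for distance_m in (100, 400, 800, 1600):
    # Log-distance model: PL(d) = PL(d0) + 10 * n * log10(d / d0)
    path_loss_db = PL_AT_100M_DB + 10 * PATH_LOSS_EXPONENT * math.log10(distance_m / 100)
    snr_db = TX_EIRP_DBM - path_loss_db - noise_floor_dbm
    print(f"{distance_m:>5} m: SNR {snr_db:5.1f} dB, "
          f"ceiling ~{shannon_capacity_mbps(BANDWIDTH_HZ, snr_db):6.0f} Mbps")
```

Real-world rates sit well below these theoretical ceilings once interference, shared spectrum and protocol overhead are added, which is why the standard’s 100 Mbps goal is ambitious at the cell edge.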

Interestingly, these speed goals are not much faster than what is being realized by LTE-Advanced today. But the new 5G standard should provide for more stable and guaranteed data connections. The standard calls for a 5G cell site to be able to connect to up to 1 million devices per square kilometer (a little more than a third of a square mile). This, plus several other metrics, ought to result in stable 5G cellular connections – quite different from what we are used to with 4G connections. The real goal of the 5G standard is to provide connections to piles of IoT devices.

The other big improvement over 4G is the expectation for latency. Today’s 4G connections have data latencies as high as 20 ms, which accounts for most of the problems in loading web pages or watching video on cellphones. The new standard is 4 ms latency, which would bring cellular latency to around the same level that we see today on fiber connections. The new 5G standard for handing off calls between adjoining cell sites is 0 ms, or zero delay.
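To see why a latency improvement matters more than raw speed for web browsing, here’s a minimal sketch with made-up numbers. Loading a page typically requires a series of dependent round trips (DNS, the TCP handshake, then requests that can’t start until earlier ones finish), so round-trip time gets multiplied:

```python
def page_load_wait_ms(rtt_ms: float, sequential_round_trips: int = 30) -> float:
    """Crude model: total wait dominated by back-to-back dependent round trips."""
    return rtt_ms * sequential_round_trips

for label, rtt_ms in (("4G today (~20 ms)", 20), ("5G goal (~4 ms)", 4)):
    print(f"{label}: ~{page_load_wait_ms(rtt_ms):.0f} ms of pure round-trip waiting")
# 4G today (~20 ms): ~600 ms of pure round-trip waiting
# 5G goal (~4 ms): ~120 ms of pure round-trip waiting
```

The count of 30 round trips is purely illustrative, but the multiplication is why pages feel sluggish on high-latency connections even when the download speed is fine.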

The standard increases the potential capacity of cell sites and sets a goal for a cell site to be able to process peak data rates of 20 Gbps down and 10 Gbps up. Of course, that means bringing a lot more bandwidth to cell towers, and only extremely busy urban towers will ever need that much capacity. Today the majority of fiber-fed cell towers are fed with 1 Gbps backbones that are used to satisfy upload and download combined. We are seeing cellular carriers inquiring about 10 Gbps backbones, and we need a lot more growth to meet the capacity built into the standard.
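A quick back-of-the-envelope sketch, using only the figures above, shows the size of that backhaul gap:

```python
# Peak cell-site rates from the draft 5G standard (Gbps).
PEAK_DOWN_GBPS, PEAK_UP_GBPS = 20, 10

# Backhaul sizes discussed above; one link carries both directions.
for backhaul_gbps in (1, 10):
    ratio = (PEAK_DOWN_GBPS + PEAK_UP_GBPS) / backhaul_gbps
    print(f"A {backhaul_gbps} Gbps backbone covers the standard's peak "
          f"rates only at {ratio:.0f}:1 oversubscription")
# A 1 Gbps backbone covers the standard's peak rates only at 30:1 oversubscription
# A 10 Gbps backbone covers the standard's peak rates only at 3:1 oversubscription
```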

There are a number of other metrics in the standard. Included is a requirement for greater energy efficiency, which ought to help save on handset batteries – the new standard allows for handsets to go to ‘sleep’ when not in use. There is a metric for peak spectral efficiency, which would enable 5G to make much better use of existing spectrum. There are also specifications for mobility that extend the goal to working with vehicles going as fast as 500 kilometers per hour – meaning high-speed trains.

Altogether the 5G standard improves almost every aspect of cellular technology. It calls for more robust cell sites, improved quality of the data connections to devices, lower energy requirements and more efficient hand-offs. But interestingly, contrary to the industry hype, it does not call for gigantic increases in cellular handset data speeds compared to a fully compliant 4G network. The real improvements from 5G are to make sure that people can get connections at busy cell sites while also providing for huge numbers of connections to smart cars and IoT devices. A 5G connection is going to feel faster because you ought to almost always be able to make a 5G connection, even in busy locations, and because the connection will have low latency and be stable, even in moving vehicles. It will be a noticeable improvement.

Latency and Broadband Performance

The industry always talks about latency as one of the two measures (along with download speed) that define a good broadband connection. I thought today I’d talk about latency.

As a reference, the standard definition of latency is that it’s a measure of the time it takes for a data packet to travel from its point of origin to its point of destination.
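For readers who want to see latency rather than read about it, here’s a minimal Python sketch that approximates round-trip latency by timing a TCP handshake, which takes one round trip to complete. The host name is just an example, and this measures your full path (home WiFi included), not any single segment:

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Approximate round-trip latency by timing a TCP three-way handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # the connection opened; we only wanted the handshake time
    return (time.perf_counter() - start) * 1000

samples = sorted(tcp_rtt_ms("example.com") for _ in range(5))
print(f"min {samples[0]:.1f} ms, median {samples[2]:.1f} ms")
```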

There are a lot of underlying causes for the delays that add up to latency – the following are the primary kinds:

  • Transmission Delay. This is the time required to push a packet out the door at the originating end of a transmission. It is mostly a function of the router and software used at the originating server, and it is also influenced by packet length – it generally takes longer to push out one long packet than a short one. These delays are caused by the originator of an Internet transmission.
  • Processing Delay. This is the time required to process a packet header, check for bit-level errors and to figure out where the packet is to be sent. These delays are caused by the ISP of the originating party. There are additional processing delays along the way every time a transmission has to ‘hop’ between ISPs or networks.
  • Propagation Delay. This is the delay due to the distance a signal travels. It takes a lot longer for a signal to travel from Tokyo to Baltimore than it takes to travel from Washington DC to Baltimore. This is why speed tests try to find a nearby router to ping so that they can eliminate latency due to distance. These delays are mostly a function of physics and the speed at which signals can be carried through cables.
  • Queueing Delay. This measures the amount of time that a packet waits at the terminating end to be processed. This is a function of both the terminating ISP and also of the customer’s computer and software.

Total latency is the combination of all of these delays. You can see by looking at these simple definitions that poor latency can be introduced at multiple points along an Internet transmission, from beginning to end.
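To make “the combination of all of these delays” concrete, here’s a minimal sketch that adds the four delays for one packet. The transmission and propagation terms follow the standard formulas (packet size over link rate, and distance over signal speed); the processing and queueing numbers are made up for illustration:

```python
SIGNAL_SPEED_KM_PER_MS = 200  # roughly 2/3 the speed of light, typical for fiber

def one_way_latency_ms(packet_bits: float, link_mbps: float, distance_km: float,
                       processing_ms: float, queueing_ms: float) -> float:
    transmission_ms = packet_bits / (link_mbps * 1000)  # bits over bits-per-ms
    propagation_ms = distance_km / SIGNAL_SPEED_KM_PER_MS
    return transmission_ms + processing_ms + propagation_ms + queueing_ms

# A 1,500-byte packet on a 100 Mbps link traveling 1,000 km,
# with 1 ms of processing and 2 ms of queueing (illustrative numbers).
print(f"{one_way_latency_ms(1500 * 8, 100, 1000, 1.0, 2.0):.2f} ms")  # 8.12 ms
```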

The technology of the last mile is generally the largest factor influencing latency. A few years ago the FCC did a study of the various last-mile technologies and measured the following ranges of last-mile latency: fiber (10-20 ms), coaxial cable (15-40 ms), and DSL (30-65 ms). These are measures of latency between a home and the first node in the ISP network. It is these latency differences that cause people to prefer fiber. The experience on a 30 Mbps fiber connection “feels” faster than the same speed on a DSL or cable network connection due to the reduced latency.

It is this last-mile technology latency that makes wireless connections seem slow. Cellular latencies vary widely depending upon the exact generation of equipment at any given cell site, but 4G latency can be as high as 100 ms. In the same FCC test that produced the latencies shown above, satellite was almost off the chart, with latencies measured as high as 650 ms.

The next biggest factor influencing latency is the network path between the originating and terminating ends of a signal. Every time a signal hits a network node, the router must examine the packet header to determine the route and may run other checks on the data. These stops at routers and transitions between networks are referred to in the industry as hops, and each hop adds latency.

There are techniques and routing schemes that can reduce the latency that comes from extra hops. For example, most large ISPs peer with each other, meaning they pass traffic directly between their networks and avoid the open Internet. By doing so they reduce the number of hops needed to pass a signal between their networks. Companies like Netflix also use caching, storing content closer to users so that the signal isn’t originating from their distant core servers.
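Here’s a minimal sketch of why caching helps so much, counting only propagation delay. The distances are hypothetical, but the ~200 km/ms figure for signals in fiber is standard:

```python
SIGNAL_SPEED_KM_PER_MS = 200  # approximate signal speed in fiber optic cable

def round_trip_propagation_ms(distance_km: float) -> float:
    """Round-trip propagation delay: out and back at fiber signal speed."""
    return 2 * distance_km / SIGNAL_SPEED_KM_PER_MS

print(f"Origin server 9,000 km away: {round_trip_propagation_ms(9000):.0f} ms per round trip")
print(f"Cache node 50 km away:       {round_trip_propagation_ms(50):.1f} ms per round trip")
# Origin server 9,000 km away: 90 ms per round trip
# Cache node 50 km away:       0.5 ms per round trip
```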

Internet speeds also come into play. The transmission delay is heavily influenced by the upload speed at the originating end of a transmission, and the queueing delay is influenced by the download speed at the terminating end. A simple example illustrates this: a 10 Mb file takes one-tenth of a second to download on a 100 Mbps connection and ten seconds on a 1 Mbps connection.
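That example is simply file size divided by line speed, ignoring latency and protocol overhead; a one-line helper makes the relationship explicit:

```python
def ideal_transfer_seconds(size_megabits: float, speed_mbps: float) -> float:
    """Best-case transfer time: file size divided by line rate."""
    return size_megabits / speed_mbps

print(ideal_transfer_seconds(10, 100))  # 0.1 seconds on a 100 Mbps connection
print(ideal_transfer_seconds(10, 1))    # 10.0 seconds on a 1 Mbps connection
```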

A lot of complaints about Internet performance are actually due to latency issues. This is hard to diagnose since latency issues can appear and disappear as Internet traffic between two points takes different routes. But the one thing that is clear is that the lower the latency, the better.

My Thoughts on AT&T AirGig

By now most of you have seen AT&T’s announcement of a new wireless technology they are calling AirGig. This is a technology that can bounce millimeter wave signals along a series of inexpensive plastic antennas perched at the top of utility poles.

The press release is unclear about the speeds that might be delivered by the technology. It says the technology has the potential to deliver multi-gigabit speeds, but at the same time it talks about delivering 4G cellular as well as 5G cellular and fixed broadband. The 4G LTE cellular standard can deliver about 15 Mbps, while the 5G cellular standard (which is still being developed) is expected to eventually increase cellular speeds to about 50 Mbps. So perhaps AT&T plans to use the technology to deploy micro cell sites while also delivering millimeter wave wireless broadband loops. The link above includes a short video which doesn’t clarify this issue very well.

Like any new radio technology, there are bound to be a number of issues involved with moving the technology from the lab to the field. I can only speculate at this point, but I can foresee the following as potential issues with the millimeter wave part of the technology:

  • The video implies that the antennas will be used to deliver bandwidth as a broadcast hotspot. I’m not entirely sure that the FCC will even approve this spectrum being used in this manner – at high power it can be a concern for linemen climbing poles and it could create all sorts of havoc by interfering with cable TV networks and TV reception.
  • Millimeter wave spectrum does not travel very far when used as a hot spot. This spectrum has high atmospheric attenuation and is absorbed by gases in the atmosphere. When focused point-to-point the spectrum can work well to about half a mile, but in hot spot mode it’s good, at best, for a few hundred feet and it loses bandwidth quickly with distance (see the path-loss sketch after this list). The bandwidth is only going to reach homes that are close to the pole lines.
  • Millimeter wave spectrum suffers from rain fade, and during a rain storm almost all of the signal is scattered.
  • The spectrum doesn’t penetrate foliage, or much of anything else. So there is going to have to be a clear path between the pole unit and the user. America is a land of residential trees and even in the open plains people plant trees closely around their house as a windbreak.
  • The millimeter wave spectrum won’t penetrate walls, so this will require some sort of outdoor receiver to catch millimeter wave signals.
  • I wonder how the units will handle icing. Where cables tend to shake ice off within a few days, hardware mounted on poles can be ice-covered for months.
  • The technology seems to depend on using multiple wireless hops to go from unit to unit. Wireless hops always introduce latency into the signal and it will be interesting to see how much latency is introduced along rural pole runs.
  • For any wireless network to deliver fast speeds it has to be connected somewhere to fiber backhaul. There are still many rural counties with little or no fiber.
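As promised in the distance bullet above, here’s a minimal sketch of how quickly millimeter wave signals weaken, using the standard free-space path loss formula. It ignores the rain fade, foliage and atmospheric absorption described above, all of which make real-world numbers worse, and 39 GHz is just one example of a millimeter wave frequency:

```python
import math

def free_space_path_loss_db(distance_km: float, freq_ghz: float) -> float:
    """Standard FSPL formula: 20*log10(d_km) + 20*log10(f_GHz) + 92.45."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

for meters in (50, 100, 200, 800):  # 800 meters is roughly half a mile
    loss_db = free_space_path_loss_db(meters / 1000, 39.0)
    print(f"{meters:>4} m at 39 GHz: {loss_db:.0f} dB of free-space loss")
# Every doubling of distance adds another 6 dB of loss, even in clear air.
```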

We have always seen that every wireless technology has practical limitations that make it suitable for some situations and not others. This technology will be no different. In places where this can work it might be an incredible new broadband solution. But there are bound to be situations where the technology will have too many problems to be practical.

I’ve seen speculation that one of the major reasons for this press release is to give pause to anybody thinking of building fiber. After all, why should anybody build fiber if there is cheap multi-gigabit wireless coming to every utility pole? But with all of the possible limitations mentioned above (and others that are bound to pop up in the real world) this technology may only work in some places, or it might not work well at all. This could be the technology we have all been waiting for or it could be a flop. I guess we’ll have to wait and see.

Speed Tests

Netflix just came out with a new speed test at fast.com which is intended to measure the download speed of Internet connections to determine if they are good enough to stream Netflix. The test only measures the speed between a user and the Netflix servers. This is different from most other speed tests on the web, which also look at upload speeds and latency.

This raises the question of how good speed tests are in general. How accurate are they and what do they really tell a user? There are a number of different speed tests to be found on the web. Over the years I have used the ones at speedtest.net (Ookla), dslreports.com, speed.io, the BandWidthPlace and TestMySpeed.

Probably the first thing to understand about speed tests is that they only test the connection between the user and the test site’s routers, and are not necessarily indicative of the speeds for other web activities like downloading files, making a VoIP phone call or streaming Netflix. Each of those activities involves a different type of traffic, and the speed test might not accurately report what a user most wants to know.

Every speed test uses a different algorithm to measure speed. For example, the algorithm for speedtest.net, operated by Ookla, discards the fastest 10% and the slowest 30% of the results obtained. In doing so they might be masking exactly what drove someone to take the speed test, such as not being able to hold a VoIP call. Ookla also multithreads, meaning that they open multiple paths between a user and the test site and then average the results together. This could easily mask congestion problems a user might be having on the local network.
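Here’s a minimal sketch of that kind of trimmed aggregation, my own reconstruction of the general idea rather than Ookla’s actual code, using the discard percentages cited above:

```python
def trimmed_speed_mbps(samples_mbps: list[float]) -> float:
    """Drop the slowest 30% and fastest 10% of samples, then average the rest."""
    ordered = sorted(samples_mbps)
    n = len(ordered)
    kept = ordered[int(n * 0.30): n - int(n * 0.10)]
    return sum(kept) / len(kept)

# Ten one-second throughput samples with a congestion dip in the middle.
samples = [94, 96, 3, 5, 95, 97, 92, 96, 98, 95]
print(f"raw average:     {sum(samples) / len(samples):.1f} Mbps")  # 77.1 Mbps
print(f"trimmed average: {trimmed_speed_mbps(samples):.1f} Mbps")  # 95.5 Mbps
```

The trimmed number is arguably a fairer picture of the line, but notice how completely it hides the dip that a VoIP call would have felt.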

Another big problem with any speed test is that it measures the connection between a customer device and the speed test site. This means that the customer’s own parts of the network, like the home WiFi network, are included in the results. A lot of ISPs I know now claim that poor in-home WiFi accounts for the majority of the speed problems reported by customers. So a slow speed test doesn’t always mean that the ISP is providing a slow connection.

The speed of an Internet connection for any prolonged task changes from second to second. Some of the speed tests, like Netflix’s and Ookla’s, show these fluctuations during the test. There are numerous reasons for changing speeds, largely having to do with network congestion at various points in the network. If one of your neighbors makes a big download demand during your speed test you are likely to see a dip in bandwidth. And this same network contention can happen at any one of numerous parts of the network.

The bottom line is that speed tests are not much more than an indicator of how your network is performing. If you test your speed regularly, then a slow result can be an indicator that something is wrong. But if you only check once in a while, any one speed test only tells you about the minute you took the test and not a whole lot more. A single slow speed test is not yet a reason to call your ISP.

There have been rumors around the industry that the big ISPs fudge on the common speed tests. It would be relatively easy for them to do this by giving priority routing to anybody using one of the speed test web sites. I have no idea if they do this, but it would help to explain those times when a speed test tells me I have a fast connection and low latency and yet can’t seem to get things to work.

I think the whole purpose of the Netflix speed test is to put pressure on ISPs that can’t deliver a Netflix-capable connection. I don’t know how much good that will do because such connections are likely going to be on old DSL and other technologies where the ISP already knows the speeds are slow.