FCC Further Defines Speed Tests

The FCC recently voted to tweak the rules for speed testing for ISPs that accept federal funding from the Universal Service Fund or from other federal funding sources. This includes all rate-of-return carriers, including those taking ACAM funding, carriers that won the CAF II reverse auctions, recipients of the Rural Broadband Experiment (RBE) grants, Alaska Plan carriers, and likely carriers that took funding in the New York version of the CAF II award process. These new testing rules will also apply to carriers accepting the upcoming RDOF grants.

The FCC had originally released testing rules in July 2018 in Order DA 18-710. Those rules applied to the carriers listed above as well as to all price cap carriers and recipients of the CAF II program. The big telcos will start testing in January of 2020, and the FCC should soon release a testing schedule for everybody else – the dates for testing were delayed until this revised order was issued.

The FCC made the following changes to the testing program:

  • Modifies the schedule for commencing testing by basing it on the deployment obligations specific to each Connect America Fund support mechanism;
  • Implements a new pre-testing period that will allow carriers to become familiar with testing procedures without facing a loss of support for failure to meet the requirements;
  • Allows greater flexibility to carriers for identifying which customer locations should be tested and selecting the endpoints for testing broadband connections. This last change sounds to me like the FCC is letting the CAF II recipients off the hook by allowing them to only test customers they know meet the 10/1 Mbps speeds.

The final order should be released soon and will hopefully answer carrier questions. One of the areas of concern is that the FCC seems to want to test the maximum speeds that a carrier is obligated to deliver. That might mean having to give customers the fastest connection during the time of the tests even if they have subscribed to slower speeds.

Here are some of the key provisions of the testing program that were not changed by the recent order:

  • ISPs can choose among three methods for testing. First, they may elect what the FCC calls the MBA program, which uses an external vendor, approved by the FCC, to perform the testing. This firm has been testing speeds for the networks built by the large telcos for many years. Second, ISPs can use existing network tools built into the customer CPE that allow test pinging and other testing methodologies. Finally, an ISP can install ‘white boxes’ that provide the ability to perform the tests.
  • Testing, at least for now, is perpetual, and carriers need to recognize that this is a new cost they must bear as a result of taking federal funding.
  • The number of tests to be conducted varies with the number of customers for which a recipient is getting support: with 50 or fewer supported households the test covers 5 customers; for 51-500 households the test covers 10% of households; for more than 500 households the test covers 50 households. ISPs declaring high latency must test more locations, with the maximum being 370 (a rough sketch of these tiers follows this list).
  • Tests for a given customer run for one solid week, including weekends, in each quarter. Tests must be conducted in the evenings between 6:00 PM and 12:00 AM (midnight). Latency tests must be done every minute during the six-hour testing window. Speed tests – run separately for upload speeds and download speeds – must be done once per hour during the six-hour testing window.
  • ISPs are expected to meet latency standards 95% of the time. Speed tests must achieve 80% of the expected upload and download speed 80% of the time. For example, a carrier guaranteeing a gigabit of speed must achieve 800 Mbps 80% of the time. ISPs that meet the speeds and latencies for 100% of customers are excused from quarterly testing and only have to test once per year.
  • There are financial penalties for ISPs that don’t meet these tests.
  • ISPs that have between 85% and 100% of households that meet the test standards lose 5% of their FCC support.
  • ISPs that have between 70% and 85% of households that meet the test standards lose 10% of their FCC support.
  • ISPs that have between 55% and 70% of households that meet the test standards lose 15% of their FCC support.
  • ISPs with less than 55% of compliant households lose 25% of their support.
  • The penalties only apply to funds that haven’t yet been collected by an ISP.
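For carriers trying to plan around these rules, the sample-size tiers and the penalty tiers both reduce to simple arithmetic. Here is a minimal sketch in Python of how an ISP might estimate its required number of test locations and the share of support at risk at a given compliance rate; the function names and the example inputs are hypothetical, and the tiers simply restate the ones described above (standard-latency case only).

```python
def required_test_locations(supported_households: int) -> int:
    """Test sample size per the tiers described above (standard-latency case)."""
    if supported_households <= 50:
        return min(5, supported_households)
    if supported_households <= 500:
        return round(supported_households * 0.10)
    return 50  # capped at 50 locations for larger carriers

def support_withheld_pct(compliance_pct: float) -> float:
    """Share of FCC support withheld for a given % of compliant households."""
    if compliance_pct >= 100:
        return 0.0   # fully compliant carriers are presumably not penalized
    if compliance_pct >= 85:
        return 5.0
    if compliance_pct >= 70:
        return 10.0
    if compliance_pct >= 55:
        return 15.0
    return 25.0

# Hypothetical example: a carrier supported at 1,200 households, 82% compliant.
print(required_test_locations(1200))   # -> 50 test locations
print(support_withheld_pct(82.0))      # -> 10.0 (% of support withheld)
```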

Should Satellite Broadband be Subsidized?

I don’t get surprised very often in this industry, but I must admit that I was surprised by the amount of money awarded for satellite broadband in the reverse auction for CAF II earlier this year. Viasat, Inc., which markets as Exede, was the fourth largest winner, collecting $122.5 million in the auction.

I understand how Viasat won – it’s largely a function of the way that reverse auctions work. In a reverse auction, each bidder lowers the amount of their bid in successive rounds until only one bidder is left in any competitive situation. The whole pool of bids is then adjusted to meet the available funds, which could mean an additional reduction of what winning bidders finally receive.

Satellite providers, by their very nature, have a huge unfair advantage over every other broadband technology. Viasat was already in the process of launching new satellites – and they would have launched them with or without the FCC grant money. Because of that, there is no grant level too low for them to accept out of the grant process – they would gladly accept getting only 1% of what they initially requested. A satellite company can simply outlast any other bidder in the auction.
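To make that dynamic concrete, here is a toy sketch of the round structure described above. Every name and number in it is invented; the point is simply that a bidder with essentially no price floor stays in until everyone who needs real support has dropped out.

```python
def reverse_auction(bidders, decrement=0.05):
    """Toy reverse auction: the support level drops each round and bidders exit
    once it falls below the minimum they need. bidders maps a name to the
    minimum support the bidder will accept, expressed as a fraction of the
    reserve price. Purely illustrative mechanics."""
    price = 1.0                      # start at 100% of the reserve price
    active = dict(bidders)
    while len(active) > 1 and price > 0:
        price = round(price - decrement, 2)
        active = {name: floor for name, floor in active.items() if floor <= price}
    winner = next(iter(active), None)
    return winner, price

# Hypothetical: a fiber builder needs at least 60% of the reserve to justify the
# build, a fixed-wireless ISP needs 40%, and a satellite provider that is
# launching anyway will take almost anything.
print(reverse_auction({"fiber_isp": 0.60, "wisp": 0.40, "satellite": 0.01}))
# -> ('satellite', 0.35) in this toy run
```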

This is particularly galling since Viasat delivers what the market has already deemed to be inferior broadband. Viasat's download speeds of at least 12 Mbps are fast enough to satisfy the reverse auction rules, and the other current satellite provider, HughesNet, offers speeds of at least 25 Mbps. The two issues that customers have with satellite broadband are the latency and the data caps.

By definition, the latency for a satellite in a 22,000-mile geostationary orbit is at least 476 ms (milliseconds) just to account for the distance traveled to and from the earth. Actual latency is often above 600 ms. The rule of thumb is that real-time applications like VoIP, gaming, or holding a connection to a corporate LAN start having problems when latency is greater than 100-150 ms.
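That 476 ms floor comes straight from the geometry. A request and its response each have to travel up to the satellite and back down, so the signal covers roughly four times the orbital altitude before the user sees anything. A quick back-of-the-envelope check (the altitude of a geostationary orbit is about 22,236 miles):

```python
SPEED_OF_LIGHT_MILES_PER_SEC = 186_282   # speed of light in a vacuum

def geo_propagation_floor_ms(orbit_miles: float = 22_236) -> float:
    """Minimum round-trip propagation delay for a geostationary satellite link.
    A request goes user -> satellite -> ground station and the response returns
    the same way, so the signal traverses the orbital altitude four times."""
    return 4 * orbit_miles / SPEED_OF_LIGHT_MILES_PER_SEC * 1000

print(round(geo_propagation_floor_ms()))   # ~477 ms before any equipment delay
```

Everything above that floor is added by routing, processing, and congestion, which is how real-world satellite latency ends up north of 600 ms.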

Exede no longer cuts customers dead for the month once they reach the data cap; instead, it reduces speeds for any customer over the cap whenever the network is busy. Customer reviews say this can be extremely slow during prime times. The monthly data caps are small, with plans ranging from $49.99 per month for a 10 GB cap to $99.95 per month for a 150 GB cap. To put those caps into perspective, OpenVault recently reported that the average landline broadband household used 273.5 GB of data per month in the first quarter of 2019.

Viasat has to be thrilled with the result of the reverse auction. They got $122.5 million for something they were already doing. The grant money isn't bringing any new option to customers, who were already free to buy these products before the auction. There is no better way to say it: Viasat got free money due to a loophole in the grant process. I don't think they should have been allowed into the auction since they aren't bringing any broadband that is not already available.

The bigger future issue is whether the new low-earth orbit satellite companies will qualify for future FCC grants, such as the $20.4 billion grant program starting in 2021. The new grant programs are also likely to be reverse auctions. There is no doubt that Jeff Bezos or Elon Musk will gladly take government grant money, and there is no doubt that they can underbid any landline ISP in a reverse auction.

For now, we don’t know anything about the speeds that will be offered by the new satellites. We know the companies claim latency will be about the same as cable TV networks, at about 25 ms. We don’t know about data plans and data caps, although Elon Musk has hinted at offering unlimited data plans – we’ll have to wait to see what is actually offered.

It would be a tragedy for rural broadband if the new (and old) satellite companies were to win any substantial amount of the new grant money. To be fair, the new low-orbit satellite networks are expensive to launch, with price tags for each of the three providers estimated to be in the range of $10 billion. But these companies will use these satellites worldwide and will launch them with or without help from an FCC subsidy. Rural customers will be best served in the long run by having somebody build a network in their neighborhood; it's icing on the cake if they are also able to buy satellite broadband.

Why Offer Fast Data Speeds?

A commenter on an earlier blog asked a great question. They observed that most ISPs say that customer usage doesn’t climb when customers are upgraded to speeds faster than 50 Mbps – so why does the industry push for faster speeds? The question was prompted by the observation that the big cable companies have unilaterally increased speeds in most markets to between 100 Mbps and 200 Mbps. There are a lot of different answers to that question.

First, I agree with that observation and I’ve heard the same thing. The majority of households today are happy with a speed of 50 Mbps, and when a customer that already has enough bandwidth is upgraded they don’t immediately increase their downloading habits.

I’ve lately been thinking that 50 Mbps ought to become the new FCC definition of broadband, for exactly the reasons included in the question. This seems to be the speed today where most households can use the Internet in the way they want. I would bet that many households that are happy at 50 Mbps would no longer be happy with 25 Mbps broadband. It’s important to remember that just three or four years ago the same thing could have been said about 25 Mbps, and three or four years before that the same was true of 10 Mbps. One reason to offer faster speeds is to stay ahead of that growth curve. Household bandwidth and speed demand has been doubling every three years or so since 1980. While 50 Mbps is a comfortable level of home bandwidth for many today, in just a few years it won’t be.
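If you take the doubling-every-three-years observation at face value, it's easy to project when today's comfortable speed stops being comfortable. A rough sketch, where the three-year doubling period is the only input and is a historical trend rather than a law:

```python
def projected_demand_mbps(current_need_mbps: float, years_ahead: float,
                          doubling_period_years: float = 3.0) -> float:
    """Project household bandwidth demand assuming it keeps doubling on the
    same cadence it has roughly followed since 1980."""
    return current_need_mbps * 2 ** (years_ahead / doubling_period_years)

# If 50 Mbps satisfies a typical household today...
for years in (3, 6, 9):
    print(years, "years out:", round(projected_demand_mbps(50, years)), "Mbps")
# -> 100 Mbps in 3 years, 200 Mbps in 6 years, 400 Mbps in 9 years
```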

It’s also worth noting that there are some households who need more than 50 Mbps because of the way they use the Internet. Households with multiple family members who all want to stream at the same time are the first to bump against the limitations of a data product. If ISPs never increase speeds above 50 Mbps, then every year more customers will bump against that ceiling and begin feeling frustrated with that speed. We have good evidence this is true from watching customers leave AT&T U-verse, capped at 50 Mbps, for faster cable modem broadband.

Another reason that cable companies have unilaterally increased speeds is to help overcome customer WiFi issues. Customers often don’t care about the speed in the room with the WiFi modem, but they do care about what they can receive in the living room or a bedroom that is several rooms away from the modem. Faster download speeds provide enough headroom that, even after the losses from pushing a WiFi signal through internal walls, the far rooms still see acceptable speeds. The big cable companies know that increasing speeds cuts down on customer calls complaining about speed issues. I’m pretty sure that the cable companies will say that increasing speeds saves them money due to fewer customer complaints.

Another important factor is customer perception. I always tell people that if they have the opportunity, they should try a computer connected to gigabit speeds. A gigabit product ‘feels’ faster, particularly if the gigabit connection is on fiber with low latency. Many of us are old enough to remember the day when we got our first 1 Mbps DSL or cable modem and got off dial-up. The increase in speed felt liberating, which makes sense because a 1 Mbps DSL line is roughly twenty times faster than dial-up and also has lower latency. A gigabit connection is twenty times faster than a 50 Mbps connection, and seeing it for the first time has that same wow factor – things appear on the screen almost instantaneously as you hit enter. The human eye is really discerning, and it can see a big difference between loading the same web site at 25 Mbps and at 1 Gbps. The actual time difference isn’t very much, but the eye tells the brain that it is. I think the cable companies have figured this out – why not give faster speeds if it doesn’t cost anything and makes customers happy?
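That perception point is easy to quantify. Assuming a hypothetical 3 MB web page (a made-up but plausible size) and ignoring latency and rendering time, the raw transfer-time difference between 25 Mbps and a gigabit is under a second:

```python
def transfer_time_seconds(size_megabytes: float, speed_mbps: float) -> float:
    """Raw transfer time for a file, ignoring latency, overhead and rendering."""
    size_megabits = size_megabytes * 8
    return size_megabits / speed_mbps

page_mb = 3.0   # hypothetical page size
print(round(transfer_time_seconds(page_mb, 25), 2))     # ~0.96 seconds at 25 Mbps
print(round(transfer_time_seconds(page_mb, 1000), 3))   # ~0.024 seconds at 1 Gbps
```

Less than a second of difference on paper, yet as described above the gigabit connection feels dramatically faster.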

While customers might not immediately use more broadband, I think increasing the speed invites them to do so over time. I’ve talked to a lot of people who have lived with inadequate broadband connections and they become adept at limiting their usage, just like we’ve all done for many years with cellular data usage. Rural families all know exactly what they can and can’t do on their broadband connection. For example, if they can’t stream video and do schoolwork at the same time, they change their behavior to fit what’s available to them. Even non-rural homes learn to do this to a degree. If trying to stream multiple video streams causes problems, customers quickly learn not to do it.

Households with fast and reliable broadband don’t give a second thought to adding an additional broadband application. It’s not a problem to add a new broadband device or to install a video camera at the front door. It’s a bit of a chicken-and-egg question – do fast broadband speeds promote greater broadband usage, or does the desire to use more applications drive the desire for faster speeds? It’s hard to know anymore since so many homes have broadband speeds from cable companies or fiber providers that are set faster than what they need today.

OneWeb Launches Broadband Satellites

Earlier this month OneWeb launched six test satellites, the first of an eventual fleet intended to provide broadband. The six satellites were launched on a Soyuz launch vehicle from the Guiana Space Center in Kourou, French Guiana.

OneWeb was started by Greg Wyler of Virginia in 2012, originally under the name of WorldVu. Since then the company has picked up heavy-hitter investors like Virgin, Airbus, SoftBank and Qualcomm. The company’s plan is to launch an initial constellation of 650 satellites that will blanket the earth, with ultimate deployment of 1,980 satellites. The plan is to deploy thirty of the sixty-five-pound satellites with each launch, which means twenty-two successful launches are needed to deploy the initial constellation.

Due to the low-earth orbits of the satellites, at about 745 miles above earth, the OneWeb satellites will avoid the huge latency that is inherent in current satellite broadband providers like HughesNet, which uses satellites orbiting at 22,000 miles above the earth. The OneWeb specifications filed with the FCC talk about having latency in the same range as cable TV networks – 25-30 milliseconds. But where a few high-orbit satellites can see the whole earth, the big fleet of low-orbit satellites is needed just to be able to see everywhere.
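The latency difference is pure geometry. At 745 miles up, the round trip to and from the satellite covers a small fraction of the distance a geostationary signal travels, which drops the propagation floor from hundreds of milliseconds to the teens and makes the 25-30 ms claim plausible once processing time is added. A quick comparison:

```python
SPEED_OF_LIGHT_MILES_PER_SEC = 186_282

def round_trip_propagation_ms(orbit_miles: float) -> float:
    """Round-trip propagation floor: the request and the response each go up to
    the satellite and back down, so the signal covers four times the altitude."""
    return 4 * orbit_miles / SPEED_OF_LIGHT_MILES_PER_SEC * 1000

print(round(round_trip_propagation_ms(745)))     # OneWeb low orbit: ~16 ms
print(round(round_trip_propagation_ms(22_236)))  # geostationary orbit: ~477 ms
```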

The company is already behind schedule. It had originally promised coverage across Alaska by the end of 2019 and is now talking about customer demos sometime in 2020 with live broadband service in 2021. The timeline matters for a satellite company because the spectrum license from the FCC requires that they launch 50% of their satellites within six years and all of them within nine years. Right now, both OneWeb and Elon Musk’s SpaceX have fallen seriously behind the needed deployment timeline.

The company’s original goal was to bring low-latency satellite broadband to everybody in Alaska. While they are still talking about bringing broadband to those who don’t have it today, their new business plan is to sell directly to airlines and cruise ship lines and to sell wholesale to ISPs who will then market to the end user.

It will be interesting to see what kinds of speeds will really be delivered. The company talks today about a maximum speed of 500 Mbps. But I compare that number to the claim that 5G cellphones can work at 600 Mbps, as demonstrated last year by Sprint – it’s possible only in a perfect lab setting. The best analog to a satellite network is a wireless transmitter on a tower in a point-to-multipoint network. That transmitter is capable of making a relatively small number of big-bandwidth connections or many more low-bandwidth connections. The economic sweet spot will likely be to offer many connections at 50 – 100 Mbps rather than fewer connections at a higher speed.
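That tradeoff between a few fast connections and many slower ones is easy to see with rough numbers. The sketch below assumes a hypothetical beam (or tower sector) with 5 Gbps of usable capacity and a 4:1 oversubscription ratio; both figures are invented for illustration, since OneWeb has not published per-beam capacity.

```python
def subscribers_per_beam(beam_capacity_mbps: float, plan_speed_mbps: float,
                         oversubscription: float = 4.0) -> int:
    """How many subscribers one beam or sector can carry at a given plan speed,
    assuming a typical ISP oversubscription ratio. All inputs are assumptions."""
    return int(beam_capacity_mbps * oversubscription / plan_speed_mbps)

beam_mbps = 5_000   # hypothetical usable capacity per beam
for plan in (50, 100, 500):
    print(plan, "Mbps plan ->", subscribers_per_beam(beam_mbps, plan), "subscribers")
# -> 400 subscribers at 50 Mbps, 200 at 100 Mbps, only 40 at 500 Mbps
```

Ten times as many households can be served at 50 Mbps as at 500 Mbps from the same capacity, which is the economics behind the sweet spot argument.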

It’s an interesting business model. The upfront cost of manufacturing and launching the satellites is high, and it’s likely that a few launches will go awry and destroy satellites. But other than replacing satellites that go bad over time, the maintenance costs are low. The real issue will be the bandwidth that can be delivered. Speeds of 50 – 100 Mbps will be welcomed in the rural US by those with no better option. But as with all lower-bandwidth technologies, adequate broadband that feels okay today will feel a lot slower in a decade as household bandwidth demand continues to grow. The best long-term market for the satellite providers will be those places on the planet that are not likely to have a landline alternative – which is why they first targeted rural Alaska.

Assuming that the low-earth satellites deliver as promised, they will become part of the broadband landscape in a few years. It’s going to be interesting to see how they play in the rural US and around the world.

Verizon’s Case for 5G, Part 3

Ronan Dunne, an EVP and President of Verizon Wireless, recently made Verizon’s case for aggressively pursuing 5G. In this blog I want to examine two of the claims that rest on improved latency – gaming and stock trading.

The 5G specification sets a goal of essentially zero latency for the connection from the wireless device to the cellular tower. We’ll have to wait to see if that can be achieved, but obviously the many engineers who worked on the 5G specification think it’s possible. It makes sense from a physics perspective – a radio signal through the air travels for all practical purposes at the speed of light (there is a minuscule amount of slowing from interaction with air molecules). This makes a signal through the air slightly faster than one through fiber, since light slows to roughly two-thirds of its free-space speed inside glass – a signal takes about 0.8 milliseconds to traverse a hundred miles of fiber optic cable versus a little over 0.5 milliseconds through the air.
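A quick back-of-the-envelope comparison of those propagation times; the refractive index of roughly 1.47 is a typical value for single-mode fiber, not a figure from Verizon:

```python
SPEED_OF_LIGHT_KM_PER_SEC = 299_792
FIBER_REFRACTIVE_INDEX = 1.47      # typical for single-mode fiber
MILES_TO_KM = 1.60934

def one_way_delay_ms(distance_miles: float, refractive_index: float = 1.0) -> float:
    """One-way propagation delay over a distance; an index of 1.0 approximates air."""
    distance_km = distance_miles * MILES_TO_KM
    return distance_km * refractive_index / SPEED_OF_LIGHT_KM_PER_SEC * 1000

print(round(one_way_delay_ms(100), 2))                           # through air: ~0.54 ms
print(round(one_way_delay_ms(100, FIBER_REFRACTIVE_INDEX), 2))   # through fiber: ~0.79 ms
```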

This means that a 5G signal will have a slight latency advantage over FTTP – but only for the first leg of the connection from a customer. A 5G wireless signal almost immediately hits a fiber network at a tower or small cell site in a neighborhood, and from that point forward the 5G signal experiences the same latency as an all-fiber connection.

Most of the latency in a fiber network comes from devices that process the data – routers, switches and repeaters. Each such device in a network adds some delay to the signal – and that starts with the first device, be it a cellphone or a computer. In practical terms, when comparing 5G and FTTP the network with the fewest hops and fewest devices between a customer and the internet will have the lowest latency – a 5G network might or might not be faster than an FTTP network in the same neighborhood.

5G does have a latency advantage over non-fiber technologies, but it ought to be about the same advantage enjoyed by an FTTP network. Most FTTP networks have latency in the 10-millisecond range (one hundredth of a second). Cable HFC networks have latency in the range of 25-30 ms; DSL latency ranges from 40-70 ms; satellite broadband connections run from 100-500 ms or more.

Verizon’s claim for improving the gaming or stock trading connection also implies that the 5G network will have superior overall performance. That brings in another factor, which we generally call jitter. Jitter is the variation in latency caused by congestion and other interference in a network. Any network can have high or low jitter depending upon the amount of traffic the operator is trying to shove through it. A network that is oversubscribed with too many end users will have higher jitter and will slow down – this is true for all technologies. I’ve had clients with first-generation BPON fiber networks that had huge amounts of jitter before they upgraded to newer FTTP technology, so fiber (or 5G) alone doesn’t mean superior performance.
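Jitter is usually reported as the variation in latency between successive packets. A minimal way to see the difference between a clean connection and a congested one (the ping samples below are invented, not measurements):

```python
from statistics import mean

def summarize_latency(rtt_ms):
    """Summarize a series of round-trip-time samples: average latency plus
    jitter, measured here as the average change between consecutive samples."""
    deltas = [abs(b - a) for a, b in zip(rtt_ms, rtt_ms[1:])]
    return {
        "avg_latency_ms": round(mean(rtt_ms), 1),
        "jitter_ms": round(mean(deltas), 1),   # inter-packet variation
        "worst_case_ms": max(rtt_ms),
    }

# A lightly loaded network vs. the same network under congestion (made-up samples).
print(summarize_latency([11, 12, 11, 13, 12, 11]))   # low latency, ~1 ms of jitter
print(summarize_latency([11, 45, 14, 90, 22, 130]))  # same technology, heavy jitter
```

Two networks built with the same technology can produce those two very different results, which is the point about oversubscription above.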

The bottom line is that a 5G network might or might not have an overall advantage compared to a fiber network in the same neighborhood. The 5G network might have a slight advantage on the first connection from the end user, but that also assumes that cellphones are more efficient than PCs. From that point forward, the network with the fewest hops to the Internet as well as the least amount of congestion will be faster – and that will be determined case by case, neighborhood by neighborhood, when comparing 5G and FTTP.

Verizon is claiming that the improved latency will improve gaming and stock trading. That’s certainly true where 5G competes against a cable company network. But any trader who really cares about making a trade a millisecond faster is already going to be on a fiber connection, and probably one that sits close to a major internet POP. Such traders are engaging in computerized trading where a person is not intervening in the trade decision. For any stock trades that involve humans, an extra few thousandths of a second in executing a trade is irrelevant since the human decision process is far slower than that (for someone like me these decisions can be measured in weeks!).

Gaming is more interesting. I see Verizon’s advantage for gaming in making game devices mobile. If 5G broadband is affordable (not a given), then a 5G connection allows a game box to be used anywhere there is power. I think that will be a huge hit with the mostly-younger gaming community. And since most homes buy broadband from the cable company, the lower latency of 5G ought to appeal to a gamer now using a cable network, assuming the 5G network has adequate upload speeds and low jitter. Gamers who want a fiber-like experience will likely pony up for a 5G gaming connection if it’s priced right.

Standards for 5G

Despite all of the hype that 5G is right around the corner, it’s important to remember that there is not yet a complete standard for the new technology.

The industry just took a big step on February 22 when the ITU released a draft of what it hopes is the final specification for 5G. The document is heavy on engineering detail and is not written for the layman. You will see that the draft talks about a specification for ‘IMT-2020’, which is the official name of 5G. The goal is for this draft to be accepted at a meeting of the ITU-R Study Group in November.

This latest version of the standard defines 13 metrics that are the ultimate goals for 5G. A full 5G deployment would include all of these metrics. What we will actually see are commercial deployments from vendors claiming to have 5G but meeting only some parts of a few of these metrics. We saw this before with 4G, where the recently deployed LTE-Advanced is the first 4G product that actually meets most of the original 4G standard. We probably won’t see a cellular deployment that meets any of the 13 5G metrics until at least 2020, and it might be five to seven more years after that until fully compliant 5G cellular is deployed.

The metric that is probably the most interesting is the one that establishes the goal for cellular speeds. The goals of the standard are 100 Mbps download and 50 Mbps upload. Hopefully this puts to bed the exaggerated press articles that keep talking about gigabit cellphones. And even should the technology meet these target speeds, in real-life deployments the average user is probably only going to receive half those speeds, because cellular speeds decrease rapidly with distance from a cell tower. Somebody standing right next to a cell tower might get 100 Mbps, but even as close as a mile away the speeds will be considerably less.

Interestingly, these speed goals are not much faster than what LTE-Advanced delivers today. But the new 5G standard should provide for more stable and guaranteed data connections. The standard calls for a 5G cell site to be able to connect to up to 1 million devices per square kilometer (a little more than a third of a square mile). This, plus several other metrics, ought to result in stable 5G cellular connections – which is quite different from what we are used to with 4G connections. The real goal of the 5G standard is to provide connections to piles of IoT devices.

The other big improvement over 4G is the expectation for latency. Today’s 4G connections have data latencies as high as 20 ms, which accounts for many of the problems in loading web pages or watching video on cellphones. The new standard is 4 ms latency, which would improve cellular latency to around the same level that we see today on fiber connections. The new 5G standard for handing off calls between adjoining cell sites is 0 ms – zero delay.

The standard also increases the potential capacity of cell sites, setting a goal for a cell site to process peak data rates of 20 Gbps down and 10 Gbps up. Of course, that means bringing a lot more bandwidth to cell towers, and only extremely busy urban towers will ever need that much capacity. Today the majority of fiber-fed cell towers are fed with 1 Gbps backbones that are used to satisfy upload and download combined. We are seeing cellular carriers inquiring about 10 Gbps backbones, and we need a lot more growth to meet the capacity built into the standard.
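To put the backhaul gap in rough perspective, treating the 20 Gbps down and 10 Gbps up goals as a combined peak that only the busiest urban sites would ever approach:

```python
# Figures restate the discussion above; the 'typical' feeds are generalizations,
# not measurements of any specific carrier's towers.
peak_5g_gbps = 20 + 10          # peak downstream plus upstream a 5G site could process
typical_feed_today_gbps = 1     # common fiber backhaul to a tower today
upgraded_feed_gbps = 10         # the backhaul carriers are starting to ask about

print(peak_5g_gbps / typical_feed_today_gbps)   # 30x today's typical backhaul
print(peak_5g_gbps / upgraded_feed_gbps)        # still 3x a 10 Gbps feed
```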

There are a number of other metrics. Included is a standard requiring greater energy efficiency, which ought to help save on handset batteries – the new standard allows handsets to go to ‘sleep’ when not in use. There is a standard for peak spectral efficiency that would enable 5G to make much better use of existing spectrum. There are also specifications for mobility that extend the goal to work with vehicles going as fast as 500 kilometers per hour – meaning high-speed trains.

Altogether the 5G standard improves almost every aspect of cellular technology. It calls for more robust cell sites, improved quality of the data connections to devices, lower energy requirements and more efficient hand-offs. But interestingly, contrary to the industry hype, it does not call for gigantic increases in cellular handset data speeds compared to a fully-compliant 4G network. The real improvements from 5G are to make sure that people can get connections at busy cell sites while also providing for huge numbers of connections to smart cars and IoT devices. A 5G connection is going to feel faster because you ought to almost always be able to make a 5G connection, even in busy locations, and because the connection will have low latency and be stable, even in moving vehicles. It will be a noticeable improvement.

Latency and Broadband Performance

The industry always talks about latency as one of the two measures (along with download speed) that define a good broadband connection. I thought today I’d talk about latency.

As a reference, the standard definition of latency is that it’s a measure of the time it takes for a data packet to travel from its point of origin to the point of destination.

There are a lot of underlying causes for the delays that increase latency – the following are the primary kinds of delays:

  • Transmission Delay. This is the time required to push packets out the door at the originating end of a transmission. This is mostly a function of the kind of router and software used at the originating server. This can also be influenced by packet length, and it generally takes longer to create long packets than it does to create multiple short ones. These delays are caused by the originator of an Internet transmission.
  • Processing Delay. This is the time required to process a packet header, check for bit-level errors and to figure out where the packet is to be sent. These delays are caused by the ISP of the originating party. There are additional processing delays along the way every time a transmission has to ‘hop’ between ISPs or networks.
  • Propagation Delay. This is the delay due to the distance a signal travels. It takes a lot longer for a signal to travel from Tokyo to Baltimore than it takes to travel from Washington DC to Baltimore. This is why speed tests try to find a nearby router to ping so that they can eliminate latency due to distance. These delays are mostly a function of physics and the speed at which signals can be carried through cables.
  • Queueing Delay. This measures the amount of time that a packet waits at the terminating end to be processed. This is a function of both the terminating ISP and also of the customer’s computer and software.

Total latency is the combination of all of these delays. You can see by looking at these simple definitions that poor latency can be introduced at multiple points along an Internet transmission, from beginning to end.
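Putting those pieces together, end-to-end latency is simply the sum of the four delay components at every hop along the path. A simplified sketch, with invented numbers for a short three-hop path:

```python
def total_latency_ms(hops):
    """Sum the four delay components described above across every hop in a path.
    Each hop is a dict of transmission, processing, propagation and queueing
    delays in milliseconds (all values here are illustrative, not measured)."""
    return sum(
        hop["transmission"] + hop["processing"] + hop["propagation"] + hop["queueing"]
        for hop in hops
    )

path = [
    {"transmission": 0.5, "processing": 0.2, "propagation": 0.1, "queueing": 0.3},  # home -> ISP node
    {"transmission": 0.1, "processing": 0.3, "propagation": 4.0, "queueing": 0.5},  # ISP node -> regional POP
    {"transmission": 0.1, "processing": 0.3, "propagation": 8.0, "queueing": 1.0},  # POP -> content server
]
print(round(total_latency_ms(path), 1), "ms one way")   # -> 15.4 ms one way
```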

The technology of the last mile is generally the largest factor influencing latency. A few years ago the FCC did a study of the various last mile technologies and measured the following ranges of performance of last-mile latency, measured in milliseconds: fiber (10-20 ms), coaxial cable (15-40 ms), and DSL (30-65 ms). These are measures of latency between a home and the first node in the ISP network. It is these latency differences that cause people to prefer fiber. The experience on a 30 Mbps download fiber connection “feels” faster than the same speed on a DSL or cable network connection due to the reduced latency.

It is the technology latency that makes wireless connections seem slow. Cellular latencies vary widely depending upon the exact generation of equipment at any given cell site. But 4G latency can be as high as 100 ms. In the same FCC test that produced the latencies shown above, satellite was almost off the chart with latencies measured as high as 650 ms.

The next biggest factor influencing latency is the network path between the originating and terminating end of a signal. Every time that a signal hits a network node the new router must examine the packet header to determine the route and may run other checks on the data. The delays from hitting network routers or from changing networks are referred to in the industry as hops, and each hop adds latency.

There are techniques and routing schemes that can reduce the latency that comes from extra hops. For example, most large ISPs peer with each other, meaning they pass traffic between them and avoid the open Internet. By doing so they reduce the number of hops needed to pass a signal between their networks. Companies like Netflix also use caching where they will store content closer to users so that the signal isn’t originating from their core servers.

Internet speeds also come into play. The transmission delay is heavily influenced by the upload speed at the originating end of a transmission, and the queueing delay is influenced by the download speed at the terminating end. This is illustrated with a simple example: a 10 Mb (megabit) file takes one-tenth of a second to download on a 100 Mbps connection and ten seconds on a 1 Mbps connection.

A lot of complaints about Internet performance are actually due to latency issues. It’s something that’s hard to diagnose since latency issues can appear and reappear as Internet traffic between two points uses different routing. But the one thing that is clear is that the lower the latency the better.

My Thoughts on AT&T AirGig

By now most of you have seen AT&T’s announcement of a new wireless technology they are calling AirGig. This is a technology that can bounce millimeter wave signals along a series of inexpensive plastic antennas perched at the top of utility poles.

The press release is unclear about the speeds that might be delivered from the technology. The press release says it has the potential to deliver multi-gigabit speeds. But at the same time it talks about delivering 4G cellular as well as 5G cellular and fixed broadband. The 4G LTE cellular standard can deliver about 15 Mbps while the 5G cellular standard (which is still being developed) is expected to eventually increase cellular speeds to about 50 Mbps. So perhaps AT&T plans to use the technology to deploy micro cell sites while also being able to deliver millimeter wave wireless broadband loops. The link above includes a short video which doesn’t clarify this issue very well.

Like any new radio technology, there are bound to be a number of issues involved with moving the technology from the lab to the field. I can only speculate at this point, but I can foresee the following as potential issues with the millimeter wave part of the technology:

  • The video implies that the antennas will be used to deliver bandwidth as a broadcast hotspot. I’m not entirely sure that the FCC will even approve this spectrum being used in this manner. High-powered transmitters at the top of poles could be a safety concern for linemen climbing them and could create all sorts of havoc by interfering with other nearby wireless receivers.
  • Millimeter wave spectrum does not travel very far when used as a hot spot. This spectrum has high atmospheric attenuation and is absorbed by gases in the atmosphere. When focused in a point-to-point link the spectrum can work well to about half a mile, but in hot spot mode it’s good, at best, for a few hundred feet and loses bandwidth quickly with distance. The bandwidth is only going to reach homes that are close to the pole lines.
  • Millimeter wave spectrum suffers from rain fade, and during a rainstorm much of the signal is scattered.
  • The spectrum doesn’t penetrate foliage, or much of anything else, so there is going to have to be a clear path between the pole unit and the user. America is a land of residential trees, and even in the open plains people plant trees close around their houses as windbreaks.
  • The millimeter wave spectrum won’t penetrate walls, so this will require some sort of outdoor receiver to catch millimeter wave signals.
  • I wonder how the units will handle icing. Where cables tend to shake ice off within a few days, hardware mounted on poles can be ice-covered for months.
  • The technology seems to depend on using multiple wireless hops to go from unit to unit. Wireless hops always introduce latency into the signal and it will be interesting to see how much latency is introduced along rural pole runs.
  • For any wireless network to deliver fast speeds it has to be connected somewhere to fiber backhaul. There are still many rural counties with little or no fiber.

We have always seen that every wireless technology has practical limitations that make it suitable for some situations and not others. This technology will be no different. In places where this can work it might be an incredible new broadband solution. But there are bound to be situations where the technology will have too many problems to be practical.

I’ve seen speculation that one of the major reasons for this press release is to give pause to anybody thinking of building fiber. After all, why should anybody build fiber if there is cheap multi-gigabit wireless coming to every utility pole? But with all of the possible limitations mentioned above (and others that are bound to pop up in the real world), this technology may only work in some places, or it might not work well at all. This could be the technology we have all been waiting for or it could be a flop. I guess we’ll have to wait and see.

Speed Tests

Netflix just came out with a new speed test at fast.com which is intended to measure the download speed of Internet connections to determine if they are good enough to stream Netflix. The test only measures the speeds between a user and the Netflix servers. This is different from most other speed tests on the web, which also look at upload speeds and latency.

This raises the question of how good speed tests are in general. How accurate are they and what do they really tell a user? There are a number of different speed tests to be found on the web. Over the years I have used the ones at speedtest.net (Ookla), dslreports.com, speed.io, the BandWidthPlace and TestMySpeed.

Probably the first thing to understand about speed tests is that they are only testing the connection between the user and the test site’s servers and are not necessarily indicative of the speeds for other web activities like downloading files, making a VoIP phone call or streaming Netflix. Each of those activities involves a different traffic pattern, and the speed test might not accurately report what a user most wants to know.

Every speed test uses a different algorithm to measure speed. For example, the algorithm for speedtest.net operated by Ookla discards the fastest 10% and the slowest 30% of the results obtained. In doing so they might be masking exactly what drove someone to take the speed test, such as not being able to hold a connection to a VoIP call. Ookla also multithreads, meaning that they open multiple paths between a user and the test site and then average the results together. This could easily mask congestion problems a user might be having with the local network.
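To see why the methodology matters, here is a rough sketch of that style of aggregation: several parallel threads each report a throughput sample, the slowest and fastest samples are discarded using the 30%/10% trimming described above, and the remainder are averaged. The thread samples are invented, but they show how one badly stalled thread barely moves the reported number.

```python
def trimmed_multithread_speed(samples_mbps, drop_top=0.10, drop_bottom=0.30):
    """Aggregate per-thread throughput samples as described above: discard the
    fastest 10% and slowest 30% of samples, then average what's left."""
    ordered = sorted(samples_mbps)
    lo = int(len(ordered) * drop_bottom)
    hi = len(ordered) - int(len(ordered) * drop_top)
    kept = ordered[lo:hi]
    return sum(kept) / len(kept)

# Ten hypothetical thread samples; one thread stalled badly at 8 Mbps.
samples = [96, 101, 99, 8, 97, 102, 95, 100, 98, 103]
print(round(trimmed_multithread_speed(samples), 1))   # ~99.5 Mbps reported
print(round(sum(samples) / len(samples), 1))          # ~89.9 Mbps simple average
```

A user whose real problem is an intermittent stall could run this kind of test and be told everything looks fine.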

Another big problem with any speed test is that it measures the connection between a customer device and the speed test site. This means that customer-side parts of the network, like the home WiFi network, are included in the results. A lot of ISPs I know claim that poor in-home WiFi accounts for the majority of the speed problems reported by customers. So a slow speed test doesn’t always mean that the ISP has a slow connection.

The speed of an Internet connection for any prolonged task changes from second to second. Some of the speed tests, like the Netflix and Ookla tests, show these fluctuations during the test. There are numerous reasons for the changing speeds, largely having to do with network congestion at various points in the network. If one of your neighbors makes a big download demand during your speed test, you are likely to see a dip in bandwidth. And this same network contention can happen at any one of numerous different parts of the network.

The bottom line is that speed tests are not much more than an indicator of how your network is performing. If you test your speed regularly then a slow speed test result can be an indicator that something is wrong. But if you only check it once in a while, then any one speed test only tells you about the minute that you took the test and not a whole lot more. It’s not yet time to call your ISP after a single speed test.

There have been rumors around the industry that the big ISPs fudge on the common speed tests. It would be relatively easy for them to do this by giving priority routing to anybody using one of the speed test web sites. I have no idea if they do this, but it would help to explain those times when a speed test tells me I have a fast connection and low latency and yet can’t seem to get things to work.

I think the whole purpose of the Netflix speed test is to put pressure on ISPs that can’t deliver a Netflix-capable connection. I don’t know how much good that will do because such connections are likely going to be on old DSL and other technologies where the ISP already knows the speeds are slow.